noonedeadpunk | mornings | 06:27 |
---|---|---|
damiandabrowski | hi! | 07:47 |
noonedeadpunk | just in case - this keystone "trim" thing has been backported even to Zed: https://review.opendev.org/c/openstack/keystone/+/890417 | 10:03 |
noonedeadpunk | patch with a fix "should work" https://review.opendev.org/c/openstack/keystone/+/890936 | 10:04 |
noonedeadpunk | once it lands, we likely don't need https://review.opendev.org/c/openstack/openstack-ansible/+/889781 | 10:05 |
noonedeadpunk | as we're doing nothing wrong - the password limit in bcrypt is 72 symbols, not the 54 that was used in the keystone patch | 10:05 |
damiandabrowski | nice work noonedeadpunk | 10:15 |
damiandabrowski | btw. i'm curious, why did you put '96' here? :D | 10:15 |
damiandabrowski | https://review.opendev.org/c/openstack/keystone/+/890936 | 10:15 |
noonedeadpunk | just a random number. The test in question is expected to raise an exception | 10:16 |
noonedeadpunk | (a random number that is higher than 72) | 10:16 |
damiandabrowski | ack, thanks | 10:17 |
damiandabrowski | i got confused because previously it was 64 for both max_password_length and invalid_length_password | 10:17 |
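A minimal sketch of the 72-byte behaviour discussed above, using passlib (the library keystone's password hashing is built on); the test values here are illustrative, not taken from the patch:

```python
# Plain bcrypt silently truncates secrets to 72 bytes (passlib's
# default behaviour; requires the "passlib" and "bcrypt" packages).
from passlib.hash import bcrypt

hashed = bcrypt.hash("x" * 80)  # longer than BCRYPT_MAX_LENGTH (72)

# Only the first 72 bytes matter: the truncated prefix still verifies,
# while one byte fewer does not.
assert bcrypt.verify("x" * 72, hashed)
assert not bcrypt.verify("x" * 71, hashed)
```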
noonedeadpunk | I've also looked into support of bcache_sha256, which does not have a limitation on password length | 10:17 |
noonedeadpunk | https://review.opendev.org/c/openstack/keystone/+/891024 | 10:17 |
noonedeadpunk | that actually could be a good replacement for just bcrypt | 10:18 |
noonedeadpunk | *bcrypt_sha256 | 10:18 |
noonedeadpunk | damiandabrowski: well, max_password_length does not apply to bcrypt anyway if the password is longer than BCRYPT_MAX_LENGTH... | 10:21 |
noonedeadpunk | But yeah, might be that was an idea actually | 10:21 |
noonedeadpunk | good point actually | 10:22 |
noonedeadpunk | I wasn't thinking from that angle | 10:22 |
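And a companion sketch for the bcrypt_sha256 variant from the review above: it pre-hashes the secret with SHA-256 before handing it to bcrypt, which is why the length limitation goes away:

```python
# bcrypt_sha256 pre-hashes the password before the bcrypt step,
# so the full secret always contributes - no 72-byte truncation.
from passlib.hash import bcrypt_sha256

hashed = bcrypt_sha256.hash("x" * 80)

assert bcrypt_sha256.verify("x" * 80, hashed)
# the 72-byte prefix no longer verifies on its own:
assert not bcrypt_sha256.verify("x" * 72, hashed)
```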
Vito | Hello Everybody ! In a openstack ansible environment, after adding a new compute host, It appears in "openstack compute service list" but not in "openstack hypervisor list " what could be wrong ? Thanks for your help | 13:19 |
noonedeadpunk | Vito: hey! and does the service appear as UP in compute service list? | 13:22 |
noonedeadpunk | Usually, when a service is not in the hypervisor list, nova-compute is down | 13:23 |
Vito | But it is shown Up in "compute service list", so this is weird | 13:25 |
noonedeadpunk | doesn't this compute host accidentally have the same hostname as some other old decommissioned one? | 13:28 |
noonedeadpunk | As there could be a conflict while registering the new compute, where its uuid does not match the hostname of the previous one in placement | 13:29 |
noonedeadpunk | but then you should see a quite explicit error about that in the nova-compute logs | 13:30 |
noonedeadpunk | so it's worth checking them regardless | 13:30 |
Vito | Nope, it's another hostname, but I have a weird network setup: this new compute node is in another network (192.168.x.x), and I created a route so it can reach all IPs of the first node's network (172.16.x.x), so maybe it's an unusual setup I have done here | 13:32 |
Vito | ok, now the new compute appears down, and I have this error in nova-compute: "Compute nodes ['93d12d39-9ac2-463a-9bff-6fc02b5f9659'] for host vmfarm12 were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning." | 13:34 |
noonedeadpunk | well, that log message is fine for the first service startup | 13:36 |
noonedeadpunk | so nova-compute must be able to communicate with nova-conductor through rabbitmq | 13:36 |
noonedeadpunk | and with other computes via various protos for migrations | 13:36 |
noonedeadpunk | but hypervisors and service list are reported and recorded kinda independently iirc. | 13:41 |
noonedeadpunk | as they can also have different hostnames (fqdn vs short one) | 13:42 |
Vito | yes, rabbitmq is reachable from the nova-compute node (i tried telnet with ip + port) | 13:49 |
Vito | @noonedeadpunk the issue was the ceph storage, which was not reachable from my new compute host; now the hypervisor is in the list | 15:01 |
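For anyone hitting the same symptom, a rough sketch of the cross-check above via openstacksdk (the cloud name is a placeholder, and note the fqdn-vs-short-name caveat mentioned earlier):

```python
# Sketch: list nova-compute services that are UP but have no matching
# hypervisor record ("mycloud" is a placeholder clouds.yaml entry).
import openstack

conn = openstack.connect(cloud="mycloud")

# hypervisor names can be FQDNs while service hosts are short names,
# so this naive comparison may need adjusting per deployment
hypervisors = {hv.name for hv in conn.compute.hypervisors()}

for svc in conn.compute.services():
    if svc.binary == "nova-compute" and svc.state == "up":
        if svc.host not in hypervisors:
            print(f"{svc.host} is up but missing from the hypervisor list")
```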
tuggernuts1 | Hey y'all, working on getting an Antelope deployment up, and I'm having some trouble with my neutron deployment. I think I have the api deploying on my infra nodes correctly now, but for whatever reason northd is not honoring my is_metal. I'm putting this: https://pastebin.com/ucLnctZn in env.d but I'm pretty sure I have a bug here. | 19:25 |
tuggernuts1 | also https://pastebin.com/AabdKgqf | 19:26 |
mgariepy | your metal thing, i think it's a yaml issue. https://github.com/openstack/openstack-ansible/blob/master/inventory/env.d/neutron.yml#L88-L95 | 19:38 |
mgariepy | wrong indentation or something, or you pasted it wrong in pastebin | 19:38 |
tuggernuts1 | k I figured it was probably wrong | 19:41 |
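To rule out the indentation theory, here's a rough pre-flight check of the override file; the path and the container_skel key name are guesses modelled on the upstream env.d/neutron.yml linked above, not the actual paste:

```python
# Sketch: confirm the env.d override parses and actually sets is_metal
# (PyYAML; path and key names are assumptions, adjust to your paste).
import yaml

path = "/etc/openstack_deploy/env.d/neutron.yml"
with open(path) as f:
    env = yaml.safe_load(f)

skel = env.get("container_skel", {}).get("neutron_ovn_northd_container", {})
is_metal = skel.get("properties", {}).get("is_metal")
print(f"{path}: is_metal={is_metal}")  # wrong indentation shows up as None
```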
tuggernuts1 | I don't get any containers anymore, it looks like, but I'm still not getting anything under network-northd_hosts in my inventory | 19:49 |
tuggernuts1 | I really don't understand what I'm missing here lol | 19:49 |
tuggernuts1 | how can I get this northd deployment not in a container and have all my infra nodes be where it gets deployed? | 19:50 |
mgariepy | i do have a network-northd_hosts group in my openstack_user_config.yml | 19:52 |
mgariepy | do you have that ? | 19:52 |
mgariepy | did you paste your config somewhere? | 19:52 |
tuggernuts1 | one moment, I'm getting rate limited by pastebin; I can get you all that | 19:52 |
tuggernuts1 | I do have it in my user_config yes | 19:52 |
mgariepy | https://paste.openstack.org/ | 19:53 |
tuggernuts1 | https://paste.openstack.org/show/bRsAtanf6TRXDSh3cCm8/ | 19:54 |
tuggernuts1 | oic you said network-northd_hosts and I have neutron_ovn_northd | 19:55 |
mgariepy | https://github.com/openstack/openstack-ansible/blob/master/inventory/env.d/neutron.yml#L130 | 19:56 |
mgariepy | try with it. | 19:57 |
tuggernuts1 | thanks for the help it's running atm | 19:59 |
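The group-name mixup above is easy to catch the same way; a tiny sketch that checks openstack_user_config.yml for the group the env.d skeleton expects (standard OSA deploy path assumed):

```python
# Sketch: verify the host group name matches what env.d/neutron.yml
# expects (network-northd_hosts, per the GitHub link above).
import yaml

with open("/etc/openstack_deploy/openstack_user_config.yml") as f:
    user_config = yaml.safe_load(f)

for group in ("network-northd_hosts", "neutron_ovn_northd"):
    print(group, "->", "present" if group in user_config else "absent")
```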
tuggernuts1 | ok it's timing out on me but it seems it's still trying to talk to a container | 20:01 |
tuggernuts1 | https://paste.openstack.org/show/bl3fmKTAI0cKBiMWAqdN/ | 20:01 |
mgariepy | progress. | 20:02 |
mgariepy | the is_metal stuff needs to be updated now. | 20:02 |
mgariepy | maybe your inventory.json file will need to be modified a bit. | 20:02 |
tuggernuts1 | I deleted it and ran it again | 20:03 |
mgariepy | ouch. | 20:03 |
tuggernuts1 | it's all good I don't really need it | 20:03 |
tuggernuts1 | https://paste.openstack.org/show/bjVWQBIXzUg9sPe0D9ws/ | 20:03 |
tuggernuts1 | that's the task I'm running atm to see how the groups are getting built | 20:03 |
tuggernuts1 | I just blew away this environment so nothing is actually deployed yet | 20:04 |
mgariepy | on this i got to go. | 20:04 |
mgariepy | ok. | 20:04 |
mgariepy | maybe the is_metal stuff is not quite right and needs some fix. i don't really know. | 20:05 |
mgariepy | i prefer to run my services inside LXC :D | 20:05 |
tuggernuts1 | I would if I could; unfortunately, I won't ever get any working networks if I do that | 20:05 |
tuggernuts1 | oh snap! @mgariepy https://paste.openstack.org/show/bUHW8OCzCTJeXJ5RNjSl/ | 20:16 |
tuggernuts1 | adding that I _think_ worked! | 20:16 |
mgariepy | woohoo. | 20:44 |
tuggernuts1 | running a full deploy atm so hopefully it finishes | 20:45 |
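And for the record, a rough way to confirm the regenerated inventory actually picked up the metal placement (the key layout of openstack_inventory.json here is an assumption):

```python
# Sketch: inspect OSA's generated inventory for northd hosts' is_metal
# flag (key paths in openstack_inventory.json are assumptions).
import json

with open("/etc/openstack_deploy/openstack_inventory.json") as f:
    inventory = json.load(f)

for name, hostvars in inventory["_meta"]["hostvars"].items():
    if "northd" in name:
        print(name, "is_metal =", hostvars.get("properties", {}).get("is_metal"))
```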