*** queria is now known as Guest4236 | 02:31 | |
*** rlandy is now known as rlandy|ruck | 10:31 | |
tty0 | i'm trying to enable neutron with DNS integration as documented here: https://docs.openstack.org/neutron/wallaby/admin/config-dns-int.html | 10:58 |
tty0 | any hint why the dns assignment is host-172-99-0-171.openstacklocal instead of my-port as i specify in --dns-name? | 10:59 |
tty0 | where should i start looking for misconfiguration and such? | 11:00 |
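Editor's note: for readers hitting the same symptom, the usual culprit is the DNS extension driver not being loaded, so the port's dns_name is never honoured and dns_assignment falls back to the default host-<ip>.openstacklocal. A minimal check, assuming a deployment following the linked guide (network and port names here are illustrative):

```sh
# neutron.conf: the published domain; the default "openstacklocal." is what
# shows up in dns_assignment when nothing else is configured
# [DEFAULT]
# dns_domain = example.org.

# ml2_conf.ini: a DNS extension driver must be loaded for --dns-name to stick
# [ml2]
# extension_drivers = port_security,dns_domain_ports

# restart neutron-server after changing either file, then recreate the port
openstack port create --network selfservice --dns-name my-port my-port

# verify that dns_name was stored and that dns_assignment picked it up
openstack port show my-port -c dns_name -c dns_assignment
```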
*** queria is now known as queria^afk | 11:13 | |
*** jonassavulionis is now known as bacarrdy | 13:00 | |
bacarrdy | Hello everyone, quick question. I have installed 2x haproxy with keepalived and configured a VIP, then installed keystone with OS_AUTH_URL=VIP. Keystone works fine on every request, but then i installed glance and pointed [keystone_authtoken] at the VIP as well (www_authenticate_uri=VIP and auth_url=VIP). All endpoints are also configured with the VIP. The problem is that when i request something | 13:14 |
bacarrdy | from glance i randomly get: HttpException: 401: Client Error ... Either you supplied the wrong credentials (e.g., bad password), or your browser does not understand how to supply the credentials required | 13:14 |
bacarrdy | randomly means ~2-5 requests answer that i don't have a token and 1 returns a successful answer. I have 3 controllers and all of them are configured in haproxy. | 13:15 |
bacarrdy | each controller runs glance | 13:17 |
frickler | bacarrdy: how did you deploy your controllers, manually or using some deployment tool? you need to make sure to use the same fernet keys for keystone on all 3 of them. https://docs.openstack.org/keystone/latest/admin/fernet-token-faq.html#how-should-i-approach-key-distribution | 13:20 |
bacarrdy | manually, dont want to use any tools | 13:21 |
bacarrdy | frickler: Helped, now i know that i need to rotate them. Will create a script for that. Thank you very much, saved a lot of time in debugging. | 13:35 |
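Editor's note: a minimal sketch of the rotate-and-distribute script frickler's link describes, assuming the default key repository path and illustrative controller hostnames (controller2, controller3); rotation must happen on one node only, with the result copied to the others:

```sh
#!/bin/sh
# Run on ONE controller only; rotating independently on each node is exactly
# what produces the intermittent 401s, because a token issued by one node
# cannot be validated by the others.
set -e

# create the next key and retire the oldest (default repo: /etc/keystone/fernet-keys)
keystone-manage fernet_rotate --keystone-user keystone --keystone-group keystone

# push the whole key repository to the other controllers so all three share it
for host in controller2 controller3; do
    rsync -a --delete /etc/keystone/fernet-keys/ "root@${host}:/etc/keystone/fernet-keys/"
done
```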
st | Hi there folks! Are heat templates idempotent? For example, say a resource created by a heat stack has been destroyed, can the heat stack be run again to reach the original state? I'm happy to read over any documentation on this matter. | 15:54 |
st | And I'm happy to go wherever I need to go to ask questions like this. :) | 15:55 |
fungi | st: there's a #heat irc channel which seems to get used for such discussions occasionally, or you can post to the openstack-discuss@lists.openstack.org mailing list and put [heat] at the beginning of your subject line (note that if you don't subscribe to that ml before sending, your message will be held in moderation until someone has a chance to make sure it's not spam and approve it) | 17:36 |
st | Thanks! | 17:37 |
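Editor's note: a plain stack update will not notice a resource that was deleted behind Heat's back; the usual pattern is to mark the resource unhealthy (or run a stack check) and then update, which recreates anything flagged as failed. A sketch with illustrative stack and resource names:

```sh
# tell Heat this resource no longer matches reality ...
openstack stack resource mark unhealthy my_stack my_server
# ... or let Heat probe every resource itself
openstack stack check my_stack

# the next update converges the stack back onto the template,
# recreating anything marked unhealthy / CHECK_FAILED
openstack stack update --existing my_stack
```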
*** sshnaidm is now known as sshnaidm|afk | 17:50 | |
-opendevstatus- NOTICE: mirror.bhs1.ovh.opendev.org filled its disk around 17:25 UTC. We have corrected this issue around 18:25 UTC and jobs that failed due to this mirror can be rechecked. | 18:43 | |
prometheanfire | is there no way to target a specific host when cold-migrating an instance? --host seems to also make it a live migration... | 21:09 |
*** rlandy|ruck is now known as rlandy|ruck|afk | 22:21 | |
kplant | https://blueprints.launchpad.net/nova/+spec/cold-migration-with-target-queens | 22:31 |
kplant | looks like it was added in q | 22:31 |
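Editor's note: the blueprint kplant linked landed in Queens as compute API microversion 2.56, which lets a cold migration take a target host. With a reasonably recent python-openstackclient, something like the following should stay a cold migration (host and instance names are illustrative):

```sh
# targeted cold migration needs compute API microversion >= 2.56 (Queens)
openstack --os-compute-api-version 2.56 server migrate --host dest-host01 my-instance

# the instance ends up in VERIFY_RESIZE; confirm (or revert) to finish
openstack server resize confirm my-instance
# (older clients use: openstack server resize --confirm my-instance)
```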