admin1 | \o | 09:01 |
noonedeadpunk | o/ | 09:38 |
damiandabrowski | hey guys, just fyi: i injured my shoulder on a snowboard yesterday. hopefully i will be able to get back to work in a week but we will see | 10:03 |
admin1 | damiandabrowski, happy netflix binging .. | 12:45 |
spatel | noonedeadpunk i got a discount code, i told them about the contribution and they said you qualify for a code | 13:14 |
noonedeadpunk | ah, sweet | 13:14 |
noonedeadpunk | good news then) | 13:14 |
spatel | yep!! | 13:26 |
spatel | next week i will book hotel and flight | 13:26 |
spatel | will see you there.. | 13:26 |
spatel | noonedeadpunk are you presenting ? | 13:26 |
noonedeadpunk | nope | 13:29 |
noonedeadpunk | My talk was not approved, but I think it's for the best: the event will be combined with the PTG, so I assume I'd have no time for it anyway | 13:29 |
noonedeadpunk | I hope that at least OSA onboarding will be approved... | 13:30 |
noonedeadpunk | jrosser: hm, I wonder how we are going to access facts from other hosts now, since they're not gonna be added to hostvars? | 14:14 |
noonedeadpunk | oh maybe it's already covered... | 14:24 |
jrosser | Isn't it something like hostvars[host].ansible_facts['thing'] | 14:31 |
jrosser | tbh I am really not sure about if what I did in the swift patch is right or not | 14:31 |
jrosser | the code is pretty odd | 14:31 |
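For illustration, here is a minimal sketch of the hostvars access jrosser describes; the host names and the fact key are placeholders rather than anything from this discussion, and it assumes facts for the other host are gathered in the same run (or come from a fact cache).

```shell
# Minimal sketch: reading another host's gathered facts through
# hostvars[...].ansible_facts, which works even when facts are not injected
# into hostvars as top-level variables. Host names and fact key are placeholders.
cat > /tmp/facts-demo.yml <<'EOF'
- hosts: compute01:infra01      # first gather facts for both hosts
  gather_facts: true

- hosts: infra01                # then read compute01's facts from another play
  gather_facts: false
  tasks:
    - debug:
        msg: "{{ hostvars['compute01'].ansible_facts['distribution'] }}"
EOF
ansible-playbook -i inventory /tmp/facts-demo.yml
```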
spatel | noonedeadpunk +1 my company told me to give a talk but i think i am not ready yet.. maybe next year. | 14:39 |
noonedeadpunk | spatel: it's already quite late to apply :D | 14:39 |
spatel | Yesss all booked | 14:39 |
spatel | next year i will try to do some showoff stuff about what i am doing in my cave | 14:40 |
spatel | We are building a new datacenter soon and trashing the older one. | 14:41 |
spatel | A developer started working on DPDK-based guest deployment, so hopefully we will see some light at the end of the tunnel. | 14:41 |
lowercase | jrosser: https://paste.opendev.org/show/bdT18m6F0reIlrEKG2pg/ | 15:03 |
lowercase | in case anyone ever reports this issue. | 15:03 |
lowercase | The answer is. | 15:03 |
lowercase | rm /root/.ansible/plugins/connection/ssh.py | 15:03 |
lowercase | So what's happening is ansible is pulling plugins from different sources. The ssh.py in /root/.ansible is from early 2021, so maybe plugins were handled differently back then. In 2021 the chroot path was removed from ssh.py and is no longer needed; thus the error. After spending two days tracking it down, it was as simple as purging the local ~/.ansible roles/collections | 15:05 |
noonedeadpunk | Well... It was never supposed to be in ~/.ansible/plugins | 15:19 |
noonedeadpunk | Oh, well, if someone has run tox on production - that could result in this happening | 15:20 |
lowercase | I don't recall ever running tox. | 15:23 |
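A rough sketch of the cleanup lowercase describes above: Ansible's default connection plugin search path includes ~/.ansible/plugins/connection, and anything found there takes precedence over the plugins shipped with the installed ansible-core, so a stale copy left behind shadows the current one.

```shell
# Check for user-level plugins shadowing the packaged ones, then remove the
# stale 2021-era ssh.py (or purge the whole local cache).
ls -l ~/.ansible/plugins/connection/ 2>/dev/null   # anything here overrides the packaged connection plugins
ansible --version                                  # confirm which ansible install and config file are in use
rm -f ~/.ansible/plugins/connection/ssh.py         # the fix quoted above
# more drastic: drop all locally cached roles/collections/plugins
# rm -rf ~/.ansible
```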
noonedeadpunk | jrosser: btw on Xena I don't see anything weird with add-compute.sh. Except this `echo` /o\ https://opendev.org/openstack/openstack-ansible/src/branch/master/scripts/add-compute.sh#L52 | 15:38 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Drop `echo` from add-compute.sh script https://review.opendev.org/c/openstack/openstack-ansible/+/877813 | 15:44 |
noonedeadpunk | ^ this one... So insulting.... | 15:45 |
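To illustrate the class of bug being patched here (this is not the actual add-compute.sh content; the playbook name and variable are placeholders): a leftover debugging `echo` in front of a command makes a script print the command instead of running it, so it appears to succeed while doing nothing.

```shell
# Illustration only, not the real script contents.
echo openstack-ansible playbooks/setup-hosts.yml --limit "${TARGET_HOSTS}"   # only prints the command
openstack-ansible playbooks/setup-hosts.yml --limit "${TARGET_HOSTS}"        # actually runs it
```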
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-openstack_hosts master: Add openstack_hosts_file tag https://review.opendev.org/c/openstack/openstack-ansible-openstack_hosts/+/877824 | 16:34 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Add documentation on refreshing hosts file https://review.opendev.org/c/openstack/openstack-ansible/+/877825 | 16:40 |
Stephen_OSA_ | Good afternoon | 17:09 |
Stephen_OSA_ | I'd like to start by saying thanks, I like what OSA can do. | 17:10 |
Stephen_OSA_ | I have a question about magnum | 17:10 |
Stephen_OSA_ | When deploying clusters, my kubernetes master has a container running a service that is failing to auth to keystone over http; the odd thing is that it first makes a request to https and succeeds, then subsequently tries http and fails twice, and this pattern repeats | 17:10 |
noonedeadpunk | o/ | 17:23 |
noonedeadpunk | I think I saw that in our deployments as well... but we just let it happen. To be frank I don't really know the answer, but at the same time I did not dig deep enough to understand the reasons behind the issue | 17:24 |
Stephen_OSA_ | it appears to prevent that service from continuing to the next step ;) | 17:28 |
Stephen_OSA_ | so the cluster creation times out | 17:28 |
noonedeadpunk | aha. no, that was not the case with us then :D | 17:30 |
noonedeadpunk | I just see keystone auth failures in magnum logs for trustee or smth like that | 17:30 |
Stephen_OSA_ | Yeah I was getting those too, but I probably got them at an earlier stage. From what I can tell, magnum/openstack expects magnum to be on the public endpoint, so moving magnum to a public place (with an external LB) solved this issue. | 17:32 |
noonedeadpunk | I think the only valuable thing we have outside of the defaults - https://paste.openstack.org/show/bVO37HNFkEVeazTF75pC/ | 17:32 |
Stephen_OSA_ | I did manually try to set magnum to internal but no luck, and it probably doesn't make sense anyway, because it needs the internet for pulling all the container deps. | 17:33 |
noonedeadpunk | I can also recall patching communication through correct endpoints quite long ago though | 17:33 |
noonedeadpunk | I think it was that - https://opendev.org/openstack/openstack-ansible-os_magnum/src/branch/master/templates/magnum.conf.j2#L66 | 17:34 |
noonedeadpunk | nah, it was smth different | 17:36 |
noonedeadpunk | probably I was thinking about https://opendev.org/openstack/openstack-ansible-os_heat/commit/288634ce0bf042bed614b3f764753d7b65a7170f | 17:37 |
noonedeadpunk | It was also affecting magnum iirc | 17:38 |
jrosser | Stephen_OSA_: my team did a lot of work on magnum and endpoints which is basically a big mess - we ended up making some bugfixes to heat as well | 17:54 |
Stephen_OSA_ | ahh is that recent jrosser? | 17:54 |
jrosser | Stephen_OSA_: we don't actually run magnum currently but i may be able to look through my notes next week to see what was involved | 17:54 |
Stephen_OSA_ | and thanks btw noonedeadpunk, yeah I think I have those changes you listed | 17:54 |
Stephen_OSA_ | yeah that would be cool, thanks jrosser | 17:55 |
jrosser | it was a while ago tbh but there are a number of settings involved iirc | 17:55 |
jrosser | Stephen_OSA_: will you be around in irc next week? | 17:55 |
Stephen_OSA_ | I have been tinkering with this off and on for a couple of weeks, had to shelve it since I wasn't making progress, but I keep coming back to it like an addiction | 17:55 |
Stephen_OSA_ | yeah I'll try to start logging in | 17:56 |
jrosser | our trouble was that the keystone url being injected into the master node for the heat callback was the internal url | 17:56 |
jrosser | when it was not possible to call that endpoint from a user vm (quite sensibly) | 17:57 |
Stephen_OSA_ | hmm yeah I am having a similar issue, where the url is public at first, but then subsequent calls use the wrong protocol (http) vs https, which doesn't pass my proxy ; ; | 17:57 |
jrosser | ^ http proxy? | 17:58 |
Stephen_OSA_ | yes | 17:58 |
jrosser | oh ha you are in the same position as me | 17:58 |
Stephen_OSA_ | it's trying to hit my external haproxy, but it can't reasonably serve http and https on port 5000 | 17:58 |
Stephen_OSA_ | interesting | 17:58 |
jrosser | ok well next week i will check over what we did | 17:59 |
jrosser | tbh i did give up in the end :/ | 17:59 |
jrosser | but we got it to the point of deploying | 18:00 |
Stephen_OSA_ | ;) I am trying to hehe, but I love the idea so much... though even if I get this working I have a lot of work ahead: I originally wanted OSA offline, which I have done, but adding magnum has made me bring it back online (internet access), which isn't great. I'd be interested in whatever you find. | 18:00 |
jrosser | oh that is nice to have an offline install | 18:01 |
jrosser | i made a useful patch if you use git mirrors btw | 18:02 |
jrosser | https://github.com/openstack/openstack-ansible/commit/df4758ab1b68a0aa0eb850fbafc6433eb1acfd0b | 18:03 |
jrosser | there's also now a way to have the OSA ansible roles and the ansible collections all come from local mirrors too | 18:04 |
Stephen_OSA_ | nice, Ive bookmarked that, yeah Ill have to refactor using that approach... ;) | 18:07 |
jrosser | Stephen_OSA_: for debugging the magnum thing, really all you need to do is grep through all the data dropped onto the node through cloud-init for the container agent and try to find the "wrong" urls | 18:07 |
jrosser | or the word 'internal', because it might also be using the service catalog to determine the endpoint | 18:08 |
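A rough sketch of that debugging approach, run on the kubernetes master node; the exact file locations vary by magnum driver and guest image, so the paths below are examples rather than a definitive list.

```shell
# Search the data cloud-init dropped on the node for plain-http or
# internal-interface endpoint URLs.
grep -ri 'http://'  /var/lib/cloud/ /etc/sysconfig/heat-params 2>/dev/null
grep -ri 'internal' /var/lib/cloud/ /etc/sysconfig/heat-params 2>/dev/null
```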
Stephen_OSA_ | I'll have to look into cloud-init, I haven't messed with that before; I've been jumping on the master and looking at journalctl. Yeah the interesting thing is, I presume the service catalog should match `openstack endpoint list`? which doesn't have an http version of my endpoint, which is what I expect | 18:10 |
jrosser | it's useful to look through how all the data lands in the master node | 18:11 |
jrosser | there's a slew of shell scripts (i had to submit patches to magnum to make those support proxies as well) | 18:12 |
jrosser | so it could easily be that they've broken those as i know there is no formal testing with a proxy | 18:12 |
Stephen_OSA_ | ahh, yeah I imagine it complicates things | 18:13 |
jrosser | Stephen_OSA_: also this but i've not finished it yet https://review.opendev.org/c/openstack/openstack-ansible/+/870820 | 18:18 |
jrosser | if there are any other difficulties with offline deployments then i'm interested in those | 18:18 |
Stephen_OSA_ | yeah the big things were acquiring the packages and pip files, but perhaps there was a better way. I recall I had to pull the baseline container image and reference it locally vs. pulling it from online. Also some packages like vnc? I think I had to pull that, as it's used to access the vms from the web page for example. I'd like to do a blog about it some day... | 18:21 |
Stephen_OSA_ | for cloud-init and magnum, is that configured out of the box with OSA? | 18:23 |
Stephen_OSA_ | The packages and pip files are hosted on my offline deployment node, so that works for me.... and the repo container that is built on deployment, I saved it so that it doesn't have to be rebuilt each deployment (pulling items from the internet); I now just deploy that container... but your way of having the repos offline should handle some of it. | 18:25 |