*** ysandeep|out is now known as ysandeep | 04:20 | |
*** frenzy_friday is now known as anbanerj|ruck | 04:35 | |
*** ysandeep is now known as ysandeep|intv | 06:25 | |
noonedeadpunk | admin1: you should reference env.d for that. I dunno the reason why it's like that, but for tacker it's mano_hosts https://opendev.org/openstack/openstack-ansible/src/branch/master/inventory/env.d/tacker.yml#L34 | 08:03 |
noonedeadpunk | I wonder if it actually works... | 08:03 |
jrosser | the really edge-case roles don't get entries in openstack_user_config | 08:04 |
jrosser | and it is really a very very long time since anyone worked with tacker | 08:04 |
noonedeadpunk | ok, so it seems we deploy only the api server, not the conductor part | 08:13 |
noonedeadpunk | which makes me think it's a legacy deployment https://docs.openstack.org/tacker/latest/user/architecture.html#legacy-tacker-implementation | 08:14 |
noonedeadpunk | probably worth implementing the conductor as well, but it's super hard for me to read their docs as I can hardly understand half of the terminology used.... | 08:15 |
jrosser | looks like another case where the internal components want to talk to things inside a VM | 08:17 |
jrosser | wonder how that's implemented | 08:17 |
noonedeadpunk | but I bet it would need its own network :) | 08:23 |
*** ysandeep|intv is now known as ysandeep | 08:51 | |
*** ysandeep is now known as ysandeep|lunch | 08:58 | |
admin1 | thanks guys .. I literally looked everywhere, including the docs .. but forgot to look at the env.d samples :) | 09:10 |
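(For illustration only: a minimal sketch of how the mano_hosts group from the env.d/tacker.yml skeleton linked above could be targeted from openstack_user_config.yml. The host names and addresses here are hypothetical.)

```yaml
# /etc/openstack_deploy/openstack_user_config.yml (fragment)
# Hypothetical hosts and IPs; the group name "mano_hosts" comes from the
# physical_skel in env.d/tacker.yml referenced above.
mano_hosts:
  infra1:
    ip: 172.29.236.11
  infra2:
    ip: 172.29.236.12
```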
admin1 | i was looking for a good system to do the l | 09:19 |
*** ysandeep|lunch is now known as ysandeep | 09:21 | |
admin1 | sorry for the half sentence | 09:25 |
admin1 | I was looking for a system that lets me do logging (graylog, filebeat etc.) without much effort .. and found this: https://wazuh.com/ -- works out of the box | 09:26 |
noonedeadpunk | I wonder how venus feels nowadays btw... | 10:28 |
noonedeadpunk | doesn't look like it has ever been released :( | 10:29 |
noonedeadpunk | oh, it requires an agent to be installed - nah.... | 10:29 |
noonedeadpunk | that's not something that can work for us :( | 10:30 |
jrosser | i was thinking we had something with wazuh in the ops repo | 10:31 |
*** dviroel|out is now known as dviroel|rover | 10:48 | |
*** dpawlik7 is now known as dpawlik | 11:56 | |
opendevreview | OpenStack Release Bot proposed openstack/openstack-ansible stable/xena: Update .gitreview for stable/xena https://review.opendev.org/c/openstack/openstack-ansible/+/821547 | 12:39 |
opendevreview | OpenStack Release Bot proposed openstack/openstack-ansible stable/xena: Update TOX_CONSTRAINTS_FILE for stable/xena https://review.opendev.org/c/openstack/openstack-ansible/+/821548 | 12:39 |
opendevreview | OpenStack Release Bot proposed openstack/openstack-ansible master: Update master for stable/xena https://review.opendev.org/c/openstack/openstack-ansible/+/821549 | 12:39 |
*** dviroel is now known as dviroel|rover | 13:08 | |
jamesdenton | good morning | 14:06 |
admin1 | \o | 14:12 |
noonedeadpunk | \o/ | 14:12 |
admin1 | jamesdenton . any new books coming up ? | 14:13 |
jamesdenton | pfft | 14:13 |
noonedeadpunk | revisions of old ones ?:D | 14:13 |
admin1 | i have your signed one | 14:13 |
admin1 | you can do the whole ovn, complete ipv6, vxlan breakout to vlans for some "cloud connect" solutions .. lots of stuff :) | 14:14 |
jamesdenton | it could be nice, as it's always a good opportunity to learn new stuff. it comes down to a lack of bandwidth these days | 14:14 |
jamesdenton | admin1 agreed, would love to see something like that | 14:14 |
jamesdenton | things are starting to free up. my kids are getting a little older and i've got more time for blogs, at least | 14:15 |
admin1 | nice nice . | 14:16 |
jamesdenton | trying to close out 2021 by closing the loop on patches started in 2019 :D | 14:17 |
opendevreview | Merged openstack/openstack-ansible stable/xena: Update .gitreview for stable/xena https://review.opendev.org/c/openstack/openstack-ansible/+/821547 | 14:31 |
*** ysandeep is now known as ysandeep|dinner | 14:34 | |
opendevreview | Merged openstack/openstack-ansible stable/xena: Update TOX_CONSTRAINTS_FILE for stable/xena https://review.opendev.org/c/openstack/openstack-ansible/+/821548 | 14:35 |
opendevreview | Merged openstack/openstack-ansible master: Update master for stable/xena https://review.opendev.org/c/openstack/openstack-ansible/+/821549 | 14:38 |
*** dviroel|rover is now known as dviroel|rover|lunch | 15:01 | |
spatel | jamesdenton +1 i would like to see new book :) let me know if you need help for any testing for scenario testing :D | 15:10 |
jamesdenton | you'll be the first to know | 15:11 |
spatel | ++1 | 15:13 |
*** ysandeep|dinner is now known as ysandeep | 15:14 | |
spatel | noonedeadpunk after upgrading to RabbitMQ 3.8.14 it's much more stable now. it's been two weeks and it's still up and running. (I am running with non-HA :) ) | 15:15 |
spatel | noonedeadpunk did you work on the patch to create non-HA rabbitmq queues for specific components? | 15:17 |
spatel | no rush, just curious so I can use that in my X upgrade | 15:18 |
noonedeadpunk | nope, we haven't done that for X | 15:19 |
spatel | ok | 15:23 |
noonedeadpunk | this would require patching all roles as it implements a variable for them | 15:26 |
noonedeadpunk | but I think you can define the rabbitmq policy in group vars with much the same effect | 15:26 |
spatel | you are saying to set group vars per component, choosing which ones get a policy and which get no policy, correct? | 15:28 |
jamesdenton | spatel you're using single queues? | 15:35 |
spatel | single queue? | 15:36 |
spatel | OSA uses /vhosts to deploy multiple queues | 15:36 |
spatel | jamesdenton ^ is that what you're asking? | 15:39 |
jamesdenton | sorry, i mean ha vs non-ha queues | 15:41 |
noonedeadpunk | spatel: you can set no policy overall and set value for specific group, yes | 15:45 |
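(Editor's sketch of the group-vars workaround noonedeadpunk describes: drop the HA policy globally and re-enable it only for a chosen group. It assumes the deployed release exposes an oslomsg_rpc_policies list consumed by the mq_setup tasks; verify the variable name in your release's group_vars before relying on it. The group choice, pattern and tags below are illustrative.)

```yaml
# /etc/openstack_deploy/group_vars/all.yml
# Assumption: oslomsg_rpc_policies is the list applied per vhost by the
# mq_setup tasks; an empty list means no HA (mirrored-queue) policy anywhere.
oslomsg_rpc_policies: []
```

```yaml
# /etc/openstack_deploy/group_vars/cinder_all.yml (hypothetical group choice)
# Re-enable mirrored queues for this one group; the pattern/tags mirror the
# classic HA policy and should be checked against your release's defaults.
oslomsg_rpc_policies:
  - name: "HA"
    pattern: '^(?!(amq\.)|(.*_fanout_)|(reply_)).*'
    priority: 0
    tags:
      ha-mode: all
```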
spatel | jamesdenton I have removed the HA policy for all components like nova/neutron/heat etc., so currently everything is running non-HA in production. (In short, my policy section is empty :) ) | 15:46 |
jamesdenton | and what's your feedback on that configuration? | 15:47 |
spatel | noonedeadpunk good to know, I will give it a try, and if it works we should put a small snippet in the official docs about how to do that | 15:47 |
spatel | jamesdenton everything works well. I did some failover testing too and it works; I didn't see any issue at all. | 15:47 |
spatel | I killed rabbitMQ on one of the nodes and the queues automatically got created on the other remaining nodes. (yes, I lost data, but RPC calls don't carry any persistent data :) ) | 15:48 |
noonedeadpunk | I would prefer adding variable for that, but this workaround should still work | 15:49 |
spatel | if someone is trying to create/delete a VM and the node dies during that time, you will see an ERROR message, but if you try again it will succeed | 15:49 |
spatel | I have noticed neutron has 3x the queues compared to other services. neutron is the biggest consumer of rabbitMQ - if we go OVN it will remove a lot of the rabbitMQ issues. | 15:50 |
spatel | noonedeadpunk I would also like to have a small variable which controls HA vs non-HA (for simplicity) | 15:52 |
noonedeadpunk | nah, I don't think we will do that... as eventually you need to fully set a policy for that. | 15:52 |
noonedeadpunk | Oh, well, at least with the current (obsolete) way of doing HA | 15:52 |
spatel | why are we creating a keystone /vhost? keystone doesn't use rabbitMQ | 15:54 |
jrosser | i think that andrewbonney was taking a look at the rabbitmq connection stuff very similar to how he did for galera | 15:54 |
jrosser | as the number of connections is kind of surprisingly large | 15:54 |
spatel | I have 5685 queues for neutron across 250 compute nodes, which is a very large number. | 15:56 |
spatel | 4375 total connections for the neutron service | 15:56 |
spatel | almost 20 TCP connections from each compute node. | 15:59 |
jrosser | well, like I say, there are similar concepts of connection pools and per-thread/process settings | 16:01 |
jrosser | and if these are not appropriate then the numbers will get large, quickly | 16:01 |
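(Editor's sketch of where those per-process settings live: oslo.messaging's rpc_conn_pool_size and executor_thread_pool_size, pushed through OSA's standard *_conf_overrides hook. The option names are real oslo.messaging options; the numeric values are illustrative guesses, not recommendations.)

```yaml
# /etc/openstack_deploy/user_variables.yml (illustrative values only)
# These options bound how many AMQP connections and worker threads each
# nova-compute process keeps; tune and test against your own connection
# counts before rolling this out.
nova_nova_conf_overrides:
  DEFAULT:
    rpc_conn_pool_size: 10          # AMQP connections kept in the RPC pool
    executor_thread_pool_size: 32   # threads consuming incoming RPC messages
```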
spatel | agreed | 16:02 |
*** dviroel|rover|lunch is now known as dviroel|rover | 16:12 | |
noonedeadpunk | oh, I never looked into this tbh... | 16:17 |
noonedeadpunk | well, I don't experience any rabbit issues nowadays either... | 16:18 |
jrosser | we're having quite some trouble | 16:18 |
jrosser | exhaustion of fds somewhere inside/below oslo.messaging on compute nodes | 16:19 |
noonedeadpunk | ouch, yeah, then it's understandable how you started looking into that | 16:20 |
jrosser | nova-compute is doing something that we can't reproduce with a test program yet | 16:23 |
admin1 | anyone using kata containers with openstack? can anyone provide a few pointers? | 16:28 |
*** ysandeep is now known as ysandeep|out | 16:31 | |
spatel | jrosser I have only basic components in my cloud so I'm not using many fd connections. possibly you have other components chewing up more resources, like ceilometer, magnum etc.. | 16:47 |
jrosser | spatel: it's really about the compute node configuration i think, not to do with the other components | 16:55 |
spatel | ok | 16:56 |
jrosser | oh this is good https://review.opendev.org/c/openstack/openstack-ansible/+/821476 | 17:25 |
spatel | good in what sense? | 17:51 |
admin1 | we will have xena soon :D | 17:51 |
spatel | it's been a while since X was put out, but I can understand, we are a little behind. | 17:52 |
jrosser | not really behind at all https://releases.openstack.org/yoga/schedule.html#y-cycle-trail | 17:58 |
*** sshnaidm is now known as sshnaidm|afk | 19:06 | |
opendevreview | Merged openstack/openstack-ansible-tests stable/xena: Update TOX_CONSTRAINTS_FILE for stable/xena https://review.opendev.org/c/openstack/openstack-ansible-tests/+/820846 | 19:30 |
*** dviroel|rover is now known as dviroel|rover|afk | 21:02 | |
spatel | jrosser :) good to know | 23:00 |