noonedeadpunk | pffff how I am fed up with rabbitmq repos.... | 08:19 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Switch master branch to track stable/zed https://review.opendev.org/c/openstack/openstack-ansible/+/860549 | 08:20 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Switch master branch to track stable/zed https://review.opendev.org/c/openstack/openstack-ansible/+/860549 | 08:38 |
noonedeadpunk | Once almost all branches were fixed they've just dropped the erlang version I used pffff | 08:48 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-rabbitmq_server stable/ussuri: Set erlang version to 23.3.4.15-1 https://review.opendev.org/c/openstack/openstack-ansible-rabbitmq_server/+/862564 | 08:52 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_mistral stable/ussuri: Add mistral-extra in the mistral venv https://review.opendev.org/c/openstack/openstack-ansible-os_mistral/+/849522 | 08:52 |
noonedeadpunk | and also smth is broken with gnocchiclient..... | 08:53 |
noonedeadpunk | Ok, so we have now `2022-10-25 10:30:36.942659 neutron-rpc-server.service 2022-10-25 10:30:36.942 5095 ERROR neutron.common.experimental [-] Feature 'linuxbridge' is experimental and has to be explicitly enabled in 'cfg.CONF.experimental'` | 10:55 |
noonedeadpunk | https://zuul.opendev.org/t/openstack/build/7c32cb9a483b4faf8c553f41ad989f8b/log/logs/openstack/aio1_neutron_server_container-37960ea9/neutron-rpc-server.service.journal-10-30-26.log.txt#15673 | 10:55 |
noonedeadpunk | sweeeeeeeeet | 10:55 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_neutron master: Enable experimental execution of LXB if required https://review.opendev.org/c/openstack/openstack-ansible-os_neutron/+/862594 | 11:02 |
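[Editor's note: the error at #15673 comes from neutron's experimental-features gate. Assuming a standard neutron.conf layout, the opt-in the os_neutron patch needs to template is roughly a one-liner (option name taken from the error message; exact placement may vary per deployment):]

```ini
# neutron.conf — explicitly enable the experimental linuxbridge driver
[experimental]
linuxbridge = true
```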
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Switch master branch to track stable/zed https://review.opendev.org/c/openstack/openstack-ansible/+/860549 | 11:02 |
noonedeadpunk | smth also has happened to epel infra mirrors, so centos/rocky now in retry error 눈_눈 | 11:03 |
*** dviroel|out is now known as dviroel | 11:29 | |
jamesdenton | i am actively working on the OVN docs, just FYI | 12:49 |
kleini_ | I am glad, I chose OVS | 12:59 |
noonedeadpunk | tbh I feel quite frustrated about ovs itself... I did not have enough time to play with ovn, but if I had to pick between lxb and ovs I would totally prefer lxb (if we don't take into account its experimental status nowadays) | 13:06 |
noonedeadpunk | ovn out of docs indeed sounds like a good improvement, but it has its own low points and can be scaled only up to a point | 13:07 |
noonedeadpunk | as ovsdb is still the weakest point in terms of scaling from what I heard | 13:07 |
noonedeadpunk | but anyway, we have what we have... | 13:08 |
noonedeadpunk | jamesdenton: thanks a lot! | 13:08 |
noonedeadpunk | #startmeeting openstack_ansible_meeting | 15:00 |
opendevmeet | Meeting started Tue Oct 25 15:00:06 2022 UTC and is due to finish in 60 minutes. The chair is noonedeadpunk. Information about MeetBot at http://wiki.debian.org/MeetBot. | 15:00 |
opendevmeet | Useful Commands: #action #agreed #help #info #idea #link #topic #startvote. | 15:00 |
opendevmeet | The meeting name has been set to 'openstack_ansible_meeting' | 15:00 |
noonedeadpunk | #topic rollcall | 15:00 |
noonedeadpunk | o/ | 15:00 |
damiandabrowski | hi! | 15:02 |
jamesdenton | hi | 15:03 |
mgariepy | hey ! | 15:04 |
noonedeadpunk | #topic office hours | 15:05 |
* noonedeadpunk checking if any new bugs are around actually | 15:05 | |
noonedeadpunk | Yes, we have one | 15:06 |
noonedeadpunk | #link https://bugs.launchpad.net/openstack-ansible/+bug/1993575 | 15:06 |
noonedeadpunk | But to be frank - I'm not sure the overhead is worth it to fit this specific usecase | 15:06 |
noonedeadpunk | I do wonder if the mentioned workaround would work and solve the request, so we can document this better instead | 15:07 |
noonedeadpunk | Adri2000: would be great if you could try this out one day and report back to us whether that solution is good enough for you | 15:07 |
noonedeadpunk | Ok, moving on. | 15:07 |
noonedeadpunk | I have some reflection about zookeeper that we agreed on PTG to deploy for cinder/designate/etc | 15:08 |
noonedeadpunk | So what I was using in one of the openstack deployments was a quite modified fork of https://opendev.org/windmill/ansible-role-zookeeper | 15:09 |
noonedeadpunk | I tried to merge my changes into it but the maintainer was super sceptical about any changes, as it should stay super minimal and simple. But basically it doesn't even configure clustering properly | 15:09 |
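[Editor's note: for context, ZooKeeper clustering is configured via zoo.cfg plus a per-node myid file; a minimal sketch (hostnames and paths are placeholders, not taken from the role):]

```ini
# zoo.cfg — minimal 3-node ensemble; each server.N line is
# <host>:<peer-port>:<leader-election-port>
tickTime=2000
initLimit=10
syncLimit=5
dataDir=/var/lib/zookeeper
clientPort=2181
server.1=zk1.example.com:2888:3888
server.2=zk2.example.com:2888:3888
server.3=zk3.example.com:2888:3888
```

Each node additionally needs its own id (the N from its server.N line) written to dataDir/myid, which is the per-host part a deployment role has to template.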
johnsom | You are going with zookeeper over redis? | 15:09 |
noonedeadpunk | johnsom: well, we were doubting between etcd vs zookeeper | 15:10 |
johnsom | We have picked Redis as Octavia and some others need it too | 15:10 |
johnsom | Ah, yeah, Designate can't use etcd | 15:10 |
noonedeadpunk | johnsom: but I think you use tooz for coordination anyway? | 15:10 |
johnsom | tooz group membership doesn't work | 15:10 |
noonedeadpunk | as tooz says that `zookeeper is the reference implementation` | 15:11 |
johnsom | Yeah, it's through tooz | 15:11 |
johnsom | Yeah, zookeeper will work | 15:11 |
johnsom | etcd won't | 15:11 |
noonedeadpunk | Also iirc redis is quite a pita in terms of clustering? | 15:11 |
noonedeadpunk | well, cinder recommends etcd :D | 15:11 |
johnsom | What isn't a pita for clustering, lol | 15:12 |
noonedeadpunk | Well, zookeeper super simple | 15:12 |
johnsom | FYI: https://docs.openstack.org/tooz/latest/user/compatibility.html#grouping | 15:12 |
noonedeadpunk | yeah, zookeeper seems like the most featureful thing? | 15:12 |
johnsom | Yeah, zookeeper is fine. I just wanted to mention that other tools are taking a different path. | 15:13 |
noonedeadpunk | just zookeeper worked out of the box without many hooks. The nasty thing about it is actually the java part, which I don't like.... | 15:14 |
noonedeadpunk | but other than that... | 15:14 |
noonedeadpunk | btw. In what scenarios coordination is required for octavia? | 15:15 |
johnsom | Octavia->Taskflow->Tooz | 15:15 |
johnsom | Taskflow is also using Redis key/value for the jobboard option | 15:15 |
johnsom | It's a new-ish requirement if you enable the jobboard capability | 15:16 |
noonedeadpunk | and in this case redis is not utilized through tooz? | 15:16 |
damiandabrowski | it may be a dumb question but I see that tooz has mysql driver. So leveraging this may be the simplest option when we already have mysql in place. | 15:16 |
johnsom | It is through tooz, but also key/value store I believe | 15:16 |
damiandabrowski | but i assume there are some disadvantages? | 15:16 |
noonedeadpunk | Well zookeeper also allows key/value storage | 15:17 |
johnsom | Tooz mysql driver doesn't support group membership either | 15:17 |
noonedeadpunk | so I'm kind of trying to understand if redis specifically is required or it can work with any tooz driver that has the needed feature capability | 15:18 |
johnsom | I have not been deep in the jobboard work, it would probably be best to ask in the #openstack-octavia channel for clarity | 15:18 |
noonedeadpunk | aha, ok, will do | 15:19 |
noonedeadpunk | johnsom: seems it can be zookeeper :D https://docs.openstack.org/octavia/latest/configuration/configref.html#task_flow.jobboard_backend_driver | 15:20 |
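[Editor's note: the zookeeper-backed jobboard discussed here would look roughly like this octavia.conf fragment — host/port values are placeholders, and the linked config reference is authoritative for the option list:]

```ini
# octavia.conf — taskflow jobboard backed by zookeeper instead of redis
[task_flow]
jobboard_enabled = True
jobboard_backend_driver = zookeeper_taskflow_driver
jobboard_backend_hosts = zk1.example.com
jobboard_backend_port = 2181
```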
noonedeadpunk | damiandabrowski: mysql was a quite weird thing for coordination feature-wise | 15:21 |
noonedeadpunk | After all, galera is async master/master. So weird things can arise | 15:22 |
noonedeadpunk | Also they say `Does not work when MySQL replicates from one server to another` which really sounds like things might go wrong | 15:23 |
johnsom | Yeah, mysql is not a good option, it's missing a lot of features. | 15:23 |
noonedeadpunk | but returning to my original pitch - I think we should create a role from scratch instead of re-using pabelanger's work.... | 15:24 |
damiandabrowski | thanks for an explanation | 15:25 |
noonedeadpunk | As for clustering, his stance was that the role deploys a default template and you can use another role or a post task to set the template to the correct one... | 15:25 |
noonedeadpunk | with lineinfile | 15:25 |
noonedeadpunk | (or smth like that) | 15:26 |
noonedeadpunk | But I kind of feel bad about having multiple things under the opendev umbrella that do almost the same stuff | 15:27 |
noonedeadpunk | ok, moving next - glance multiple locations issue that we expose for ceph | 15:28 |
noonedeadpunk | damiandabrowski: do you want to share smth regarding it | 15:28 |
damiandabrowski | regarding zuul: poor you, i just noticed all your changes are abandoned :D https://review.opendev.org/q/project:+windmill/ansible-role-zookeeper+AND+owner:noonedeadpunk | 15:30 |
damiandabrowski | regarding glance multiple locations: we already merged 2 changes which should make things better. Later this week I'll try to contact glance guys and ask if it's really necessary to do something else | 15:30 |
damiandabrowski | ah sorry, only one patch was merged so far, it would be awesome to merge the second one: https://review.opendev.org/c/openstack/openstack-ansible-os_glance/+/862171 | 15:31 |
noonedeadpunk | yeah, I'm not sure about backporting, but I'm open for discussion, as it's not very strong opinion | 15:32 |
damiandabrowski | i don't have a strong opinion either. It's a security improvement but on the other hand we're changing default values... | 15:33 |
noonedeadpunk | I have also pushed some patches for ceph | 15:39 |
noonedeadpunk | So eventually this one https://review.opendev.org/c/openstack/openstack-ansible/+/862508 | 15:39 |
noonedeadpunk | The question here - should we also move tempest/rally outside of setup-openstack to some setup-testsuites or smth? | 15:40 |
noonedeadpunk | or just abandon the idea of moving ceph playbooks out of setup-openstack/infrastructure | 15:40 |
jamesdenton | might be nice to separate those, as i suspect they're really only used in CI | 15:40 |
noonedeadpunk | Well, not only in CI - we use tempest internally :D | 15:41 |
jamesdenton | we should be | 15:41 |
noonedeadpunk | but it does make sense to me to split them out as well. | 15:41 |
noonedeadpunk | ok then, if it's not only me I will propose smth | 15:42 |
noonedeadpunk | not sure about naming though | 15:42 |
noonedeadpunk | ah, btw, on PTG there was agreement that Y->AA upgrade should be done and tested on Ubuntu focal (20.04). So we should carry it until AA and drop only afterwards | 15:43 |
noonedeadpunk | I think that's all from my side | 15:45 |
damiandabrowski | honestly I'm not a fan of moving tempest/rally outside setup-openstack :/ it's about consistency, at the end of the day they are openstack services | 15:51 |
damiandabrowski | but considering that moving ceph out of setup-infrastructure brings other problems, maybe we should really think about implementing some variable like `integrated_ceph` which controls that? | 15:53 |
damiandabrowski | or if we already say in docs that integrated ceph is not recommended scenario - don't care about it at all | 15:53 |
*** dviroel is now known as dviroel|lunch | 15:56 | |
noonedeadpunk | rally is not openstack service fwiw | 15:57 |
noonedeadpunk | sorry, I don't really understand the purpose of the integrated_ceph variable? | 15:57 |
noonedeadpunk | damiandabrowski: | 15:58 |
anskiy | noonedeadpunk: I believe, it's for doing it like this: https://opendev.org/openstack/openstack-ansible/src/branch/master/playbooks/setup-infrastructure.yml#L31-L33 | 15:58 |
* anskiy is actually a second user of ceph-ansible | 15:59 | |
noonedeadpunk | yeah, but we control if ceph is being deployed or not by having appropriate group in inventory | 15:59 |
anskiy | from what I remember, this thing should be some kind of protection against accidental Ceph upgrade | 15:59 |
noonedeadpunk | if there's no ceph-mon group - you're safe | 16:00 |
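[Editor's note: the group-based gating described here comes from the inventory — the ceph playbooks only act on hosts in the ceph groups. A hypothetical openstack_user_config.yml fragment (IPs are placeholders) that opts a deployment in:]

```yaml
# openstack_user_config.yml — defining these groups is what makes the
# integrated ceph playbooks act; omit them and setup-infrastructure.yml
# leaves ceph alone
ceph-mon_hosts:
  infra1:
    ip: 172.29.236.11
ceph-osd_hosts:
  storage1:
    ip: 172.29.236.21
```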
anskiy | there is one for me :P | 16:00 |
noonedeadpunk | well... I mean. Then you would have to define it at runtime each time you want to execute any ceph action, which is weird | 16:00 |
damiandabrowski | yeah, small correction: this variable shouldn't define if integrated ceph is used or not, but whether it needs to be a part of setup-infrastructure.yml | 16:01 |
noonedeadpunk | IMO that kind of brings more confusion, or I don't fully realize what behavior this variable should achieve | 16:01 |
damiandabrowski | but idk, as someone who uses integrated ceph in a production environment, having ceph in setup-infrastructure.yml is not an issue at all :D | 16:02 |
mgariepy | i rarely use setup-infrastructure.yml playbook i prefer running each play one by one. | 16:03 |
damiandabrowski | executing the whole setup-infrastructure.yml or setup-openstack.yml on already running environment is not the safest thing anyway | 16:03 |
noonedeadpunk | folks, I'm actually quite open for suggestions. As you're right damiandabrowski - most tricky part are upgrades | 16:03 |
noonedeadpunk | and we do launch setup-infrastructure/setup-openstack in our run_upgrade.sh script | 16:04 |
noonedeadpunk | that will touch ceph when it should not | 16:04 |
ElDuderino | on that note, question - we run 'setup-infrastructure.yml' regularly, with success (still on rocky) and ran the upgrade scripts from pike to queens to rocky. | 16:04 |
ElDuderino | is there a 'better' way? | 16:04 |
noonedeadpunk | Tbh I don't think we should consider running setup-infrastructure.yml / setup-openstack as bad idea - these must be idempotent | 16:05 |
noonedeadpunk | ElDuderino: depending on better way for what :D | 16:05 |
anskiy | noonedeadpunk: I might be missing some good point about the danger of running the ceph playbook during upgrades, but: if it is part of OSA (and it clearly is -- there are host groups defined in openstack_user_config for ceph), then what is the actual problem with that? | 16:06 |
ElDuderino | Noonedeadpunk: sorry, I didn't realize you were all still referring to the ceph bits. Disregard :/ | 16:06 |
mgariepy | yeah, it's not that i don't have confidence it will work. it's just that this way i can control the timing of each run more easily. and split upgrades over a couple of days if i need. | 16:07 |
noonedeadpunk | anskiy: well, it's actually what we're trying to clarify - ceph-ansible is quite arguably a part - we intended to use it mostly for CI/AIO rather than production | 16:07 |
noonedeadpunk | anskiy: so we bump ceph-ansible version, but we don't really test upgrade path for ceph | 16:08 |
noonedeadpunk | so you might get some unintended ceph upgrade when upgrading osa | 16:08 |
noonedeadpunk | *upgrading openstack | 16:08 |
anskiy | well, it looks intended to me: as I've deployed this Ceph cluster via OSA... | 16:08 |
anskiy | I do believe there would be something about the Ceph version being bumped in the release notes too, right? | 16:09 |
noonedeadpunk | yup, there will be | 16:10 |
noonedeadpunk | anskiy: so we're discussing patch https://review.opendev.org/c/openstack/openstack-ansible/+/862508 that actually adjusts the docs to say that while it's an option, we don't actually provide real support for it | 16:10 |
noonedeadpunk | also it is a bit tied to the uncertainty about the future of ceph-ansible | 16:11 |
noonedeadpunk | ok, from what I got damiandabrowski votes to just abandon this patch | 16:13 |
anskiy | noonedeadpunk: yeah, I've read their README, but it just sounds too convenient for me: I only provide network configuration on my nodes -- everything else openstack-related is installed by OSA | 16:13 |
damiandabrowski | I completely agree that we should mention in docs that upgrading integrated ceph is not covered by our tests :D | 16:14 |
damiandabrowski | but i also agree with anskiy , when you're having ceph integrated with OSA, then upgrading it with setup-infrastructure.yml isn't really unintended | 16:15 |
noonedeadpunk | anskiy: it's hard to disagree. But the actions that you execute should be clear. And for me it's not always clear that setup-openstack will mess with your rgw as well | 16:15 |
anskiy | noonedeadpunk: I think the patch is okay, if I could put something in user_variables that says "please_deploy_ceph_for_me_i_know_its_not_tested: true" | 16:15 |
noonedeadpunk | ok, then I think I need to sleep on it to realize the value of such a variable | 16:17 |
noonedeadpunk | Tbh, for me it would make more sense to have some variable like ceph_upgrade: true in ceph-ansible itself | 16:18 |
noonedeadpunk | (like we do for galera and rabbit) | 16:18 |
noonedeadpunk | that will really solve a lot of pain | 16:19 |
damiandabrowski | technically you have `upgrade_ceph_packages` | 16:19 |
damiandabrowski | https://github.com/ceph/ceph-ansible/blob/371592a8fb1896183aa1b55de9963f7b9a4d24f3/roles/ceph-defaults/defaults/main.yml#L115 | 16:19 |
damiandabrowski | not sure if it fully solves the problem though | 16:20 |
noonedeadpunk | also, I think you're supposed to upgrade ceph with https://github.com/ceph/ceph-ansible/blob/371592a8fb1896183aa1b55de9963f7b9a4d24f3/infrastructure-playbooks/rolling_update.yml aren't you? | 16:21 |
noonedeadpunk | as I think our playbooks might be dumb enough to upgrade it in a non-rolling manner... | 16:21 |
noonedeadpunk | yes, we don't have "serial" in ceph playbooks as of today | 16:22 |
damiandabrowski | i never upgraded ceph with ceph-ansible but when I was modifying OSD configs etc., ceph-install.yml always restarted OSDs one by one | 16:22 |
anskiy | noonedeadpunk: so, the solution would be to apply https://review.opendev.org/c/openstack/openstack-ansible/+/862508 and add info to the upgrade docs to not forget to run https://github.com/ceph/ceph-ansible/blob/371592a8fb1896183aa1b55de9963f7b9a4d24f3/infrastructure-playbooks/rolling_update.yml if ceph-ansible version was bumped? | 16:22 |
noonedeadpunk | damiandabrowski: huh, interesting.... | 16:23 |
damiandabrowski | so I'm not sure what rolling_update.yml brings except a few safety checks and overriding the `upgrade_ceph_packages` value | 16:23 |
anskiy | damiandabrowski: so it's just like osa's upgrade doc, but in yaml :) | 16:24 |
anskiy | and for ceph | 16:24 |
noonedeadpunk | I think what he meant - you can just run ceph-install.yml -e upgrade_ceph_packages=true | 16:25 |
noonedeadpunk | and get kind of same result | 16:25 |
noonedeadpunk | so basically - no ceph upgrade will happen unless you provide -e upgrade_ceph_packages=true. | 16:27 |
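[Editor's note: in other words, a sketch of the opt-in invocation, assuming the standard OSA wrapper on a deploy host — a ceph-ansible version bump alone does not upgrade packages:]

```shell
# upgrades require the explicit flag; without it, the play only reconfigures
openstack-ansible ceph-install.yml -e upgrade_ceph_packages=true
```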
damiandabrowski | that's at least what i think :D | 16:28 |
damiandabrowski | i also found out how ceph-ansible restarts OSDs one by one | 16:28 |
damiandabrowski | handlers are quite complex there and use custom scripts | 16:29 |
damiandabrowski | https://github.com/ceph/ceph-ansible/tree/bb849a55861e3900362ec46e68a02754b2c892ec/roles/ceph-handler/tasks | 16:29 |
damiandabrowski | for ex. this one is responsible for restarting OSDs: https://github.com/ceph/ceph-ansible/blob/bb849a55861e3900362ec46e68a02754b2c892ec/roles/ceph-handler/templates/restart_osd_daemon.sh.j2 | 16:29 |
noonedeadpunk | yup, already found that. They went quite the extra mile to protect against a stupid playbook | 16:30 |
noonedeadpunk | well, I mean. Then we can leave things as is indeed.... | 16:30 |
noonedeadpunk | as basically ceph-ansible version bump or change of ceph_stable_release won't result in package upgrade unless you set `upgrade_ceph_packages` | 16:31 |
noonedeadpunk | I still like the idea though that ceph-rgw-install should not be part of setup-openstack as it has nothing to do with openstack... But I can live with the current state as well | 16:32 |
noonedeadpunk | oh, totally forgot | 16:33 |
noonedeadpunk | #endmeeting | 16:33 |
opendevmeet | Meeting ended Tue Oct 25 16:33:08 2022 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) | 16:33 |
opendevmeet | Minutes: https://meetings.opendev.org/meetings/openstack_ansible_meeting/2022/openstack_ansible_meeting.2022-10-25-15.00.html | 16:33 |
opendevmeet | Minutes (text): https://meetings.opendev.org/meetings/openstack_ansible_meeting/2022/openstack_ansible_meeting.2022-10-25-15.00.txt | 16:33 |
opendevmeet | Log: https://meetings.opendev.org/meetings/openstack_ansible_meeting/2022/openstack_ansible_meeting.2022-10-25-15.00.log.html | 16:33 |
damiandabrowski | +1 | 16:33 |
damiandabrowski | there's one more thing: I'm on vacation next week | 16:33 |
noonedeadpunk | yup, awesome | 16:38 |
NeilHanlon | o/ sorry I missed the meeting today. wanted to note I'll be away for the next couple weeks, until about 9th November | 16:52 |
noonedeadpunk | sure, no worries! We will try to survive in this crazy rhel world in the meanwhile :D | 16:56 |
*** dviroel|lunch is now known as dviroel| | 17:04 | |
*** dviroel| is now known as dviroel | 17:04 | |
*** dviroel is now known as dviroel|appt | 17:27 | |
mgariepy | of course! .. stream_ssl|WARN|SSL_connect: system error (Success) | 19:41 |
*** dviroel|appt is now known as dviroel | 20:39 | |
*** dviroel is now known as dviroel|afk | 21:57 |
Generated by irclog2html.py 2.17.3 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!