15:00:05 #startmeeting kolla
15:00:09 Meeting started Wed Jan 29 15:00:05 2020 UTC and is due to finish in 60 minutes. The chair is mgoddard. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:10 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:13 The meeting name has been set to 'kolla'
15:00:16 #topic rollcall
15:00:19 \o
15:00:21 o/
15:03:14 mnasiadka, hrw, yoctozepto: around?
15:03:24 o/
15:04:44 #topic agenda
15:05:12 * Roll-call
15:05:14 * Announcements
15:05:16 * Review action items from last meeting
15:05:18 * CI status
15:05:20 * Ussuri release planning
15:05:22 * HAProxy tag - dougsz
15:05:24 * Ussuri goal: https://governance.openstack.org/tc/goals/selected/ussuri/project-ptl-and-contrib-docs.html
15:05:26 * CI scenarios: https://etherpad.openstack.org/p/KollaAnsibleScenarios
15:05:28 #topic announcements
15:05:32 Anyone have any?
15:05:44 * yoctozepto devstacking
15:06:04 nope
15:06:31 congrats yoctozepto again for his devstack journey :)
15:06:35 #info yoctozepto now devstack core. All devstack queries to him
15:06:44 No action items last time
15:06:45 :D
15:06:47 thanks
15:06:48 #topic CI status
15:07:13 Whiteboard says rocky kolla is broken
15:07:19 post-failure (fluentd)
15:07:39 I *think* it is fixed now...
15:07:48 should be fixed
15:07:59 looks ok
15:08:22 indeed
15:08:23 kayobe rocky has an issue, fix proposed
15:08:41 #topic Ussuri release planning
15:08:49 k-a clear as well
15:08:54 ah, topic changed ;d
15:08:56 I've been spending some time with CentOS 8 recently
15:09:06 yeah, gj with that :-)
15:09:09 made a status etherpad
15:09:11 #link https://etherpad.openstack.org/p/kolla-centos8
15:09:15 and coordinating on the ml
15:10:03 yeah, nice team work. would have been nice to add a [kolla] tag though
15:10:45 yeah, but I also match on my fellow stackers :-)
15:10:47 with our current patches I think we should be able to get the majority of deploy jobs passing
15:11:03 yeah, just need some reviews
15:11:04 from my side, rabbitmq is ok, just waiting for tripleo-common patches. elk stack is on the way.
15:11:10 might be able today
15:11:13 hoping so
15:11:28 tired of research stuff
15:11:38 what are the tripleo-common patches?
15:11:39 (phd in progress)
15:11:56 mgoddard: it's one of their repos
15:12:03 yeah I know
15:12:08 patches for template-overrides. https://review.opendev.org/#/q/topic:bp/rabbitmq-version-upgrade+(status:open+OR+status:merged)
15:12:09 where they override our templates aiar
15:12:10 is it for their template overrides/
15:12:20 afair*
15:12:55 no need to ask any questions
15:13:02 * yoctozepto delivers answers before questions
15:13:15 so related question, how strongly should we pin rmq versions?
15:13:22 3.8.2 seems quite specific
15:13:32 i'd go with 3.8.x
15:13:36 if there was a 3.8.3 would we really want to ignore it?
15:13:38 but rmq is not strictly semver
15:13:44 that's what I did at start, 3.8.*
15:13:48 as they drop erlang support with minors
15:13:58 :)
15:14:00 I'd go 3.8.x anyway
15:14:08 if we all agree to go with 3.8.x, I'll do it
15:14:08 why did you add .2?
15:14:13 do it!
15:14:20 yes, why?!
15:14:26 :D
15:14:43 I thought someone will give -1 for 3.8.x :)))
15:14:52 osmanlicilegi: who?! names!
15:15:10 secret! :)
15:15:19 we'll deal with 'em
15:15:27 with one core
15:15:39 btw I'm ok with 3.8.x, will be in ci tonight
15:15:43 cool
15:15:53 +1
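
One way to express the 3.8.x pin agreed above is a dnf version glob rather than an exact build; a minimal sketch, assuming a plain dnf install step rather than kolla's actual Dockerfile template:

    # accept any RabbitMQ 3.8 release instead of one exact build
    dnf install -y 'rabbitmq-server-3.8.*'
    # the exact pin being replaced would read: dnf install -y rabbitmq-server-3.8.2
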
15:15:58 so another question related with elk stack
15:16:06 to complicate things, are you working on CentOS 8?
15:16:08 I was planning to pin it to 6.8.6
15:16:18 because we will drop c7 in master soon
15:16:20 why exact again? ;d
15:16:33 I will not pin it too :)
15:16:40 haha, ok
15:17:31 the question for rabbitmq is, ci fails on tripleo-build-containers-centos-7 job
15:17:53 yeah, they need their patches
15:17:55 I'm here if there were any questions to me, had an internet outage at home ;)
15:17:59 and I think we cannot make tripleo-build-containers-centos-7 non-voting
15:18:01 hi mnasiadka
15:18:01 mnasiadka: :O
15:18:10 mnasiadka: no questions so far
15:18:32 osmanlicilegi: we can't permanently
15:18:36 osmanlicilegi: ask tripleo guys to work with you on a fix in their almighty template overrides?
15:18:41 we disable it very rarely
15:18:49 when things are really bad on ooo side
15:18:53 so not now
15:19:58 mnasiadka: I often ask on #tripleo but no one replies. I'll keep trying
15:20:36 anyway, got my notes for rmq and elk
15:20:50 osmanlicilegi: our ooo ambassador is cloudnull
15:21:03 o/
15:21:15 I was about to type 'he's usually responsive'
15:21:23 * cloudnull regrets raising his hand
15:21:38 :D
15:21:47 :)
15:22:10 * cloudnull reading back ,
15:22:13 something I can help with ?
15:22:35 cloudnull: I'll need your help for https://review.opendev.org/#/q/topic:bp/rabbitmq-version-upgrade+(status:open+OR+status:merged)
15:22:57 osmanlicilegi: do you need to add Depends-On in your patch?
15:23:21 (and make it 3.8.*)
15:23:23 then it will pull in the tripleo patch when running the tripleo job
15:23:24 I'll send another update and will let you know
15:23:56 mgoddard: seems I missed it, thx for reminding
15:23:57 cloudnull: I think you're off the hook for the moment. It was really a general question about how to contact tripleo
15:24:12 thanks for raising your hand :)
15:24:27 cloudnull: thanks for being responsive!
15:25:13 any way to mention without ping in IRC/
15:25:17 ?
15:25:42 mgoddard: nope, unless you want c**loudnull
15:25:44 (remove **)
15:25:53 or something along these lines
15:25:58 yeah
15:26:02 irc clients are pretty dumb
15:26:11 osmanlicilegi: you had an ELK question?
15:26:16 you can set them to react to other keywords
15:26:28 like infra does for config-core, infra-core, infra-root etc.
15:26:28 ++ if something comes up let me know, im always around and available to provide snark and sarcasm on most subjects :D
15:26:29 mgoddard: got my answer about pinning :)
15:26:46 I thought about kolla-core to call us and have it working :-)
15:26:59 cloudnull: snark and sarcasm is what we feed on!
15:27:06 could be useful, could be annoying :)
15:27:20 mgoddard: personal preference, each core configures it by himself
15:27:22 osmanlicilegi: so you are going with unpinned for ELK?
15:27:28 yeah, back to ELK
15:27:28 mgoddard: yes
15:27:32 please keep unpinned
15:27:35 ok, makes sense
15:27:52 we don't usually pin things, it's just rabbit is a little 'sensitive'
15:28:21 while it's on topic, should we use bintray for rmq as well as erlang?
15:28:36 at the moment we get rmq from packagecloud and erlang from bintray
15:28:43 mgoddard: or just we forgettin to unpin on time ;-)
15:29:00 hmm, weird
15:29:04 https://www.rabbitmq.com/install-rpm.html
15:29:08 rmq provides both compatible
15:29:22 using bintray makes sense to me
15:29:25 mhm
15:30:00 one repo to rule 'em all
15:30:15 epel? we already ruled it out
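
On the Depends-On mentioned above for the tripleo-build-containers-centos-7 job: Zuul pulls a cross-repo change into the job when its URL is listed in the commit message footer. A minimal sketch with an illustrative change URL, not the actual tripleo-common review:

    # commit message of the kolla patch (footer on its own line at the end)
    #
    #   Pin RabbitMQ to the 3.8 series
    #
    #   Depends-On: https://review.opendev.org/123456
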
15:30:16 osmanlicilegi: ELK unpinned means we will currently get 7.x ?
15:30:25 dougsz: good q
15:30:27 yeah, rmq is also on GitHub - but I guess erlang is missing there :)
15:30:38 dougsz: it will be oss-6.x
15:31:01 ok
15:31:11 osmanlicilegi: cool, I have a patch for 7.x if we need it (both kolla and kolla ansible)
15:31:22 can we go 7.x directly?
15:31:29 Upgrade issues
15:31:38 and 6 is fine?
15:31:47 (ah, I remember I already asked you)
15:31:59 6 is not painful
15:32:02 We can, but it either involves going to 6 in between, or spinning up two clusters and ingesting the old one
15:32:03 5->6 ok, 6->7 ok, 5->7 not ok
15:32:13 ah ok
15:32:37 we have discussed providing both images
15:32:46 elasticsearch & elasticsearch7
15:33:01 since monasca seems to want 7
15:33:02 we can upgrade to 7 on v cycle
15:33:19 6.8 is going to be eol in November 2020, so we can go from 5 to 6, and then from 6 to 7 in V
15:33:45 good roadmap for elk
15:33:53 move on?
15:34:17 yep
15:34:41 #topic HAProxy tag - dougsz
15:34:44 dougsz:
15:34:57 So in Rocky something like: kolla-ansible reconfigure -t glance
15:35:26 Was well scoped to Glance (and common services)
15:35:53 In Stein, kolla-ansible reconfigure -t glance can now restart HAProxy, iff the HAProxy config is changed
15:36:15 So an operation with limited scope, can now take out the whole deployment if things go badly
15:36:40 It seems sensible to me that we retain the old behaviour
15:37:42 This changed in stein when we split haproxy config out
15:37:47 hmm
15:37:52 that looks bad indeed
15:38:04 Worth pointing out that this behaviour was intentional, and part of the reason for the change
15:38:33 (although mostly it was to improve haproxy config sanity)
15:38:34 yeah, but it would make sense to allow the user to specify if he wants to restart haproxy or not in the process
15:38:36 The main reason being to break the HAProxy config file up?
15:38:43 yeah, it makes sense to enable glance in haproxy when glance tag is called
15:39:20 It does, but I think the most common operation in a running deployment is to tweak service config files
15:39:23 maybe k-a should have an option like --restart-related-services
15:39:31 which rarely requires reconfiguring haproxy
15:39:48 So i would rather do kolla-ansible reconfigure -t glance,haproxy as an exception
15:39:51 what about scaling out?
15:39:57 osmanlicilegi: shouldn't 'reconfigure' imply that all impacted services will be restarted?
15:40:31 I think there are two schools of thought
15:40:47 I think scaling out is likely to be less common than tweaking service config
15:40:52 1. --tags glance should only touch glance, and I understand this limitation
15:41:10 1 is more expected I believe
15:41:11 2. --tags glance should configure glance and the things it depends on
15:41:29 2 - is it only haproxy atm anyway?
15:41:40 and the common stuff
15:41:47 which has always been the case
15:42:12 ah yeah
15:42:32 well, it would be fine if it needed only a reload
15:42:36 an option which might keep everyone happy is to add some new tags
15:42:40 glance-haproxy-config
15:42:59 so if you want haproxy config you do --tags glance,glance-haproxy-config
15:43:00 let me look up current docs on tags
15:43:06 otherwise, just --tags glance
15:43:46 or even more simple, just remove those tags for the haproxy-config imports
15:43:55 --tags glance,haproxy
15:43:56 nah, no new features around
15:44:13 yeah, that makes sense
15:44:19 and update docs accordingly
15:44:20 That sounds reasonable. My vote is to not implicitly reconfigure HAProxy when a service is reconfigured via tags, to reduce the chance of taking out the deployment
15:44:22 but that would prevent just generating haproxy config for glance
15:44:28 I like my tags limited
15:45:07 how so?
15:45:11 Is it maybe time to make deploy and reconfigure tasks behave differently? deploy = implicit reconfiguration of dependencies, reconfigure = touch only services listed in tags
15:45:52 also sounds good ^
15:46:04 +1
15:46:18 -2, feels complicated
15:46:57 i think that might introduce an opportunity to break things down the line and be very difficult to diagnose. if you call a reconfigure --tags svc and the scoped changes to "svc" are successful BUT the cascaded changes, which were not made, later break the deployment down the line
15:47:10 ^
15:47:47 this is the argument against using tags at all
15:47:48 for the haproxy case I think it's an overkill
15:48:15 mgoddard: yeah, but it's easier to reason when you know all your paths and not need to analyse both deploy and reconfigure
15:48:39 I'm reluctant
15:48:48 though I would accept it if proposed and looking sane
15:49:18 for the haproxy case -
15:49:27 ansible conditionals and tags don't mix too well
15:49:30 can we just remove those tags from haproxy and be done?
15:50:45 we could
15:51:10 Michal Nasiadka proposed openstack/kolla-ansible master: doc: fix bullets in external_ceph.rst https://review.opendev.org/704832
15:51:17 Another issue I noticed with the new HAProxy approach: you cannot customise your main haproxy.cfg template to refer to backends from haproxy/services.d, as haproxy is first started with just the single haproxy.cfg file
15:51:17 is there ever a case where you would only want to modify haproxy config for one service?
15:51:56 i recently had to change the balance algorithm for keystone only
15:52:21 mgoddard: when you deploy it
15:52:28 but you still end up restarting the dummy
15:52:34 I mean more at deployment time
15:52:35 e.g. if you want to roll out changes slowly, keystone, then glance
15:52:42 I don't think it would be a problem, if we would have --tags haproxy to generate configs for all enabled services and restart haproxy
15:53:04 although mgoddard has a case for doing it one by one :)
15:53:28 I'm just worried about removing that fine grained control
15:53:37 it's far from finegrained
15:53:51 well it's more than just all haproxy
15:54:13 if you end up restarting haproxy anyways
15:54:16 whatever it is now, it seems we need to have an option to modify haproxy config for one service at a time (or choose not to)
15:54:23 which is why it's problematic in the first place
15:54:31 then it's pita more than benefit
15:54:34 why is that problematic?
15:54:45 * osmanlicilegi bbl
15:54:48 for dougsz ?
15:54:57 haproxy restart shouldn't be a problem
15:54:58 taking down the deploy
15:55:14 well, you drop connections to all the services
15:55:20 and services and mariadb
15:55:22 and if you want to make fine-grained changes then its necessary
15:56:03 well, if there would be an option for real graceful reload of haproxy - we should be fine then
15:56:13 yeah
15:56:16 maybe there is
15:56:25 I think it accepts SIGHUP
15:56:26 yoctozepto: there is this socket transfer reload
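
For reference on the graceful reload being discussed: stock haproxy can validate a candidate config and then soft-stop the old process so existing connections are allowed to finish. A minimal sketch with illustrative paths, not what kolla-ansible's handlers currently do:

    # check the candidate config before touching the running process
    haproxy -c -f /etc/haproxy/haproxy.cfg
    # start a new process and ask the old one(s) to finish serving and exit
    haproxy -f /etc/haproxy/haproxy.cfg -p /run/haproxy.pid -sf $(cat /run/haproxy.pid)
    # the 'socket transfer' variant additionally passes listening sockets with -x <stats socket>
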
15:56:27 also, if we used a separate haproxy for mysql...
15:56:45 maybe we should look into HUP
15:57:29 yoctozepto: well, maybe we don't need to use haproxy, I think there are more intelligent solutions for load balancing galera
15:57:48 mgoddard: here is the whole story about haproxy reloads: https://www.haproxy.com/blog/truly-seamless-reloads-with-haproxy-no-more-hacks/
15:58:34 ok
15:58:36 we have 2 mins
15:59:10 not sure we have consensus
15:59:24 but let's move on
15:59:29 It looks like there is also a check mode which can validate the config - not sure if it will catch everything
15:59:45 well we have consensus to change current behaviour at least
15:59:51 #topic Ussuri goal: https://governance.openstack.org/tc/goals/selected/ussuri/project-ptl-and-contrib-docs.html
16:00:13 Before we leave, there is a 2nd ussuri cross-project goal
16:00:25 per-project PTL and contributor docs
16:00:32 we have some, we could have more
16:00:58 Anyone want to take this on?
16:01:06 maybe some fresh contributor could look up the docs and think about what can be improved?
16:01:38 I don't have a problem in writing something there, but I need to have somebody with a fresh mindset tell us what's missing :)
16:01:48 it would be nice if we had docs for contributors, cores and PTL
16:02:40 osmanlicilegi is the most recent core, but he left
16:02:43 if everybody could spend a couple of minutes and add to the Ideas section of the whiteboard what could be improved there - it's a start
16:02:45 (the meeting)
16:03:36 where have these ideas come from?
16:03:57 from mine and yoctozepto's heads - blame us :)
16:04:10 we do not have ideas here. we do as we are told
16:04:21 I moved them below kayobe feature status
16:04:38 mgoddard: :<
16:04:57 Seems like a good plan to line up things that are not yet priorities but could be in future
16:05:05 Or just other things we might want to do
16:06:13 #action mnasiadka to herd cats for contributor guide goal
16:06:20 cryptic action
16:06:24 oh crap, let it be
16:06:26 anyway, we're over time
16:06:27 I'll chase osmanlicilegi
16:06:28 thanks all
16:06:28 xD
16:06:30 #endmeeting