16:30:22 #startmeeting kolla
16:30:23 Meeting started Wed Feb 3 16:30:22 2016 UTC and is due to finish in 60 minutes. The chair is rhallisey. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:30:24 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:30:26 The meeting name has been set to 'kolla'
16:30:32 #topic rollcall
16:30:34 o/
16:30:35 hello
16:30:36 o/
16:30:41 o/
16:30:43 unicell: ping
16:30:45 o/
16:30:45 \o/
16:30:47 hi
16:30:47 o/
16:30:49 o/
16:31:37 so sdake will be back in ~15 min
16:31:50 but will continue without him for now
16:31:52 hi
16:31:57 hey Jeff
16:32:00 hi
16:32:16 jpeeler: o/
16:32:24 inc0, around?
16:32:33 hi everyone
16:32:39 o/
16:32:44 i am back now
16:32:49 but go ahead and run the agenda rhallisey
16:32:53 kk
16:32:53 been going since 3am and i'm beat
16:33:00 #topic Announcements
16:33:11 So the midcycle is right around the corner
16:33:23 just want to remind everyone it's Feb 9th and 10th
16:33:28 o/
16:33:31 sorry I'm late
16:33:37 inc0, no worries
16:33:43 #link https://etherpad.openstack.org/p/kolla-mitaka-midcycle
16:33:48 o/
16:34:01 just a reminder as to what is likely to be covered
16:34:09 sdake, did you finalize the schedule?
16:34:50 rhallisey no i did not
16:34:55 but I will do so today
16:34:57 kk
16:35:02 it's part of my work day
16:35:12 please register if you haven't
16:35:18 ^ yes that too
16:35:33 #topic Kolla-ansible config playbooks
16:35:42 SamYaple, go ahead
16:35:47 will do. thanks rhallisey
16:35:48 I'm excited for this topic
16:35:59 #link https://review.openstack.org/#/c/271126/
16:36:10 #link https://review.openstack.org/#/c/271126/
16:36:19 you might not have perms :)
16:36:28 ok
16:36:36 that patchset discusses the issue a bit
16:36:48 basically what do we do when a config file _changes_?
16:36:56 we need to restart services to pull in that change
16:37:23 the proposed patch does it inline, I don't particularly agree with that approach because then you get service restarts out of the blue
16:37:25 SamYaple agree
16:37:26 or call sighup on certain
16:37:35 inc0: yes on a few we can
16:37:40 inc0 the filesystem is readonly
16:37:45 but it's still a service shutdown (just a much quicker one)
16:37:49 once the config is in, the container needs to be restarted
16:37:55 we would need to copy the new conf file though
16:38:00 sdake: no, that's for COPY_ONCE only
16:38:09 SamYaple yes i know
16:38:13 which needs to work :)
16:38:18 default is COPY_ALWAYS afair
16:38:27 no it's COPY_ONCE
16:38:32 definitely copy once
16:38:35 ok
16:38:38 my mistake then
16:38:44 we made that decision long ago
16:38:49 but with the changes to allow sighup the community at large is moving to changing configs so we'll revisit that later
16:38:52 that's not the point of this
16:39:02 back on topic
16:39:02 that was part of the whole blueprint spec thing for ansible - that was required to get people to approve the spec
16:39:23 the config playbook would be similar to what we do with upgrades
16:39:37 if new config changes are detected it would do a rolling restart to pull in those changes
16:39:40 action - reconfigure?
16:39:47 sure
16:39:52 that's a reasonable name
16:39:53 yeah, I'm cool with that
16:40:07 so we'll reuse a lot of (not yet written) upgrade code I think
16:40:16 to maintain uptime and availability
16:40:22 although I'd say we need to find out if config changes and restart only the services we need
16:40:30 which should be easy inc0
16:40:35 a quick md5sum compare
16:40:39 so if task template == changed then restart the corresponding service
16:40:40 we do this in the haproxy container
16:40:47 no that won't work
16:40:55 because deploy changes the files as they are laid down
16:41:00 reconfigure would need a direct compare
16:41:06 ok, makes sense
16:41:21 so i think we are all on the same page here, just making everyone aware of the issue
16:41:21 still, we need to take that into account
16:41:32 no we don't need to track template == changed at all
16:41:40 since we compare running vs current config
16:41:57 one concern
16:42:00 ok
16:42:03 i want upgrades
16:42:08 really really want upgrades
16:42:12 i know they are related
16:42:26 but i am concerned separating the work will make it harder to implement the upgrades
16:42:26 config work is going to ride the coattails of upgrades
16:42:32 shouldn't this work come after?
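The compare-and-restart logic discussed above can be sketched roughly as follows. This is a minimal illustration, not Kolla's actual code: the function names are hypothetical, and it only shows why a direct content compare of running vs. freshly rendered config works where Ansible's template "changed" flag does not (deploy re-lays files even when they are identical).

```python
import hashlib


def config_changed(rendered: bytes, running: bytes) -> bool:
    """Direct compare of the freshly templated config against the config
    the running container actually uses (here via md5, as mentioned in
    the discussion)."""
    return hashlib.md5(rendered).hexdigest() != hashlib.md5(running).hexdigest()


def services_to_restart(configs: dict) -> list:
    """configs maps service name -> (rendered, running) config pair.
    Only services whose effective config actually differs should get a
    rolling restart; everything else is left alone."""
    return [name for name, (rendered, running) in configs.items()
            if config_changed(rendered, running)]
```

The point of hashing both sides is that a "reconfigure" action can be run at any time and only touch services whose config really changed, instead of triggering restarts out of the blue.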
16:42:33 ok wfm
16:42:36 then i'm good
16:42:42 as long as we aren't rolling it all together
16:43:00 unicell: ^^ when you see this
16:43:12 oops wrong topic unicell
16:43:16 ok rhallisey i'm ready to move on
16:43:27 #topic Kolla-ansible delegate_to vs run_once
16:43:28 okie
16:43:40 ok this one is for unicell
16:43:52 this is a tricky issue, i'll try to describe it quickly
16:44:10 #link https://review.openstack.org/#/c/271148/
16:44:13 that's a related patch
16:44:26 #link https://bugs.launchpad.net/kolla/+bug/1520728
16:44:27 Launchpad bug 1520728 in kolla "fatal: [openstack002] => One or more undefined variables: 'dict object' has no attribute 'stdout'" [Critical,In progress]
16:44:28 that's the bug
16:44:41 we have had several bugs on this issue and unicell finally tracked it down
16:45:11 Ansible 1.9.4 has an issue with multiple host includes and roles and our inventory
16:45:27 it is fixed in the 2.0 branch, but until 2.0 it will skip certain tasks even if it shouldn't
16:45:42 this makes certain inventory configs not usable with the bootstrap
16:45:50 groan
16:45:51 essentially it's broken for certain deploys
16:45:59 any more detail?
16:46:04 ouch
16:46:15 yea but it's technical and time consuming
16:46:17 can we get some docs on this ansible 1.9.4 deploy problem?
16:46:23 i'll go over the options we have to fix this
16:46:31 SamYaple, ya that would be good
16:46:32 details can be provided later
16:46:42 ok options then wfm
16:46:45 two options, in no particular order of preference
16:47:28 1) upgrade ansible to 2.0. Lots of software has already made the switch. A 2.0 change is not very painful (I've analyzed the impact), it would just require us to run ansible 2.0
16:47:39 (we can discuss these options in a minute)
16:47:47 1* affects upgrades
16:47:59 but in a way we can handle it I think
16:48:32 2) we can force all specific options that encounter this bug to use the first host in the appropriate group. This would fix it!
But if the first node is down, you can't run the playbooks at all
16:48:34 inc0 the plan there is deploy 1.9.4, then deploy liberty, then deploy 2.0.0, then upgrade to mitaka
16:48:54 the second option breaks high-availability in my mind
16:49:06 the first node must always be live or it all breaks
16:49:20 maybe i'm an old fuddy duddy but i hate adopting brand new 2.0.0 software for ansible 2 months prior to release when their test cases are very very weak
16:49:32 if we upgrade to 2.0...do we still need kolla_docker?
16:49:36 inc0: yes
16:49:44 we are running our own module forever
16:49:47 ^^
16:49:49 ok
16:49:57 for docker*
16:50:02 right
16:50:02 it's just too critical
16:50:11 it's something we need to own
16:50:16 ok so those are the options we have
16:50:20 neither of them are great
16:50:20 wfm, but kinda agree with Steve, upgrading to 2.0 will affect upgrades of kolla and we're awfully close to Mitaka
16:50:37 ok so let's analyze the impact of waiting for 2.0.0 until newton
16:50:38 we could also ignore the issue and just have that be the issue for mitaka
16:50:41 how about option 2, then move to 2.0 later?
16:50:42 ha is slightly degraded?
16:50:46 and fix it properly
16:50:57 sdake: if the first node is down, you can't run playbooks at all
16:51:09 certain parts of deployment are causing errors
16:51:10 ok highly degraded
16:51:12 that kind of sucks
16:51:27 is there a third option
16:51:33 not that I know of
16:51:36 such as document what types of deploys will expose this bug
16:51:37 ignore the issue for now?
16:51:44 inc0: yea that's the third
16:51:45 ignore is not an option inc0
16:51:52 hehe
16:51:56 document and ignore would be ok i think :)
16:51:56 so is there more background for the state of the code in ansible 2.0? was it a complete rewrite? (sorry don't know)
16:52:05 how about use option 1 and add a pre-check to make sure the first node is up.
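Option 2 above amounts to always delegating one-shot bootstrap tasks to a fixed, deterministic host. A tiny Python sketch of that selection rule (hypothetical helper, not Kolla code) makes the HA objection concrete: the choice is deterministic, which sidesteps the 1.9.4 run_once/delegate_to skipping bug, but everything now hinges on that one host being alive.

```python
def bootstrap_host(inventory_group: list) -> str:
    """Always pick the first host in the group for one-shot bootstrap
    tasks.  Deterministic, so no reliance on Ansible's run_once
    semantics - but if this host is down, the whole playbook run is
    blocked, which is the high-availability concern raised above."""
    if not inventory_group:
        raise ValueError("inventory group is empty")
    return inventory_group[0]
```

The "pre-check that the first node is up" suggestion from the discussion would bolt a liveness check onto exactly this selection step.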
16:52:07 but it's really hitting the ebay bugs and Hui (and others)
16:52:08 jpeeler pretty much a green field rewrite
16:52:14 what if we use 2.0 for deploys that expose the bug?
16:52:18 ignore == document and ignore, I mean let's not fix it right now and get this as priority for N
16:52:32 ah, that does make the decision tricky then
16:52:34 dratushnyy: we can't do a 2.0 and 1.9.x compatible structure for kolla
16:52:36 dratushnyy our code can only run on 1.9.4 or 2.0.0
16:52:43 if we could we would have done that already
16:52:54 also I'd really like to have 1 release deprecation for deps...
16:53:07 personally, i think we should _not_ move to ansible 2.0
16:53:11 inc0, I'm also thinking door #3 here
16:53:27 but crippling our multinode is bad too
16:53:28 i agree moving to ansible 2.0.0 prior to newton is a bad bad move
16:53:28 SamYaple, we'll have to at some point
16:53:43 we will only encounter more stuff like that
16:53:45 inc0: in newton i plan to do it immediately after midcycle
16:53:45 #1 could ruin upgrades for M, #2 is a workaround, #3 release with a limitation
16:53:55 inc0: so it has time to sit
16:54:12 sdake, you mean prior to mitaka (not newton)
16:54:14 in newton we could do it at the start of the cycle
16:54:24 SamYaple, wait till we pop the newton branch plz
16:54:34 elemoine yes prior to mitaka sorry
16:54:46 ok let's focus guys
16:54:49 inc0: newton branch means we have released newton ;)
16:54:49 and gals
16:54:57 SamYaple, can you list situations in which the problem occurs?
16:55:04 SamYaple, you know what I mean...
16:55:06 yes we need more details
16:55:12 we can do this on the mailing list if it's too time consuming
16:55:14 let me find unicell's bug
16:55:25 SamYaple let's do it on the mailing list
16:55:36 and i will schedule 20-30 mins at midcycle for this
16:55:42 so we can make a decision then
16:55:49 after we understand what situations it fails under
16:56:06 because that is a critical critical piece of missing information that is needed to make a decision
16:56:15 agreed
16:56:18 ok
16:56:25 SamYaple, you cool with starting a thread?
16:56:27 SamYaple can you take an action to start the thread?
16:56:40 yea sure
16:56:44 thanks bro
16:56:45 SamYaple, will you start the thread? (3 times so you can't refuse)
16:56:48 ;)
16:56:49 probably wait for unicell since this is his issue
16:56:54 :)
16:57:07 ok moving on
16:57:12 #topic Logging with Heka (spec and development)
16:57:19 elemoine, you have the floor
16:57:20 ok, that's me :)
16:57:26 the Heka spec got three +2's
16:57:32 SamYaple quick question
16:57:38 Can I start pushing a patch flow to Gerrit? Or should I wait for a more official approval?
16:57:39 could we somehow make the code conditional to select the "2nd" node
16:57:40 elemoine, excellent
16:57:43 if the first node was down
16:58:01 yeah it's solid imho
16:58:02 elemoine, I think generally we have all the cores vote on specs before we +a
16:58:06 like a config option "damnit select this node"
16:58:16 elemoine, can you link the spec?
16:58:28 apparently i'm lagged
16:58:32 https://review.openstack.org/#/c/270906/
16:58:38 #link https://review.openstack.org/#/c/270906/
16:58:45 elemoine you will get a review from me when this meeting concludes
16:58:46 how about we run heka in parallel to rsyslog - if you run central logging it's heka
16:58:49 elemoine, I don't want to delay it too long waiting on all cores, but would like a little more than 3
16:59:16 so all cores please review https://review.openstack.org/#/c/270906/
16:59:16 rhallisey we require 6 +2 votes
16:59:16 sure, I'm just wondering if I can start working on patches
16:59:19 after the meeting
16:59:24 and push my patch stream to Gerrit
16:59:25 specs require a simple majority
16:59:48 fyi, this is the sort of patch stream I
16:59:53 consider pushing:
16:59:55 https://github.com/openstack/kolla/compare/master...elemoine:heka-gerrit
17:00:17 elemoine don't put it on github, get it in the review queue please
17:00:22 elemoine, I'm cool with you starting to push them tbh
17:00:23 even if it's not ready
17:00:27 so people don't have to hunt for it
17:00:35 that's exactly my question :)
17:00:45 just mark workflow -1
17:00:48 can I push that to Gerrit before the spec is fully approved
17:00:50 I'm -1 on removing rsyslog just yet tho
17:00:58 ya just -1 it
17:00:59 elemoine you can push to gerrit before the spec is approved
17:01:17 inc0, why?
17:01:19 elemoine i recommend placing a -2 on your top level patch
17:01:26 or marking it [wip]
17:01:37 so someone doesn't accidentally merge it during review
17:01:41 elemoine, backward compatibility (I should change my name to Michal "backward compatibility" Jastrzebski)
17:01:44 i want heka to go in as one big patch stream
17:01:48 I want to lessen the impact on upgrades
17:01:51 inc0 lol
17:01:52 inc0: that is never a requirement
17:02:03 we never said anything about backward compatibility
17:02:03 however, the old version doesn't have central logging
17:02:05 Liberty
17:02:07 inc0 we roll forward not back
17:02:12 just like openstack
17:02:19 inc0 i think that really isn't a problem
17:02:19 for rollback of openstack you need a DB restore
17:02:31 so I'd say if you run central logging, it's heka
17:02:39 rsyslog didn't work fantastically well in liberty anyway
17:02:49 well, I know
17:02:55 not criticising
17:03:00 but if someone runs it they might build on it
17:03:06 that's my point
17:03:14 just indicating - if we can just move it to heka and be done that's one less migration our users have to deal with
17:03:15 we say "rsyslog will disappear completely in N"
17:03:50 elemoine, did you have anything else you wanted to go over?
17:04:00 yes, quick thing
17:04:00 so my personal preference is to have it as the default option, but an option for now
17:04:06 for one release
17:04:23 inc0 leave feedback on that statement in the review please
17:04:29 sure
17:04:31 inc0, people asked me if Heka could replace Rsyslog entirely,
17:04:32 right after the meeting
17:04:33 I'm still +2 on the spec tho
17:04:38 might actually be a good idea
17:04:45 and now the request is to keep Rsyslog
17:04:48 if we had the capacity to pull it off
17:05:10 elemoine, just don't change anything with it and put one if statement in configs
17:05:16 rest is the same
17:05:20 inc0, oh I see
17:05:29 I'm the first person to say let's replace rsyslog
17:05:31 it's also compatible with my patches
17:05:37 ok understood
17:05:38 let's just have at least some deprecation period
17:05:39 wfm
17:05:51 so
17:05:55 another thing
17:06:05 We will need specific Lua decoders for parsing OpenStack, MySQL and RabbitMQ logs.
17:06:15 We already have these decoders, that we use in other projects (Fuel)
17:06:17 I like lua
17:06:26 Our plan is to distribute these decoders as deb and rpm packages in the future
17:06:29 Lua is nice
17:06:39 But if we don't have time to do that by Mitaka 3 (the 4th of March), will it be ok to have these Lua decoders in Kolla, really as a temporary solution?
17:06:41 what's the license elemoine
17:06:48 elemoine, can we do this in our tree?
17:07:09 need to know the license on all software
17:07:18 I don't want to introduce a new repo dependency for what will probably be less than 200 loc
17:07:22 inc0, we have another code base that uses these decoders
17:07:27 will they be our custom decoders, or something apache license compatible?
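For context on what these decoders do: a Heka decoder parses a raw log line into structured fields (timestamp, severity, logger, message). Heka's decoders are written in Lua; the Python sketch below only illustrates the kind of oslo.log field extraction involved. The regex and field names are illustrative assumptions, not Fuel's actual decoder code.

```python
import re

# A typical oslo.log-style line looks like:
#   2016-02-03 16:30:22.123 4567 INFO nova.compute.manager [req-...] message
OSLO_LINE = re.compile(
    r"(?P<timestamp>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2})\.\d+\s+"
    r"(?P<pid>\d+)\s+"
    r"(?P<level>[A-Z]+)\s+"
    r"(?P<logger>\S+)\s+"
    r"(?P<message>.*)$"
)


def decode(line: str):
    """Turn one raw log line into a dict of named fields, or None if
    the line does not match the expected format."""
    m = OSLO_LINE.match(line)
    return m.groupdict() if m else None
```

MySQL and RabbitMQ use different line formats, which is why each needs its own decoder; the estimate of "less than 200 loc" in the discussion covers the set of them.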
17:07:30 so we'd like to share them
17:07:37 what license
17:07:37 compatible*
17:07:42 License of the lua plugins in Fuel is Apache V2
17:07:48 ASL2.0 is what makes my life easy
17:08:02 cool
17:08:20 i don't mind if they hit the repo - 200 lines of bloat for a cycle isn't bad as long as it doesn't create an upgrade problem down the road
17:08:28 so eval that
17:08:34 start a thread on the ml elemoine
17:08:37 it won't be a problem
17:08:53 sdake, ok I will
17:08:55 upgrade needs to be top of mind when people make new design changes ;)
17:08:56 bottom line is
17:09:04 if you can't upgrade it in some way, then don't do it for mitaka please
17:09:09 newton is wide open
17:09:13 we'd like to have the Lua decoders outside Kolla
17:09:13 but mitaka we are super stretched
17:09:20 but maybe not for Mitaka
17:09:32 ok i need 10 mins
17:09:39 elemoine, let's start by having them in tree as it's simplest and then we'll revisit ok?
17:09:50 rhallisey ^^
17:09:52 inc0, exactly what I'm suggesting
17:09:56 maybe 5
17:10:07 I'm done
17:10:08 sdake, roger
17:10:16 cool thanks elemoine. Nice job on the spec
17:10:27 folks please review that spec
17:10:34 let's get 'er looking sparkling pretty
17:10:37 #topic sdake's announcement
17:10:42 like a diamond instead of a lump of coal
17:10:44 not an announcement
17:10:46 just a topic
17:10:47 but
17:10:50 idk
17:11:02 i wanted to check in on how upgrades are coming along for people
17:11:15 can i get a rollcall list of where people are at
17:11:17 i'll go first
17:11:38 glance -> started implementation, will be complete by midcycle, distracted by remodel
17:11:44 heat -> won't be done by midcycle
17:12:06 nova -> I'll rebase before midcycle
17:12:13 docker -> scares the shit out of me
17:12:21 lol
17:12:35 magnum -> just began the work
17:12:37 neutron -> needs thin containers then upgrade.
thin containers done and working, but they required docker 1.10
17:12:39 #link https://launchpad.net/kolla/+milestone/mitaka-3
17:12:45 docker 1.10 is due out next week
17:12:50 (or tomorrow)
17:12:55 horizon -> done but needs some review.
17:12:56 rabbitmq -> not started yet, will try to do before mid, but without promise
17:13:08 nihilifer: there is a rabbitmq patch already
17:13:17 oh...
17:13:21 i just saw a free bp
17:13:24 i'll handle mariadb
17:13:25 but ok
17:13:31 * rhallisey did not grab an upgrade at devconf this week :(
17:13:31 nihilifer: likely someone didn't grab it
17:13:33 do we upgrade ceph?
17:13:36 will pick something else from infra
17:13:36 inc0: yup
17:13:38 ceph will be funny
17:13:47 ceph will be easy
17:13:51 inc0 we will talk about infrastructure services at midcycle
17:13:55 yeah I don't expect problems there
17:13:57 I think we should just upgrade those once per cycle
17:14:00 i've done the real --> kolla ceph migration and the kolla --> real!
17:14:03 towards the end
17:14:05 but that is just my opinion
17:14:18 we can have a more thorough discussion
17:14:21 yeah, all is good apart from qemu
17:14:23 thanks for the roll call folks
17:14:28 looks like some progress is being made
17:14:37 qemu is so disruptive that I'd like to keep that separated
17:14:41 try to get at least one patch in the queue (and working)
17:14:55 inc0 ok we can talk about upgrades at midcycle
17:15:02 I am going to schedule copious sessions :)
17:15:06 let's not beat dead horses
17:15:16 I just want people to be prepared to discuss it with real-world experiences
17:15:20 sdake: but that's the thing me and inc0 do best!
17:15:23 which it sounds like we are
17:15:32 rhallisey all done thanks
17:15:36 sdake, cool
17:15:36 swift -> started but slow, hope to have something up before midcycle
17:15:41 I don't mind if they're close to dead as well
17:15:41 oh wait
17:15:44 pbourke_ going
17:15:49 v. busy this week before I travel for midcycle
17:16:05 what's that stake?
17:16:13 pbourke_, cool thanks
17:16:15 (heh, auto correct)
17:16:21 pbourke_ i meant you were speaking now
17:16:32 mistake ;)
17:16:33 ok that's me done :)
17:16:33 that's my other nickname :)
17:16:39 ok moving on
17:16:43 #topic Open Discussion
17:16:49 i need to bring something up here
17:16:56 this patch https://review.openstack.org/#/c/274154/ please comment away on it
17:17:02 it's had some controversy already
17:17:18 that's all for me (i think)
17:17:42 SamYaple 20 second review, i am in favor of a rename
17:17:47 i'll dig more into it
17:17:54 ya that wfm
17:17:58 +1
17:18:03 rename is cool if we provide a migration plan
17:18:03 well comment there :P
17:18:08 which shouldn't be hard
17:18:17 SamYaple let me properly review it but i will do so after this meeting
17:18:18 inc0: that's discussed in the review
17:18:18 just remove the old and create the new container
17:18:22 yup
17:18:32 idempotent remove FTW
17:18:44 cool, anyone have anything else?
17:18:50 this container doesn't do anything anyway, it just sits there after bootstraps and all
17:18:51 how will we do that removal/creation? by some playbook or wat?
17:18:56 what*
17:19:05 nihilifer: yea an inline playbook
17:19:08 it's idempotent
17:19:11 nihilifer, the upgrade play would be my guess
17:19:17 not a separate playbook
17:19:24 k
17:19:35 upgrade will remove the old container and add the new, easy
17:19:42 deploy will just well...deploy
17:19:43 like sunday morning
17:19:49 should it be part of the patch then?
17:19:56 no
17:20:00 whatever
17:20:09 as long as it lands in the cycle
17:20:10 we don't have a common upgrade playbook yet
17:20:16 but we will
17:20:23 well we kinda do
17:20:30 I mean we will
17:20:38 SamYaple you mean common between mesos and ansible?
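The "remove the old and create the new container, idempotent remove FTW" migration can be sketched as below. This is a hypothetical Python illustration of the idea, not the kolla_docker module's actual API: `client` stands in for whatever Docker interface the playbook task drives, and the point is that running the step twice is harmless.

```python
def rename_container(client, old_name: str, new_name: str) -> bool:
    """Migrate a container to a new name: idempotently remove the old
    one (absence is fine) and create the new one if missing.  Returns
    whether anything changed, so a repeated run reports no change -
    which is what makes it safe to fold into the upgrade play."""
    changed = False
    if client.exists(old_name):
        client.remove(old_name)   # idempotent: already-gone is a no-op
        changed = True
    if not client.exists(new_name):
        client.create(new_name)
        changed = True
    return changed
```

Because the bootstrap container in question "just sits there" after bootstrapping, the rename carries no service downtime; deploy simply creates the new name from the start.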
17:20:41 and you might as well start ;)
17:20:43 ok
17:20:48 sdake, common as in a common role
17:20:50 sdake: common role
17:20:54 got it
17:20:59 oh hey i do have another topic
17:21:00 kolla_ansible, rsyslog
17:21:03 ya
17:21:08 kolla-mesos and kolla-ansible and kolla overlap
17:21:12 nihilifer: ^
17:21:18 fix upgrades
17:21:23 then we fix repos
17:21:44 yeah let's move repo shuffling to N please
17:21:57 yes, it's not that urgent for me as well
17:22:40 I'd like it for M, but upgrades have prio here
17:22:42 i am not opposed to moving the repos around for mitaka, it can be done, we have the capacity, but i absolutely want upgrades done first before any repo movement takes place
17:23:06 i don't think anybody can disagree with that
17:23:33 that's not the overlap i was talking about specifically
17:23:36 but yes to all of that
17:23:46 i was talking about the tool and python code overlap
17:23:51 mesos dups _a lot_ of code
17:24:00 code that exists in the kolla (not kolla-ansible) repo
17:24:03 and that is fixed how SamYaple?
17:24:09 import kolla
17:24:16 rather than being its own code
17:24:17 right, so repo shuffle?
17:24:22 no...
17:24:29 no need to split the repo to do what i say
17:24:36 ok cool then
17:24:41 i'm good with that kind of change
17:24:48 as long as it doesn't slow down upgrades :)
17:24:50 nihilifer: need your thoughts on this
17:24:57 yeah, this is clean
17:25:00 nothing to do with upgrades sdake
17:25:12 i understand
17:25:32 I just want to be crystal clear on this point folks - if we don't deliver upgrades in mitaka, it won't be pretty for us
17:25:46 the community will lose trust in our ability to execute, and right now they think we are rockstars
17:25:50 SamYaple: ok, i think we can try with importing kolla
17:25:54 the operators won't take us seriously
17:26:03 so upgrades, upgrades, upgrades
17:26:03 and in some far future think about a better way of sharing commons
17:26:08 nihilifer: cool no rush, i just don't want to diverge too much
17:26:27 sdake, please do talk to your docker people, that's the single biggest issue I have with upgrades now
17:26:32 well if kolla-mesos has a dep on kolla then this should work easy for sharing... no?
17:26:55 I'm ok if that will happen in the N cycle for us, no one runs Mitaka from day one anyway
17:27:10 but we need to be able to upgrade docker without restarting qemu imho
17:27:37 SamYaple: not exactly - there may be problems with sharing commons between kolla-ansible and ansible just for deploying mesos
17:27:50 and in deep N development - for k8s probably
17:27:55 inc0 i got people working on that
17:27:59 but that's something to think about in N
17:28:03 inc0 but fwiw it's out of our control
17:28:10 nihilifer: yea i'm not sure about that part, i was referring to scripts and common.py and such
17:28:33 yep, for strictly pythonic stuff, importing kolla may work
17:28:39 sdake, I know that it will be tricky, I moved some strings on my end as well
17:28:42 2 minutes guys just an fyi
17:28:54 ok i'm out! see you in #kolla
17:28:56 but if we force people to restart vms for upgrade..that will be bad.
17:29:01 imho
17:29:18 inc0: +1
17:29:20 ok folks thanks for coming :)
17:29:20 I've been on too many ops summit upgrade sessions to say it won't
17:29:21 let's spill over to kolla
17:29:27 inc0 +0 atm :)
17:29:33 thank you everyone
17:29:36 thanks
17:29:36 :)
17:29:39 #endmeeting kolla