15:59:00 #startmeeting kolla
15:59:01 Meeting started Wed Mar 22 15:59:00 2017 UTC and is due to finish in 60 minutes. The chair is inc0. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:59:02 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:59:05 The meeting name has been set to 'kolla'
15:59:14 woot o/
15:59:20 #topic rollcall
15:59:27 o/ woot
15:59:28 woot
15:59:32 o/
15:59:32 o/
15:59:37 o/
15:59:38 welcome duonghq ;)
15:59:39 o/
15:59:48 o/
15:59:49 thanks egonzalez
15:59:49 Hi all
15:59:53 o/
15:59:56 o/
16:00:01 o/
16:00:11 o/
16:00:20 \o/
16:00:49 o/
16:01:03 w00t
16:01:24 o/
16:01:33 egonzalez: you gotta wait for the official announcement :p
16:01:50 yup, so I'll just do it :P
16:01:55 #topic announcements
16:02:05 o/
16:02:07 Welcome duonghq to our core team! :)
16:02:22 duonghq: welcome, nice job
16:02:25 o/
16:02:28 o/
16:02:30 w00t w00t
16:02:40 nice duonghq
16:02:50 congrats duonghq
16:02:52 ;)
16:02:56 grats dude :)
16:02:57 duonghq: welcome :)
16:03:01 o/
16:03:01 quick everyone take advantage and get your +2s in
16:03:05 >:)
16:03:14 mnaser: >:(
16:03:16 if only the gate were working :)
16:03:17 but congrats! :-P
16:03:26 thanks inc0, pbourke, egonzalez, Jeffrey4l_, akwasnie, sdake and all the other cores who gave me support, I'll do my best for our deliverables
16:03:32 congrats duonghq!
16:03:44 any community announcements?
16:03:45 duonghq would like to see ya core on kolla-kubernetes at some point too :)
16:03:49 we have a busy agenda today :)
16:03:53 sdake, sure
16:03:57 yup i have one
16:04:11 go on
16:04:15 since the ptgs were pretty disruptive to our kolla-kubernetes schedule, I moved 0.6.0 to 4-15-2016
16:04:19 (from 3-15-2017)
16:04:21 rather 4-15-2017
16:04:26 enjoy :)
16:04:49 ok moving on to agenda items
16:04:51 #topic unwedging the gate (Jeffrey4l, mnaser)
16:05:00 gentlemen, you have the floor
16:05:14 right so: gate was broken by bifrost (for the second time)
16:05:30 drop bifrost from voting gates?
16:05:32 the first time we weren't pointing to the right python exec and they started installing system packages which broke things
16:05:37 inc0 that was my suggestion
16:05:39 i have no idea for this issue right now.
16:05:44 the second time, it seems to be related to ansible_env.SUDO_USER missing
16:06:06 the only gating that happens is the image build
16:06:15 i tried changing "ansible_env.SUDO_USER" to root, but another issue happens
16:06:16 i proposed and merged a patch which i thought would fix it (missing defaults): https://review.openstack.org/#/c/447713/
16:06:22 we can *temporarily* unblock the gate by dropping bifrost from it
16:06:27 'dict object' has no attribute 'bootstrap'"
16:06:35 sdake, yes. we can
16:07:11 Jeffrey4l_ interesting, i didn't try to fix that... i did some digging on monday, but tuesday and today have been busy in $job so yeah.
16:07:26 I think that's a good discussion to have - having voting and non-voting build gates at the same time
16:07:32 i will try to find more later.
16:07:34 voting with a lower number of projects
16:08:05 on master we depend on lots of other projects, which may break our jobs easily ;(
16:08:16 well during this whole thing i was thinking it would be really cool if we added checks for each project we build images for (and the job builds images for that specific project only)
16:08:42 if bifrost images were built on bifrost reviews, it could give them some indication of "hey you broke something". it could either be $project filing a bug saying "we changed this, please reflect your stuff accordingly"
16:08:50 or they can realize that they did break something somewhere
16:09:29 mnaser: issue is, we're bound to what is in tarballs.o.o today
16:09:29 but again, going back to our issue
16:09:30 have we had an internal docker registry? I cannot recall, sorry. But mnaser's idea needs this
16:09:31 mnaser, like devstack does?
that will be nice
16:09:31 that's one
16:09:54 but going back to the main subject
16:10:16 i think putting bifrost aside to stop the project from halting for now..
16:10:19 i would be behind it
16:10:19 inc0, based on the tarballs.o.o site, the project could build its new image (bifrost) + tarballs.o.o images.
16:10:21 the reason is there is another issue apparently with python-mysqldb which i saw on launchpad
16:10:51 and honestly, the bifrost images seem to have a lot of, uh, hacky things.
16:11:14 yeah, that's why I'd be ok to drop it from voting
16:11:23 https://github.com/openstack/kolla/blob/master/docker/bifrost/bifrost-base/Dockerfile.j2#L29-L31
16:11:24 beyond this, based on inc0's idea, we can set a priority on building images in the master branch, i.e. bifrost failed, that's OK and +1; nova failed, critical -1
16:11:28 as an example
16:11:29 and add non-voting gates with more images
16:11:59 i like that idea, 2 jobs
16:12:01 Jeffrey4l_, you mean we have mandatory and optional images inside our gate script?
16:12:01 inc0, we can re-use the current jobs.
16:12:11 *will have
16:12:14 Jeffrey4l_: yeah, and just change profile confs
16:12:16 one that builds core, one that builds $world
16:12:31 i agree with this
16:12:39 inc0, if we add extra jobs, we have to add centos/ubuntu/oracle ** binary/source = 6 jobs.
16:12:48 yeah
16:12:56 I don't think that's a big issue
16:13:01 oraclelinux jobs take forever and it would be nice if we had mirrors :-(
16:13:14 pbourke: ^ told ya :P
16:13:26 ;/
16:13:41 let's do this
16:13:45 also an interesting aspect is this might not be as bad because
16:13:46 they take ~10 mins more than centos from what I've seen
16:13:53 1. drop bifrost as it blocked our dev
16:13:56 from voting
16:14:09 then how about the tarball/registry push?
16:14:16 if we build $core in one job, and $non-core in another job, we only duplicate the base image building, so we don't have that much "wasted" resources
16:14:17 2.
create non-voting gates
16:14:24 Jeffrey4l_: good question
16:14:40 yeah that's a tricky one
16:14:48 and this should only happen in master.
16:14:51 is there any way to "soft-fail" a job?
16:15:02 we should keep all voting in stable branches.
16:15:21 inc0, i guess no. mnaser do you have any idea on "soft fail"?
16:15:31 i don't think there is such a thing
16:15:34 like, we have some notion of fail, but it won't block
16:15:35 it's a pass or fail, voting or non-voting
16:15:41 yeah I guess
16:15:48 inc0, we can define soft fail ;)
16:15:49 i think we handle this in our scripts
16:16:01 ok, my suggestion (modified)
16:16:03 just like what i said :)
16:16:08 let's drop bifrost from voting asap
16:16:12 and start an ML thread
16:16:14 is soft-fail hard to track issues with?
16:16:16 build images, if $core images fail, exit 1, if $core pass and $non-core fail, exit 0 but push what you got
16:16:17 it needs more discussion
16:16:20 great.
16:16:21 +1
16:16:40 mnaser, yes. i prefer to do this.
16:16:43 mnaser: can you publish a patch with the bifrost drop?
16:16:53 shooure
16:17:00 many thanks good sir
16:17:23 btw it's soft fail I was thinking of, what mnaser described
16:17:32 mnaser, drop it and please file a bug to track this ;)
16:17:47 can i use the same one i had for the gate failures?
16:17:56 only thing is we somehow need to have the info that it soft-failed visible somewhere
16:18:02 mnaser: yeah
16:18:06 ya. i think it is OK.
16:18:16 https://bugs.launchpad.net/kolla/+bug/1674483
16:18:16 Launchpad bug 1674483 in kolla "Bifrost failing because of missing SUDO_USER" [Critical,Confirmed]
16:18:31 ok let's move on
16:18:35 inc0, ya. i'd like something soft-fail to be supported.
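[editor's note] The "soft fail" rule mnaser spells out above (core image failures fail the job; non-core failures are only reported, and everything that built still gets pushed) could be sketched like this. This is an illustrative sketch, not the actual kolla gate script; the image names and the core/non-core split are made up for the example:

```python
# Hypothetical sketch of the "soft fail" gate rule discussed above:
# a failed core image fails the job (exit code 1); failed non-core
# images are only reported, and whatever built is still pushed.
CORE_IMAGES = {"base", "mariadb", "rabbitmq", "keystone", "nova", "neutron"}

def gate_exit_code(build_results):
    """build_results maps image name -> True (built ok) / False (failed)."""
    failed = {name for name, ok in build_results.items() if not ok}
    core_failed = failed & CORE_IMAGES
    soft_failed = failed - CORE_IMAGES
    for name in sorted(soft_failed):
        # visibility for soft failures, as requested in the discussion
        print("SOFT FAIL (non-voting image): %s" % name)
    # push whatever succeeded either way; only core failures are fatal
    return 1 if core_failed else 0
```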
16:18:42 (sorry, busy agenda)
16:18:47 #topic Native helm config maps (sbezverk)
16:18:51 sbezverk: go ahead
16:21:09 ok, configmaps, there was a proposal to use ansible
16:21:31 #link https://review.openstack.org/#/c/448258/
16:21:44 o/
16:21:46 to clarify, I think the proposal was to temporarily use ansible as we are now, but move it closer to kolla-kubernetes.
16:21:48 to generate configmaps, but imho it is not ideal. I would still like to explore
16:21:59 helm charts to generate them
16:22:05 sbezverk: +1
16:22:26 I think helm based configmaps are a better place to go long term.
16:22:29 sbezverk: issue with this is... helm charts are not ideal today and porting it all to helm will be a lot of work
16:22:37 based on what I see, helm has a native way to support this specific functionality
16:22:42 short term, just moving the code from kolla-ansible -> kolla-kubernetes would reduce the breakage we see periodically.
16:22:43 without things like parent values
16:22:59 I do not see any reason not to use it, unless people know something that prevents it
16:23:07 i don't know a lot about the k8s stuff, but i would love to have a shared lib/repo rather than us maintaining the same thing in two places
16:23:07 just a lot of work
16:23:28 mnaser: that was the idea behind using kolla-ansible to generate configs for k8s
16:23:34 what I want to prevent is yet a 3rd required workflow... ansible for generating some config, kollakube for some other resources, and helm for yet more.
16:23:40 but it turns out that the configs in fact differ
16:23:52 well, if I had the right syntax, it would be a chart common to ALL services with only 6-8 lines of code
16:23:59 kfox1111: I'd really like to get rid of kollakube tbh
16:24:08 inc0: I fully agree.
16:24:19 +1
16:24:23 inc0: but I'm saying, it should go before we add yet another kollakube.
16:24:28 or at the same time.
16:24:45 my plan with ansible is to repeal and replace kollakube
16:25:00 and it's gonna be great
16:25:01 it makes sense to have genconfig moved, we should not even try to implement it, at least short term
16:25:01 100% repeal, not 50%, I'm saying.
16:25:15 kfox1111: yeah
16:25:20 but for configmaps using helm seems logical to me..
16:25:40 sbezverk: I agree with you. but one of the stated goals is to allow microservices to be configured by any mechanism.
16:25:48 I think we should provide a helm based config option.
16:25:53 sbezverk, i agree 100%
16:26:09 but if others want to permanently support an ansible based one, I wouldn't block it.
16:26:19 my question tho - we need to explore how helm would deal with self-prepared configs
16:26:35 inc0: not sure it needs to?
16:26:37 I see 2 approaches here
16:26:47 kfox1111: sure thing
16:26:53 kfox1111: I'd say yes, there are companies that handcrafted their confs
16:26:54 user uploads the configmaps, the microservices consume them.
16:27:09 inc0: sorry, that was ambiguous.
16:27:20 I meant, I'm not sure anything has to change today to support the use case. we already do.
16:27:39 kfox1111 +2
16:27:40 yeah, today we generate tons of configs and then just add them to k8s
16:27:53 we just mount whatever matches the needed name
16:27:59 that or just add your own
16:28:04 it does not matter what you used to generate them
16:28:26 but if we move configmaps to charts, won't that be an issue?
16:28:34 I mean... is a configmap a microservice?
16:28:46 (I'd say that's a step too far tbh)
16:29:05 inc0: we can bundle configmap charts
16:29:12 into corresponding services
16:29:27 for example providing a complete deployable package
16:29:28 but then won't helm re-create resources?
16:29:32 sbezverk, you mean configmap will be one of the chart templates?
16:29:41 duonghq: yes
16:29:55 inc0: as a dependency
16:30:06 inc0: no. the configmaps would be separate charts. the user can launch them too, or not.
16:30:09 if it's inside the chart, it won't be a dependency, right?
16:30:10 inc0: I have not tried it yet, but it should be doable
16:30:23 like, maybe we create helm/configmaps/xxxxx
16:30:26 how do we deal with node-specific config? or will we suffer that?
16:30:39 ehh
16:30:40 kfox1111: exactly what I think too
16:30:46 and then an init container doing sed :/
16:30:49 duonghq: group specific config could be handled by instancing.
16:31:04 I must say I don't like this. A chart per configmap is... overkill
16:31:04 inc0: you cannot completely get rid of sed, ever!!
16:31:12 helm install nova-compute-configmap --set element_name=foo
16:31:17 helm install nova-compute --set element_name=foo
16:31:47 sbezverk: because we need to put the interface in the configmap
16:31:51 inc0: there are tons of parameters known only at run time
16:32:01 and that's not something k8s supports very well
16:32:03 inc0: that's again the lowest level. you can wrap them up in grouping charts if you don't care about the details.
16:32:06 inc0: there is no interface in kube
16:32:19 inc0: only IP
16:32:20 how about a static pod providing such a value?
16:32:21 sbezverk: yeah, but ovs doesn't care about what is or isn't in kube
16:32:25 inc0: we handle that usually with init containers. it already works.
16:32:28 which are dynamically allocated
16:33:08 ok, so back to the topic at hand
16:33:10 so run time config modification is inevitable
16:33:17 how do we do configmaps in helm? I dunno yet
16:33:33 sbezverk: but sed isn't a great tool to do it imho
16:33:36 my vote is /helm/configmaps/xxx charts.
16:33:47 inc0: the idea is to use a common config chart
16:33:47 we can provide templates for the most common things we need to configure.
16:34:09 which will pull all config files inside of a configmap
16:34:16 generated by kolla genconfig
16:34:17 users can override settings via the cloud.yaml settings, or
16:34:21 upload their own configmaps.
16:34:29 one chart to have just configs?
I don't like that tbh
16:34:40 I do not like putting everything in cloud.yaml, it'll be a very huge file
16:34:46 sbezverk: you're thinking one chart for all configmaps?
16:34:57 kfox1111: no
16:35:06 ok.
16:35:14 one chart per /etc/kolla/neutron-server
16:35:16 I think configmaps should be part of the microservice chart
16:35:19 sbezverk: +1
16:35:23 one chart per /etc/kolla/neutron-openvswitch-agent
16:35:27 etc
16:35:29 duonghq: +1
16:35:33 inc0: what does that buy you?
16:35:56 better correlation between a microservice and the config it uses
16:36:08 inc0: that assumes you will be using that configmap.
16:36:12 we will effectively double the number of microservices, which is significant today
16:36:22 inc0: so?
16:36:24 most people will
16:36:28 use the base configmap
16:36:41 computekit-configmap
16:36:51 inc0: most people will use service or compute
16:36:55 can aggregate them all up so those that don't want to think about them, don't have to.
16:36:56 kit and we can bundle
16:37:04 configmaps there as dependencies
16:37:07 right.
16:37:08 yeah
16:37:17 sbezverk, +1
16:37:40 service/neutron can bundle the configmaps with helm conditionals.
16:37:43 but, will we have configmaps @ microservice level or also service level?
16:37:51 I know we can do all that
16:37:52 embeded_configmaps=true
16:38:20 duonghq: the idea is to have configmaps at the same level
16:38:24 as microservices
16:38:38 and then bundle them on an as-needed basis
16:38:40 hey sorry was otp :)
16:38:49 sbezverk, wfm
16:38:58 ehh... idk, I think a chart per every resource is a bit of unneeded churn
16:39:01 maybe not a bit
16:39:24 microservice, fine, but configmap? seems ugly to me
16:39:46 but that is just my personal opinion, I didn't though through fully
16:39:52 inc0: folks like tripleo are now talking with us, because we're staying neutral to things like config generation.
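[editor's note] The "one chart per /etc/kolla/<service>" idea above amounts to dumping each service's generated config files, as-is, into one ConfigMap. A minimal Python sketch of that mapping, assuming a kolla-genconfig-style directory layout; the `kolla` namespace and naming scheme are illustrative, not the project's actual tooling:

```python
import json
import os

def configmap_manifest(conf_dir, namespace="kolla"):
    """Build a Kubernetes ConfigMap manifest dict from one
    /etc/kolla/<service> directory, dumping each config file in
    as-is (no templating). Names/namespace here are assumptions."""
    name = os.path.basename(conf_dir.rstrip("/"))
    data = {}
    for fname in sorted(os.listdir(conf_dir)):
        path = os.path.join(conf_dir, fname)
        if os.path.isfile(path):
            with open(path) as f:
                data[fname] = f.read()
    return {
        "apiVersion": "v1",
        "kind": "ConfigMap",
        "metadata": {"name": name, "namespace": namespace},
        "data": data,
    }

# json.dumps(configmap_manifest("/etc/kolla/neutron-server")) produces a
# manifest that could be fed to `kubectl create -f -`
```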
16:39:57 think through*
16:39:59 I would suggest trying both approaches
16:40:09 both - no
16:40:10 to gain a better understanding of the pros and cons
16:40:16 ahh try
16:40:19 sorry misread you
16:40:23 following that approach lets us continue to be unbiased on config management technologies.
16:40:31 inc0 why no, you said you wouldn't be married to any ansible implementation
16:40:34 just try, cause what I just said was theory ;)
16:40:38 yeah, a third approach is to assume existing configmaps and provide a tool to create them
16:40:52 sdake: we're not talking about ansible
16:41:03 ok
16:41:04 which is what we do today
16:41:11 just the tools are ugly
16:41:17 what are the 2 approaches
16:41:20 yeah.
16:41:24 sorry - i could scrollback but it takes awhile
16:41:31 2 one liners will do
16:41:49 configmap as a microservice on its own, or part of an existing microservice
16:41:57 (sorry guys but there are 4 more items and something i'd like to bring up in the open discussion and $time -- just a hint :x)
16:42:04 I'm concerned about something like config merge and override as in kolla-ansible, not sure if we need it and how we do it efficiently
16:42:04 i think we want configmap as a separate helm chart right?
16:42:22 yeah. separate.
16:42:32 we don't know what we want yet
16:42:37 that seems a bit overkill?
16:42:38 inc0 i think we do ;)
16:42:39 I'm not too hot on a separate chart
16:42:40 that is what we suggest to try
16:42:52 i say straw man it, and let the design shake itself out based on how many times you bump into the rough edges
16:43:00 lazyPower ++
16:43:01 seems like conjecture otherwise
16:43:31 yeah, and in the meantime I'll just move the ansible generation as we already have the code
16:43:45 inc0 wfm - we need to eject kolla-ansible as a dep soon - it's annoying :)
16:43:46 inc0: we aren't talking about ansible ;)
16:43:50 99% of the work is getting the configmaps into gotpl.
16:43:58 1% is where in the fs they land.
16:44:07 yeah
16:44:13 I'm afraid of gotpl :(
16:44:22 inc0: hehe.
it's... interesting.
16:44:26 one path forward is to focus on what goes in the gotpls
16:44:28 it's very very similar to liquid templates
16:44:31 and then sort out where to put them through iteration
16:44:44 (time, guys)
16:44:45 kfox1111: atm we do not even need to do ANY templating
16:44:47 not exactly like it, but very close. so if you have some chops in jinja/liquid you should be on the right path.
16:44:51 been a while since I used a polish notation language before gotpl.
16:45:04 we can take config files as is and dump them into the corresponding configmap
16:45:13 sbezverk: we will need to do some I think. iscsi vs ceph, etc.
16:45:13 ok peeps - lots of peeps complaining we should have timeboxed this session, can we move on :)
16:45:22 sbezverk: mostly minor things though.
16:45:33 #topic Deployment guide
16:45:49 kfox1111: but now we do not do it at all, so the first step would be to match what we do now
16:45:50 deployment guide has lots o' -1s
16:45:57 thanks for the reviews folks
16:46:06 I'm going to give it a spa treatment _today_
16:46:12 sbezverk: we don't do it because genconfig does it. but if genconfig is getting replaced by helm, then we need to do it.
16:46:17 would appreciate people that are interested adding themselves as a reviewer
16:46:25 I'm not sure about copying the etherpad directly into a review like that?
16:46:29 to follow progress
16:46:37 I found it a little confusing..
16:46:46 Because there are embedded comments.
16:46:47 rwellum it won't be after i update it
16:46:59 sure
16:46:59 rwellum i agree it's confusing now
16:47:09 update is in progress on my disk now
16:47:20 there are 40 comments in gerrit - so it takes awhile to get through
16:48:03 https://review.openstack.org/#/c/447731/ please add yourself as a reviewer if you want to follow the work or have contributions to make
16:48:06 thanks inc0
16:48:18 ok, so moving on? :)
16:48:29 yup
16:48:37 sdake: practice showed that the most efficient approach is to get together and go step by step, people can follow in their environment and share comments in real time..
16:48:38 #topic KS rolling upgrade review
16:48:42 inc0: sdake: patch needs reviews from cores https://review.openstack.org/#/c/425446/ it depends on the mentioned patch of kolla
16:48:44 spsurya_: floor is yours
16:48:54 sbezverk wfm - we can do that after i get the doc cleaned up
16:49:04 sbezverk so we have some sanitary starting point
16:49:06 inc0: need core review
16:49:19 https://review.openstack.org/#/c/425446/
16:49:42 that is all from my for today
16:49:49 my side*
16:50:04 ok :)
16:50:08 should we rebase and try to get all gates green before moving on for this ps?
16:50:18 only ubuntu-binary failed
16:50:22 ok, I'll bump post-ptg bps to next week
16:50:33 duonghq you can just recheck - no need to rebase
16:50:40 duonghq: seems like that
16:50:55 #topic open discussion
16:50:58 duonghq recheck automatically does a rebase - tested and confirmed by many people
16:50:59 https://review.openstack.org/#/c/447524/
16:51:04 dockerhub publishers in infra
16:51:15 i got some feedback from the infra team, i wanted to ask some folks at kolla to give some comments/feedback
16:51:15 mnaser ++ thanks for kicking that off :)
16:51:21 inc0: open discussion - is this where i can bring up a topic from last week that got postponed?
16:51:24 sdake, I'll try, last time I did recheck, it didn't rebase, I'm not sure
16:51:37 so i can make another revision to the review to throw towards the infra folks
16:51:39 i will release the next z stream for the newton and mitaka branches. any advice or ideas?
16:51:39 duonghq i'll talk to you after the meeting on this topic, ok?
16:51:45 lazyPower: yeah
16:51:48 (that's all i had to say, just wanted to put people's eyes on it)
16:51:54 sdake, wfm, thanks
16:52:08 inc0, ^
16:52:11 could folks in kolla interested in the dockerhub pushing add yourselves to that review mnaser linked?
16:52:12 TIA :)
16:52:25 Jeffrey4l_: hold on a sec
16:52:43 kfox1111: lazyPower has a different way to deploy k8s on ubuntu right? :)
16:52:45 I'd like to re-propose the topic of adding the Canonical Distribution of Kubernetes to the Kolla CI system. We'll be happy to run it, do the integration path, and submit results so you get warm fuzzies knowing kolla will always work on top of the CDK, which is the officially supported method of deploying k8s on ubuntu.
16:53:02 yeah, sorry was typing out that short novel ^
16:53:13 lazyPower are you the fella we met at the PTG from Canonical?
16:53:16 inc0: ubuntu has a product now for k8s.
16:53:18 sdake: correct
16:53:28 lazyPower nice to see ya online and in the community :)
16:53:42 sdake: well one of about 3 engineers that were in/out of the room anyway :) but yeah i was the primary driver of the talks for CI integration
16:53:51 lazyPower: yeah, that would be cool.
16:53:54 lazyPower we have typically stayed away from third party gating, although this is because people misunderstand that at some point it becomes voting
16:53:54 Thanks sdake, glad to be here
16:53:55 mancdaz, will review the specs. thanks for the work.
16:53:56 Jeffrey4l_: we should release z streams for ocata at the same time
16:54:10 lazyPower so as long as you understand (infra) won't allow it to be voting, i think that is a great idea
16:54:48 lazyPower: what we need is k8s and ceph deployed, can you provide this part?
16:54:59 yes. but i am thinking of leaving another week for the ocata branch.
16:55:07 sdake: this is contrary information from what i was told, which is not entirely a deal breaker but this means i have to take this new info back to management so they can weigh in. but i don't think this will be a huge issue
16:55:17 will release 4.0.1 next week.
16:55:21 inc0: we can, we already deploy k8s + have relations to ceph for RBD based PV enlistment
16:55:25 lazyPower openstack infrastructure just won't allow third party gates to vote
16:55:33 lazyPower it's not something this team has any control over
16:55:50 sdake: that's understandable. I came back from the PTG with the understanding that once integrated we could be a voting gate
16:55:54 lazyPower by vote I mean block a review
16:55:55 lazyPower: alternatively to 3rd party non-voting we can just write it in infra
16:55:59 which means i got bad info, and i'm going to have to amend that
16:56:00 as regular infra gates
16:56:17 lazyPower join me in openstack-infra after, and we can confirm with the team
16:56:21 sure thing
16:56:23 I understand it's all opensource
16:56:29 lazyPower right - not sure who you got the bad info from, this is why we have avoided third party gates in the past :)
16:56:43 That's basically all I had, was to get the ball rolling here, and see what follow up steps we needed to take to even get started.
16:57:03 lazyPower the followup step is communicating with openstack-infra about how to do third party gating
16:57:20 lazyPower assuming you're fine with a nonvoting gate
16:57:26 lazyPower: thanks for working on it. more testing on things users really care about is always a good thing.
:)
16:57:56 lazyPower happy to facilitate the conversation if anyone is around in openstack-infra in about 1 hr :)
16:58:09 lazyPower or any of the cores can do it now if they are available
16:58:15 I'll join you in a second there
16:58:27 inc0 can handle it then, i've got a meeting in 2 minutes :)
16:58:30 anyway, we're reaching the end times
16:58:42 ok, in 1 hr then
16:58:43 END OF DAYS -- DADA
16:58:57 inc0 infra knows you as well as me, you can handle it :)
16:58:59 thank you all for coming
16:59:02 kk
16:59:03 thanks everyone
16:59:08 #endmeeting kolla