15:59:00 <inc0> #startmeeting kolla
15:59:01 <openstack> Meeting started Wed Mar 22 15:59:00 2017 UTC and is due to finish in 60 minutes.  The chair is inc0. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:59:02 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:59:05 <openstack> The meeting name has been set to 'kolla'
15:59:14 <egonzalez> woot o/
15:59:20 <inc0> #topic rollcall
15:59:27 <duonghq> o/ woot
15:59:28 <jascott1> woot
15:59:32 <sdake> o/
15:59:32 <kfox1111> o/
15:59:37 <mnaser> o/
15:59:38 <egonzalez> welcome duonghq ;)
15:59:39 <rwellum> o/
15:59:48 <asettle> o/
15:59:49 <duonghq> thank egonzalez
15:59:49 <blallau> Hi all
15:59:53 <spsurya_> o/
15:59:56 <sbezverk> o?
16:00:01 <sbezverk> o/
16:00:11 <zhubingbing> o/
16:00:20 <Jeffrey4l_> \o/
16:00:49 <akwasnie> o/
16:01:03 <pbourke> w00t
16:01:24 <mandre> o/
16:01:33 <pbourke> egonzalez: you gotta wait for the official announcement :p
16:01:50 <inc0> yup, so I'll just do it:P
16:01:55 <inc0> #topic announcements
16:02:05 <lazyPower> o/
16:02:07 <inc0> Welcome duonghq to our core team! :)
16:02:22 <pbourke> duonghq: welcome nice job
16:02:25 <vhosakot> o/
16:02:28 <kfox1111> o/
16:02:30 <vhosakot> w00t w00t
16:02:40 <spsurya_> nice duonghq
16:02:50 <Jeffrey4l_> congrats duonghq
16:02:52 <zhubingbing> ;)
16:02:56 <sdake> grats dude :)
16:02:57 <akwasnie> duonghq: welcome:)
16:03:01 <bogdando> o/
16:03:01 <mnaser> quick everyone take advantage and get your +2s in
16:03:05 <mnaser> >:)
16:03:14 <inc0> mnaser: >:(
16:03:16 <sdake> if only the gate were working :)
16:03:17 <mnaser> but congrats! :-P
16:03:26 <duonghq> thanks inc0, pbourke, egonzalez, Jeffrey4l_, akwasnie, sdake and all the other cores who gave me support, I'll do my best for our deliverables
16:03:32 <vhosakot> congrats duonghq!
16:03:44 <inc0> any community announcements?
16:03:45 <sdake> duonghq: would like to see ya as core on kolla-kubernetes at some point too :)
16:03:49 <inc0> we have a busy agenda today :)
16:03:53 <duonghq> sdake, sure
16:03:57 <sdake> yup i have one
16:04:11 <inc0> go on
16:04:15 <sdake> since the PTGs were pretty disruptive to our kolla-kubernetes schedule, I moved 0.6.0 to 4-15-2017
16:04:19 <sdake> (from 3-15-2017)
16:04:26 <sdake> enjoy :)
16:04:49 <inc0> ok moving on to agenda items
16:04:51 <inc0> #topic unwedging the gate (Jeffrey4l, mnaser)
16:05:00 <inc0> gentlemen, you have the floor
16:05:14 <mnaser> right so: gate was broken by bifrost (for the second time)
16:05:30 <inc0> drop bifrost from voting gates?
16:05:32 <mnaser> the first time we weren't pointing to the right python exec and they started installing system packages which broke things
16:05:37 <sdake> inc0 that was my suggestion
16:05:39 <Jeffrey4l_> i have no idea on this issue right now.
16:05:44 <mnaser> the second time, it seems to be related to ansible_env.SUDO_USER missing
16:06:06 <sdake> the only gating that happens is the image build
16:06:15 <Jeffrey4l_> i tried changing ansible_env.SUDO_USER to root, but another issue happens
16:06:16 <mnaser> i proposed and merged a patch which i thought would fix it (missing defaults): https://review.openstack.org/#/c/447713/
16:06:22 <sdake> we can *temporarily* unblock the gate by dropping bifrost from it
16:06:27 <Jeffrey4l_> 'dict object' has no attribute 'bootstrap'
16:06:35 <Jeffrey4l_> sdake, yes. we can
16:07:11 <mnaser> Jeffrey4l_ interesting, i didn't try to fix that... i did some digging on monday, but tuesday and today have been busy in $job so yeah.
16:07:26 <inc0> I think that's good discussion to have - having voting and non-voting build gates at same time
16:07:32 <Jeffrey4l_> i will try to find more later.
16:07:34 <inc0> voting with lower number of projects
16:08:05 <Jeffrey4l_> on master we depend on lots of other projects, which may break our jobs easily ;(
16:08:16 <mnaser> well during this whole thing i was thinking it would be really cool if we added checks for each project we build images for (and the job builds images for that specific project only)
16:08:42 <mnaser> if bifrost images were built on bifrost reviews, it could give them some indication of "hey you broke something". it could either be $project filing a bug saying "we changed this, please reflect your stuff accordingly"
16:08:50 <mnaser> or they can realize that they did break something somewhere
16:09:29 <inc0> mnaser: issue is, we're bound to what is in tarballs.o.o today
16:09:29 <mnaser> but again, going back to our issue
16:09:30 <duonghq> do we have an internal docker registry? I cannot recall, sorry. But mnaser's idea needs this
16:09:31 <Jeffrey4l_> mnaser, like devstack does ? that will be nice
16:09:31 <inc0> that's one
16:09:54 <mnaser> but going back to the main subject
16:10:16 <mnaser> i think putting bifrost aside to stop the project from halting for now..
16:10:19 <mnaser> i would be behind it
16:10:19 <Jeffrey4l_> inc0, based on the tarballs.o.o site, the project could build its new image (bifrost) + tarballs.o.o images.
16:10:21 <mnaser> the reason is there is another issue apparently with python-mysqldb which i saw on launchpad
16:10:51 <mnaser> and honestly, the bifrost images seem to have a lot of, uh, hacky things.
16:11:14 <inc0> yeah, thats why I'd be ok to drop it from voting
16:11:23 <mnaser> https://github.com/openstack/kolla/blob/master/docker/bifrost/bifrost-base/Dockerfile.j2#L29-L31
16:11:24 <Jeffrey4l_> beyond this, based on inc0's idea, we can set a priority on building images in the master branch. i.e. bifrost failed, that's OK and +1; nova failed, critical -1
16:11:28 <mnaser> as an example
16:11:29 <inc0> and add non-voting gates with more images
16:11:59 <mnaser> i like that idea, 2 jobs
16:12:01 <duonghq> Jeffrey4l_, you mean we will have mandatory and optional images inside our gate script?
16:12:01 <Jeffrey4l_> inc0, we can re-use the current jobs.
16:12:14 <inc0> Jeffrey4l_: yeah, and just change profile confs
16:12:16 <mnaser> one that builds core, one that builds $world
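(A minimal sketch of the core/$world split being discussed, assuming kolla's builder keeps its named build profiles and exits non-zero when a build fails; the profile names are illustrative, not the project's actual gate configuration:)

    # voting job: build only the core image set
    tools/build.py --profile gate
    # non-voting job: build the wider set; the two jobs duplicate only the base image build
    tools/build.py --profile default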
16:12:31 <mnaser> i agree with this
16:12:39 <Jeffrey4l_> inc0, if we add extra jobs, we have to add centos/ubuntu/oracle x binary/source = 6 jobs.
16:12:48 <inc0> yeah
16:12:56 <inc0> I don't think that's big issue
16:13:01 <mnaser> oraclelinux jobs take forever and it would be nice if we had mirrors :-(
16:13:14 <inc0> pbourke: ^ told ya:P
16:13:26 <Jeffrey4l_> ;/
16:13:41 <inc0> let's do this
16:13:45 <mnaser> also an interesting aspect is this might not be as bad because
16:13:46 <pbourke> they take ~10 mins more than centos from what I've seen
16:13:53 <inc0> 1. drop bifrost from voting, as it blocked our dev
16:14:09 <Jeffrey4l_> then how about push tarball registry?
16:14:16 <mnaser> if we build $core in one job, and $non-core in another job, we only duplicate the base image building, so we don't have that much "wasted" resources
16:14:17 <inc0> 2. create non-voting gates
16:14:24 <inc0> Jeffrey4l_: good question
16:14:40 <mnaser> yeah that's a tricky one
16:14:48 <Jeffrey4l_> and this should only happen in master.
16:14:51 <inc0> is there any way to "soft-fail" a job?
16:15:02 <Jeffrey4l_> we should keep all voting in stable branch.
16:15:21 <Jeffrey4l_> inc0, i guess no. mnaser do you have any idea on "soft fail"?
16:15:31 <mnaser> i dont think there is such thing
16:15:34 <inc0> like, we have some notion of fail, but won't block
16:15:35 <mnaser> it's a pass or fail, voting or non-voting
16:15:41 <inc0> yeah I guess
16:15:48 <Jeffrey4l_> inc0, we can define soft fail ;)
16:15:49 <mnaser> i think we handle this in our scripts
16:16:01 <inc0> ok, my suggestion (modified)
16:16:03 <Jeffrey4l_> just like what i said :)
16:16:08 <inc0> let's drop bifrost from voting asap
16:16:12 <inc0> and start ML thread
16:16:14 <duonghq> does soft-fail make it hard to track issues?
16:16:16 <mnaser> build images, if $core images fail, exit 1, if $core pass and $non-core fail, exit 0 but push what you got
16:16:17 <inc0> it needs more discussion
16:16:20 <Jeffrey4l_> great.
16:16:21 <mnaser> +1
16:16:40 <Jeffrey4l_> mnaser, yes. i prefer to do this.
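(A minimal sketch of the soft-fail behaviour mnaser describes above, assuming the same hypothetical profiles and a builder that exits non-zero when any requested image fails:)

    #!/bin/bash
    # hard fail: a broken core image blocks the gate
    tools/build.py --profile gate || exit 1
    # soft fail: non-core breakage is logged but tolerated; push whatever was built
    tools/build.py --profile default || echo "WARNING: non-core image(s) failed"
    exit 0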
16:16:43 <inc0> mnaser: can you publish patch with bifrost drop?
16:16:53 <mnaser> shooure
16:17:00 <inc0> many thanks good sir
16:17:23 <inc0> btw it's soft fail I was thinking of, what mnaser described
16:17:32 <Jeffrey4l_> mnaser, drop it and please file a bug to track this ;)
16:17:47 <mnaser> can i use the same one i had for the gate failures?
16:17:56 <inc0> only thing is we somehow need to make it visible somewhere when a job soft-failed
16:18:02 <inc0> mnaser: yeah
16:18:06 <Jeffrey4l_> ya. i think it is OK.
16:18:16 <mnaser> https://bugs.launchpad.net/kolla/+bug/1674483
16:18:16 <openstack> Launchpad bug 1674483 in kolla "Bifrost failing because of missing SUDO_USER" [Critical,Confirmed]
16:18:31 <inc0> ok let's move on
16:18:35 <Jeffrey4l_> inc0, ya. i'd like to support something like soft-fail.
16:18:42 <inc0> (sorry, busy agenda)
16:18:47 <inc0> #topic Native helm config maps (sbezverk)
16:18:51 <inc0> sbezverk: go ahead
16:21:09 <sbezverk> ok, configmaps, there was a proposal to use ansible
16:21:31 <inc0> #link https://review.openstack.org/#/c/448258/
16:21:44 <berendt> o/
16:21:46 <kfox1111> to clarify, I think the proposal was to temporarily use ansible as we are now, but move it closer to kolla-kubernetes.
16:21:48 <sbezverk> to generate configmaps, but imho it is not ideal.  I would still like to explore
16:21:59 <sbezverk> helm charts to generate them
16:22:05 <kfox1111> sbezverk: +1
16:22:26 <kfox1111> I think helm-based configmaps are a better place to go long term.
16:22:29 <inc0> sbezverk: issue with this is....helm charts are not ideal today and porting it all to helm will be a lot of work
16:22:37 <sbezverk> based on what I see, helm has native way to support this specific functionality
16:22:42 <kfox1111> short term, just moving the code from kolla-ansible -> kolla-kubernetes would reduce the breakage we see periodically.
16:22:43 <inc0> without things like parent values
16:22:59 <sbezverk> I do not see any reason not to use it, unless people know something that prevents it
16:23:07 <mnaser> i don't know a lot about the k8s stuff, but i would love to have a shared lib/repo rather than us maintaining the same thing in two places
16:23:07 <inc0> just a lot of work
16:23:28 <inc0> mnaser: that was idea behind using kolla-ansible to generate configs for k8s
16:23:34 <kfox1111> what I want to prevent is yet a 3rd required workflow.... ansible for generating some config, kollakube for some other resources, and helm for yet more.
16:23:40 <inc0> but it turns out that configs in fact differ
16:23:52 <sbezverk> well, if I had the right syntax, it would be a chart common to ALL services with only 6-8 lines of code
16:23:59 <inc0> kfox1111: I'd really like to get rid of kollakube tbh
16:24:08 <kfox1111> inc0: I fully agree.
16:24:19 <sbezverk> +1
16:24:23 <kfox1111> inc0: but I'm saying, it should go, before we add yet another kollakube.
16:24:28 <kfox1111> or at the same time.
16:24:45 <inc0> my plan with ansible is to repeal and replace kollakube
16:25:00 <inc0> and it's gonna be great
16:25:01 <sbezverk> it makes sense to have genconfig moved; we should not even try to re-implement it, at least short term
16:25:01 <kfox1111> 100% repeal. not 50%, I'm saying.
16:25:15 <inc0> kfox1111: yeah
16:25:20 <sbezverk> but for configmap using helm seems logical to me..
16:25:40 <kfox1111> sbezverk: I agree with you. but one of the stated goals is to allow microservices to be configured by any mechanism.
16:25:48 <kfox1111> I think we should provide a helm based config option.
16:25:53 <srwilkers> sbezverk, i agree 100%
16:26:09 <kfox1111> but if others want to permanently support an ansible based one, I wouldn't block it.
16:26:19 <inc0> my question tho - we need to explore how helm would deal with self-prepared configs
16:26:35 <kfox1111> inc0: not sure it needs to?
16:26:37 <inc0> I see 2 approaches here
16:26:47 <sbezverk> kfox1111: sure thing
16:26:53 <inc0> kfox1111: I'd say yes, there are companies that handcrafted their confs
16:26:54 <kfox1111> user uploads the configmaps, the microservices consume them.
16:27:09 <kfox1111> inc0: sorry, that was ambiguous.
16:27:20 <kfox1111> I meant, I'm not sure anything has to change today to support the use case. we already do.
16:27:39 <sbezverk> kfox1111 +2
16:27:40 <inc0> yeah, today we generate tons of configs and then just add them to k8s
16:27:53 <sbezverk> we just mount whatever matches the needed name
16:27:59 <inc0> that or just add your own
16:28:04 <sbezverk> does not matter what you used to generate them
16:28:26 <inc0> but if we move configmaps to charts, won't that be issue?
16:28:34 <inc0> I mean...is configmap a microservice?
16:28:46 <inc0> (I'd say that's a step too far tbh)
16:29:05 <sbezverk> inc0: we can bundle configmap charts
16:29:12 <sbezverk> into corresponding services
16:29:27 <sbezverk> for example providing complete deployable package
16:29:28 <inc0> but then won't helm re-create resources?
16:29:32 <duonghq> sbezverk, you mean configmap will be one of the chart templates?
16:29:41 <sbezverk> duonghq: yes
16:29:55 <sbezverk> inc0: as dependency
16:30:06 <kfox1111> inc0: no. the configmaps would be seperate charts. the user can launch them too, or not.
16:30:09 <inc0> if it's inside the chart, it won't be a dependency, right?
16:30:10 <sbezverk> inc0: I have not tried at least yet, but it should be doable
16:30:23 <kfox1111> like, maybe we create helm/configmaps/xxxxx
16:30:26 <duonghq> how do we deal with node-specific config? or will we just not support that?
16:30:39 <inc0> ehh
16:30:40 <sbezverk> kfox1111: exactly what I think too
16:30:46 <inc0> and then an init container doing sed :/
16:30:49 <kfox1111> duonghq: group specific config could be handled by instancing.
16:31:04 <inc0> I must say I don't like this. Chart-configmap is...overkill
16:31:04 <sbezverk> inc0: you cannot completely get rid of sed, ever!!
16:31:12 <kfox1111> helm install nova-compute-configmap --set element_name=foo
16:31:17 <kfox1111> helm install nova-compute --set element_name=foo
16:31:47 <inc0> sbezverk: because we need to put the interface in the configmap
16:31:51 <sbezverk> inc0: there are tons of parameters known only at run time
16:32:01 <inc0> and that's not something k8s supports very well
16:32:03 <kfox1111> inc0: thats again, the lowest level. you can wrap them up in grouping charts if you don't care about the details.
16:32:06 <sbezverk> inc0: there is no interface in kube
16:32:19 <sbezverk> inc0: only IP
16:32:20 <duonghq> how about a static pod providing that value?
16:32:21 <inc0> sbezverk: yeah, but ovs doesn't care about what is or not in kube
16:32:25 <kfox1111> inc0: we handle that usually with init containers. it already works.
16:32:28 <sbezverk> which are dynamically allocated
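(A rough sketch of the init-container pattern kfox1111 refers to: the configmap is mounted read-only, so an init container copies the file and seds in values only known at run time, e.g. the pod IP exposed through the k8s downward API. The paths and the @POD_IP@ placeholder are hypothetical:)

    # hypothetical init-container command
    cp /var/lib/kolla-k8s/config/neutron.conf /etc/neutron/neutron.conf
    # POD_IP comes from the downward API (status.podIP); @POD_IP@ is our own placeholder
    sed -i "s/@POD_IP@/${POD_IP}/g" /etc/neutron/neutron.conf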
16:33:08 <inc0> ok, so back to the topic at hand
16:33:10 <sbezverk> so run time config modification is inevitable
16:33:17 <inc0> how do we do configmaps in helm? I dunno yet
16:33:33 <inc0> sbezverk: but sed isn't great tool to do it imho
16:33:36 <kfox1111> my vote is /helm/configmaps/xxx charts.
16:33:47 <sbezverk> inc0: the idea is to use common config chart
16:33:47 <kfox1111> we can provide templates for the most common things we need to configure.
16:34:09 <sbezverk> which will pull inside of a configmap all config files
16:34:16 <sbezverk> generated by kolla genconfig
16:34:17 <kfox1111> users can override settings via the cloud.yaml settings, or
16:34:21 <kfox1111> upload their own configmaps.
16:34:29 <inc0> one chart to have just configs? I don't like that tbh
16:34:40 <duonghq> I do not like putting everything in cloud.yaml, it'll be a very huge file
16:34:46 <kfox1111> sbezverk: you're thinking one chart for all configmaps?
16:34:57 <sbezverk> kfox1111: no
16:35:06 <kfox1111> ok.
16:35:14 <sbezverk> one chart per /etc/kolla/neutron-server
16:35:16 <inc0> I think configmaps should be part of microservice chart
16:35:19 <kfox1111> sbezverk: +1
16:35:23 <sbezverk> one chart per /etc/kolla/neutron-openvswitch-agent
16:35:27 <sbezverk> etc
16:35:29 <spsurya_> duonghq:  +1
16:35:33 <kfox1111> inc0: what does that buy you?
16:35:56 <inc0> better correlation between microservice and config it uses
16:36:08 <kfox1111> inc0: that assumes you will be using that configmap.
16:36:12 <inc0> we will effectively double the number of microservices, which is significant today
16:36:22 <kfox1111> inc0: so?
16:36:24 <inc0> most people will use the base configmap
16:36:41 <kfox1111> computekit-configmap
16:36:51 <sbezverk> inc0: most people will use service or compute kit
16:36:55 <kfox1111> can aggregate them all up so those that don't want to think about them don't have to.
16:36:56 <sbezverk> and we can bundle configmaps there as dependencies
16:37:07 <kfox1111> right.
16:37:08 <kfox1111> yeah
16:37:17 <duonghq> sbezverk, +1
16:37:40 <kfox1111> service/neutron can bundle the configmaps with helm conditionals.
16:37:43 <duonghq> but, will we have configmaps at the microservice level or also at the service level?
16:37:51 <inc0> I know we can do all that
16:37:52 <kfox1111> embeded_configmaps=true
16:38:20 <sbezverk> duonghq: the idea is to have configmap at the same level
16:38:24 <sbezverk> as microservices
16:38:38 <sbezverk> and then bundle them on a needed basis
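(Putting the two consumption modes side by side, as a sketch only; the chart names and the embedded_configmaps value mirror the examples in the discussion and are hypothetical:)

    # fine-grained: install the configmap chart and its microservice separately
    helm install kolla/neutron-server-configmap --set element_name=foo
    helm install kolla/neutron-server --set element_name=foo
    # bundled: a service-level chart pulls the configmap charts in as dependencies
    helm install kolla/neutron --set embedded_configmaps=true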
16:38:40 <sdake> hey sorry was otp :)
16:38:49 <duonghq> sbezverk, wfm
16:38:58 <inc0> ehh...idk, I think chart per every resource is a bit of unneeded churn
16:39:01 <inc0> maybe not a bit
16:39:24 <inc0> microservice, fine, but configmap? seems ugly to me
16:39:46 <inc0> but that is just my personal opinion I didn't think through fully
16:39:52 <kfox1111> inc0: folks like tripleo are now talking with us, because we're staying neutral to things like config generation.
16:39:59 <sbezverk> I would suggest to try both approaches
16:40:09 <inc0> both - no
16:40:10 <sbezverk> to gain better undersatnding of pros and cons
16:40:16 <inc0> ahh try
16:40:19 <inc0> sorry misread you
16:40:23 <kfox1111> following the approach lets us continue to be unbiased on config management technologies.
16:40:31 <sdake> inc0 why no, you said you wouldn't be married to any ansible implementation
16:40:34 <sbezverk> just try, cause what I just said was theory ;)
16:40:38 <inc0> yeah, third approach is to assume existing configmaps and provide tool to create them
16:40:52 <inc0> sdake: we're not talking about ansible
16:41:03 <sdake> ok
16:41:04 <inc0> which is what we do today
16:41:11 <inc0> just tools are ugly
16:41:17 <sdake> what are the 2 approaches
16:41:20 <kfox1111> yeah.
16:41:24 <sdake> sorry - i could scrollback but it takes a while
16:41:31 <sdake> 2 one liners will do
16:41:49 <inc0> configmap as microservice on its own or part of existing microservice
16:41:57 <mnaser> (sorry guys but there are 4 more items and something id like to bring up in the open discussion and $time -- just a hint :x)
16:42:04 <duonghq> I'm concerned about something like config merge and override as in kolla-ansible, not sure if we need it and how we do it efficiently
16:42:04 <sdake> i think we want configmap as a separate helm chart, right?
16:42:22 <kfox1111> yeah. separate.
16:42:32 <inc0> we don't know what we want yet
16:42:37 <portdirect> that seems a bit overkill?
16:42:38 <sdake> inc0 i think we do ;)
16:42:39 <inc0> I'm not too hot on separate chart
16:42:40 <sbezverk> that is what we suggest to try
16:42:52 <lazyPower> i say straw man it, and let the design shake itself out based on how many times you bump into the rough edges
16:43:00 <sdake> lazyPower ++
16:43:01 <lazyPower> seems like conjecture otherwise
16:43:31 <inc0> yeah, and in the meantime I'll just move ansible generation as we already have code
16:43:45 <sdake> inc0 wfm - we need to eject kolla-ansible as a dep soon - it's annoying :)
16:43:46 <lazyPower> inc0: we aren't talking about ansible ;)
16:43:50 <kfox1111> 99% of the work is getting the configmaps into gotpl.
16:43:58 <kfox1111> 1% is where in the fs they land.
16:44:07 <inc0> yeah
16:44:13 <inc0> I'm afraid of gotpl :(
16:44:22 <kfox1111> inc0: hehe. its... interesting.
16:44:26 <sdake> one path forward is to focus on what goes in the gotpls
16:44:28 <lazyPower> it's very very similar to liquid templates
16:44:31 <sdake> and then sort out where to put them through iteration
16:44:44 <Jeffrey4l_> ( time, guys )
16:44:45 <sbezverk> kfox1111: atm we do not even need to do ANY templating
16:44:47 <lazyPower> not exactly like it, but very close. so if you have some chops in jinja/liquid you should be on the right path.
16:44:51 <kfox1111> been a while since I used a polish notation language before gotpl.
16:45:04 <sbezverk> we can take config files as is and dump them into corresponding configmap
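(Concretely, "dump them into corresponding configmaps" needs no templating at all; a minimal sketch, assuming genconfig has already written per-service directories under /etc/kolla:)

    # one configmap per genconfig output directory, one key per config file
    for svc in /etc/kolla/*/; do
        kubectl create configmap "$(basename "$svc")" --from-file="$svc"
    done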
16:45:13 <kfox1111> sbezverk: we will need to do some, I think. iscsi vs ceph, etc.
16:45:13 <sdake> ok peeps - lots of peeps complaining we should have timeboxed this session, can we move on :)
16:45:22 <kfox1111> sbezverk: mostly minor things though.
16:45:33 <inc0> #topic Deployment guide
16:45:49 <sbezverk> kfox1111: but now we do not do it at all, so the first step would be to match what we do now
16:45:50 <sdake> deployment guide has lots o -1s
16:45:57 <sdake> thanks for the reviews folks
16:46:06 <sdake> I'm going to give it a spa treatment _today_
16:46:12 <kfox1111> sbezverk: we don't do it because genconfig does it. but if genconfig is getting replaced by helm, then we need to do it.
16:46:17 <sdake> would appreciate people that are interested adding themselves as reviewers
16:46:25 <rwellum> I'm not sure about copying the etherpad directly into a review like that?
16:46:29 <sdake> to follow progress
16:46:37 <rwellum> I found it a little confusing..
16:46:46 <rwellum> Because there are embedded comments.
16:46:47 <sdake> rwellum it wont be after i update it
16:46:59 <rwellum> sure
16:46:59 <sdake> rwellum i agree its confusing now
16:47:09 <sdake> update is in progress on my disk now
16:47:20 <sdake> there are 40 comments in gerrit - so it takes a while to get through
16:48:03 <sdake> https://review.openstack.org/#/c/447731/ please add yourself as a reviewer if you want to follow the work or have contributions to make
16:48:06 <sdake> thanks inc0
16:48:18 <inc0> ok, so moving on?:)
16:48:29 <sdake> yup
16:48:37 <sbezverk> sdake: practice showed that the most efficient approach is to get together and go step by step; people can follow in their environment and share comments in real time.
16:48:38 <inc0> #topic KS rolling upgrade review
16:48:42 <spsurya_> inc0: sdake: this patch needs reviews from cores: https://review.openstack.org/#/c/425446/ (it depends on the mentioned kolla patch)
16:48:44 <inc0> spsurya_: floor is yours
16:48:54 <sdake> sbezverk wfm - we can do that after i get the doc cleaned up
16:49:04 <sdake> sbezverk so we have some sanitary starting point
16:49:06 <spsurya_> inc0: need core review
16:49:19 <spsurya_> https://review.openstack.org/#/c/425446/
16:49:42 <spsurya_> that is all from my side for today
16:50:04 <inc0> ok:)
16:50:08 <duonghq> should we rebase and try to get all gates green before moving on with this ps?
16:50:18 <duonghq> only ubuntu-binary failed
16:50:22 <inc0> ok, I'll bump post-ptg bps to next week
16:50:33 <sdake> duonghq you can just recheck - no need to rebase
16:50:40 <spsurya_> duonghq: seems like that
16:50:55 <inc0> #topic open discussion
16:50:58 <sdake> duonghq recheck automatically does a rebase - tested and confirmed by many people
16:50:59 <mnaser> https://review.openstack.org/#/c/447524/
16:51:04 <mnaser> dockerhub publishers in infra
16:51:15 <mnaser> i got some feedback from infra team, i wanted to ask some folks at kolla to give some comments/feedback
16:51:15 <sdake> mnaser ++ thanks for kicking that off :)
16:51:21 <lazyPower> inc0: open discussion - is this where i can bring up a topic from last week that got postponed?
16:51:24 <duonghq> sdake, I'll try, last time I did a recheck, it didn't rebase, I'm not sure
16:51:37 <mnaser> so i can make another revision to the review to throw towards infra folks
16:51:39 <Jeffrey4l_> i will release the next z stream for the newton and mitaka branches. any advice or ideas?
16:51:39 <sdake> duonghq i'll talk to you after the meeting on this topic, ok?
16:51:45 <inc0> lazyPower: yeah
16:51:48 <mnaser> (thats all i had to say, just wanted to put peoples eyes on it)
16:51:54 <duonghq> sdake, wfm, thanks
16:52:08 <Jeffrey4l_> inc0, ^
16:52:11 <sdake> could folks in kolla interested in the dockerhub pushing add yourself to that review mnaser linked?
16:52:12 <sdake> TIA :)
16:52:25 <inc0> Jeffrey4l_: hold on a sec
16:52:43 <inc0> kfox1111: lazyPower have different way to deploy k8s on ubuntu right?:)
16:52:45 <lazyPower> I'd like to re-propose the topic of adding the Canonical Distribution of Kubernetes to the Kolla CI system. We'll be happy to run it, do the integration path, and submit results so you get warm fuzzies knowing kolla will always work on top of the CDK, which is the officially supported method of deploying k8s on ubuntu.
16:53:02 <lazyPower> yeah, sorry was typing out that short novel ^
16:53:13 <sdake> lazyPower are you the fella we met at the PTG from canonical?
16:53:16 <kfox1111> inc0: ubuntu has a product now for k8s.
16:53:18 <lazyPower> sdake: correct
16:53:28 <sdake> lazyPower nice to see ya online and in the community :)
16:53:42 <lazyPower> sdake: well one of about 3 engineers that were in/out of the room anyway :) but yeah i was the primary driver of the talks for CI integration
16:53:51 <kfox1111> lazyPower: yeah, that would be cool.
16:53:54 <sdake> lazyPower we have typically stayed away from third party gating, partly because people misunderstand it and expect that at some point it becomes voting
16:53:54 <lazyPower> Thanks sdake, glad to be here
16:53:55 <Jeffrey4l_> mancdaz, will review the specs. thanks for the work.
16:53:56 <inc0> Jeffrey4l_: we should release z streams for ocata at the same time
16:54:10 <sdake> lazyPower so as long as you understand (infra) wont allow it to be voting, i think that is a great idea
16:54:48 <inc0> lazyPower: what we need is k8s and ceph deployed, can you provide this part?
16:54:59 <Jeffrey4l_> yes. but i am thinking of leaving another week for the ocata branch.
16:55:07 <lazyPower> sdake: this is contrary to the information I was given, which is not entirely a deal breaker but this means i have to take this new info back to management so they can weigh in. but i don't think this will be a huge issue
16:55:17 <Jeffrey4l_> will release 4.0.1 next week.
16:55:21 <lazyPower> inc0: we can, we already deploy k8s + have relations to ceph for RBD based PV enlistment
16:55:25 <sdake> lazyPower openstack infrastructure just won't allow third party gates to vote
16:55:33 <sdake> lazyPower its not something this team has any control over
16:55:50 <lazyPower> sdake: that's understandable. I came back from the PTG with the understanding that once integrated we could be a voting gate
16:55:54 <sdake> lazyPower by vote I mean block a review
16:55:55 <inc0> lazyPower: alternatively to 3rd party non voting we can just write it in infra
16:55:59 <lazyPower> which means i got bad info, and i'm going to have to amend that
16:56:00 <inc0> as regular infra gates
16:56:17 <sdake> lazyPower join me in openstack-infra after, and we can confirm with the team
16:56:21 <lazyPower> sure thing
16:56:23 <inc0> I understand it's all opensource
16:56:29 <sdake> lazyPower right - not sure who you got the bad info from, this is why we have avoided third party gates in the past :)
16:56:43 <lazyPower> Thats basically all I had, was to get the ball rolling here, and see what follow up steps we needed to take to even get started.
16:57:03 <sdake> lazyPower followup step is communicating with openstack-infra about how to do third party gating
16:57:20 <sdake> lazyPower assuming you're fine with a non-voting gate
16:57:26 <kfox1111> lazyPower: thanks for working on it. more testing on things users really care about is always a good thing. :)
16:57:56 <sdake> lazyPower happy to facilitate the conversation if anyone is around in openstack-infra in about 1 hr :)
16:58:09 <sdake> lazyPower or any of the cores can do it now if they are available
16:58:15 <inc0> I'll join you in a second there
16:58:27 <sdake> inc0 can handle it then, i've got a meeting in 2 minutes :)
16:58:30 <inc0> anyway, we're reaching the end times
16:58:42 <inc0> ok, in 1 hr then
16:58:43 <sdake> END OF DAYS -- DADA
16:58:57 <sdake> inc0 infra knows you as well as me, you can handle it :)
16:58:59 <inc0> thank you all for coming
16:59:02 <inc0> kk
16:59:03 <sdake> thanks everyone
16:59:08 <inc0> #endmeeting kolla