16:00:09 <inc0> #startmeeting kolla
16:00:12 <openstack> Meeting started Wed Dec 13 16:00:09 2017 UTC and is due to finish in 60 minutes.  The chair is inc0. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:13 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:15 <inc0> #topic w00t!
16:00:16 <duonghq> o/
16:00:16 <openstack> The meeting name has been set to 'kolla'
16:00:18 <inc0> o/
16:00:18 <hrw> o/
16:00:25 <chason> o/
16:00:49 <Jeffrey4l> o/
16:01:12 <egonzalez> Woot  o/
16:01:51 <coolsvap> o/
16:01:52 <rwellum> o/
16:02:21 <cleong> o/
16:03:56 <inc0> ok let's start
16:04:02 <inc0> #topic announcements
16:04:15 <inc0> we released 5.0.2 and 4.0.3:)
16:04:42 <Jeffrey4l> hrm, this is still WIP. not done ;(
16:05:20 <hrw> inc0: http://git.openstack.org/cgit/openstack/kolla/refs/tags does not show those tags. nor 5.0.1 one
16:05:21 <inc0> oh
16:05:24 <inc0> ?
16:05:54 <Jeffrey4l> hrw no tagged images right now, and the other images are pushed manually.
16:05:55 <rwellum> The tag is 'pike' I think
16:06:34 <Jeffrey4l> other tag images are pushed manually.
16:06:42 <inc0> well, git tags are pushed manually
16:06:46 <hrw> ok
16:06:52 <inc0> ehh, sorry, my brain no worky
16:07:31 <Jeffrey4l> and it seems the ceph luminous issue is not solved in pike ubuntu.
16:07:32 <hrw> just being picky - if something gets released as x.y.z then there has to be an x.y.z tag so we can be sure which commit the release was
16:07:33 <inc0> git tags aren't there because the release team posted a few questions on our release change
16:07:38 <inc0> yeah
16:07:58 <inc0> that too
16:08:56 <inc0> ok let's move to meeting
16:08:59 <hrw> community announcements?
16:09:03 <inc0> aaaa
16:09:06 <inc0> right
16:09:13 <hrw> bbiab
16:09:15 <inc0> community announcements?:) thanks hrw
16:09:51 * hrw in car service
16:09:56 <inc0> guess not
16:10:05 <hrw> Linaro started building queens
16:10:20 <inc0> nice
16:10:22 <hrw> aarch64 only, debian/source only
16:10:27 <inc0> which is master right?
16:10:30 <hrw> all pushed to hub.docker.com/linaro
16:10:33 <hrw> yes. master
16:10:34 <inc0> cool
16:10:41 <hrw> we use queens-YYYYMMDD as tag
16:10:51 <Jeffrey4l> should we move these images to the hub.docker.com/kolla namespace?
16:11:03 <inc0> at some point, now they're not testable
16:11:13 <hrw> Jeffrey4l: once we get to usable point
16:11:14 <inc0> I'm not sure if they're buildable in gates
16:11:28 <hrw> build was yesterday and we just started tests
16:11:29 <inc0> gates are all amd64
16:11:44 <inc0> ...until we get arm nodepool
16:11:50 * inc0 looks at hrw and gema intently
16:11:51 <Jeffrey4l> cool.
16:12:01 <hrw> https://review.openstack.org/#/c/527726/ is needed to get rabbitmq buildable
16:12:18 <hrw> https://review.openstack.org/#/c/527712/ is rabbitmq itself
16:12:18 <inc0> let's talk about it in regular agenda item
16:12:22 <hrw> ok
16:12:30 <inc0> #topic remove tarballs.o.o images with a migrate note.
16:12:33 <hrw> give me 5 minutes for service talk
16:12:45 <inc0> Jeffrey4l: you added this?
16:12:51 <Jeffrey4l> yes.
16:13:13 <inc0> yeah I agree with this, we should remove the tarball images, although they were never meant to be used
16:13:19 <Jeffrey4l> the images on tarballs.o.o haven't been updated since Oct.
16:13:27 <inc0> so I guess we can just delete it
16:13:33 <Jeffrey4l> yes.
16:14:04 <Jeffrey4l> we need to ask the infra team for help, and it'd be better to leave an index.html file telling users
16:14:24 <Jeffrey4l> they can download the images from hub.docker.com now.
16:14:47 <inc0> yeah agree
16:15:06 <inc0> allright any other comments?:)
16:15:08 <Jeffrey4l> cool. that's all about this topic.
16:15:25 <inc0> ok cool
16:15:32 <inc0> #topic open discussion
16:15:40 <inc0> soo Linaro images
16:15:49 <inc0> until we get gating on them, we can't build them
16:16:10 <hrw> re
16:16:16 <inc0> currently we don't have nodepool with arm, so we need to wait for it to happen first
16:16:24 <Jeffrey4l> is there any roadmap for openstack infra to support arm64? or should we use third-party ci?
16:16:41 <inc0> someone has to donate arm nodepool
16:16:47 <inc0> that's all there is to it
16:17:14 <inc0> well that and maybe we'll need some way to schedule arm jobs on arm nodepools, not sure if zuul can do it today
16:17:27 <hrw> last time I heard we are blocked by size of our lab when it comes to new machines
16:17:46 <inc0> how about your public cloud?
16:17:46 <hrw> I expect situation to sort out 'soon'
16:18:14 <hrw> would have to ask where we stand there. it was at ~90% of capacity last time
16:18:28 <inc0> just so you know, infra only needs a tenant
16:18:47 <hrw> whatever
16:18:55 <inc0> but yeah
16:19:02 <inc0> after that's done
16:19:07 <inc0> we can easily add arm jobs
16:19:13 <hrw> I recently handled 2 build slaves for bigtop jenkins. but that was from bigdata team resources
16:19:13 <inc0> and arm publisher if need be
16:20:13 <hrw> I am aware
16:20:36 <inc0> sooo anything else on that topic?:)
16:21:18 <hrw> imho not
16:21:27 <inc0> ok
16:21:36 <Jeffrey4l> what about the next pike release?
16:21:39 <inc0> any other topics for open discussion?:)
16:21:51 <inc0> what do you mean Jeffrey4l
16:21:56 <inc0> ?
16:22:11 <Jeffrey4l> how about releasing the next z stream now and solving the ceph luminous issue later?
16:22:16 <Jeffrey4l> guess it can not be solved soon.
16:22:34 <inc0> well not sure
16:22:37 <hrw> where are we with ceph at all?
16:22:49 <inc0> gates aren't there yet
16:22:49 <Jeffrey4l> the next z stream is 5.0.1, which has been delayed for a long time.
16:23:06 <inc0> but before zuulv3 they were up
16:23:15 <inc0> not sure what's wrong, still debugging
16:23:24 <Jeffrey4l> hrw, ubuntu UCA upgraded to ceph luminous at the end of the pike cycle.
16:23:33 <Jeffrey4l> and we had no time to implement this in pike.
16:23:44 <rwellum> I have something inc0 when we have a chance.
16:23:47 <Jeffrey4l> so we pin to ceph Jewel.
16:24:16 <Jeffrey4l> however, on ubuntu it seems hard to pin the ceph install version. ;(
16:24:45 <hrw> as long as you are fine with qemu 2.5 it is not that hard
16:24:53 <inc0> well
16:24:59 <inc0> there were issues with that
16:25:16 <inc0> and for some reason uca also pinned qemu 2.6 to ceph l
16:25:19 <hrw> I had a patch in review which moved the ubuntu images to jewel
16:25:34 <hrw> inc0: 2.5 is in xenial repos
16:25:48 <inc0> 2.6 is in UCA
16:25:58 <Jeffrey4l> #link https://review.openstack.org/527339
16:26:03 <Jeffrey4l> hrw check this ^^
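(For reference, the pin being discussed is an apt-preferences pin; a minimal sketch, assuming a hand-written preferences file - the file name and package globs are illustrative and not the content of the reviews linked above:)

```shell
# Sketch only: hold Ubuntu on the ceph Jewel series (10.2.x) so the
# Luminous (12.2.x) packages from UCA are not pulled in.
sudo tee /etc/apt/preferences.d/ceph-jewel <<'EOF'
Package: ceph* librados* librbd* libcephfs*
Pin: version 10.2.*
Pin-Priority: 1001
EOF
sudo apt-get update
```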
16:26:47 <inc0> egonzalez: you up?
16:26:56 <egonzalez> Yep
16:27:00 <hrw> https://review.openstack.org/#/c/505786/3 was my last attempt
16:27:09 <hrw> on same stuff
16:27:23 <inc0> problem I'm having is ubuntu-binary is often broken
16:27:30 <inc0> now ubuntu-source is broken as well
16:27:43 <inc0> I mean now it works, but after we revert it'll break
16:28:18 <hrw> for the R release we should move to using ceph-ansible, and then ceph is no longer our problem?
16:28:39 <inc0> we agreed to release L
16:28:42 <inc0> ceph L
16:28:45 <Jeffrey4l> hrw, if you use external-ceph, you can use ceph-ansible to deploy the ceph.
16:28:58 <inc0> that would mean we can deprecate it by the end of Queens
16:29:13 <hrw> inc0: I mean ceph L in queens, ceph-ansible in openstack-R-release
16:29:14 <inc0> and remove it in S
16:29:15 <Jeffrey4l> inc0, deprecate what?
16:29:42 <inc0> R will still have ceph because of deprecation policy
16:29:53 <inc0> Jeffrey4l: on ptg we discussed dropping ceph from kolla-ansible
16:30:09 <Jeffrey4l> hrm :/
16:30:26 <inc0> we decided we will discuss it again
16:30:37 <hrw> ptg. have to check whether there are new details and ping my boss
16:30:45 <inc0> reason is, ceph-ansible is better at deploying ceph than we are
16:31:11 <Jeffrey4l> ok. i will check ceph-ansible later.
16:31:27 <Jeffrey4l> never tried that project
16:31:31 <inc0> basically focus on external ceph rather than deployed by us
16:31:41 <inc0> you also have ceph-deploy which works well
16:32:26 <Jeffrey4l> yes. should improve external ceph support
16:32:42 <inc0> so idea was
16:32:50 <inc0> we deprecate Ceph in k-ans
16:33:01 <inc0> provide migration path to ceph-ansible
16:33:28 <hrw> have to go. thanks for discussion guys
16:33:34 <inc0> cya hrw
16:33:52 <Jeffrey4l> hrw, bye
16:34:29 <inc0> anyway, discussion for later, we need L in Queens
16:35:01 <Jeffrey4l> the ceph luminous patch is merged already.
16:35:10 <inc0> oh is it?
16:35:12 <Jeffrey4l> and the other related patch is under review.
16:35:18 <Jeffrey4l> just the image part.
16:35:19 <inc0> mind linking please?
16:35:22 <inc0> ah
16:35:25 <Jeffrey4l> kolla-ansible is not merged
16:35:42 <serlex> https://review.openstack.org/#/q/topic:bp/ceph-luminous+(status:open+OR+status:merged)
16:35:50 <Jeffrey4l> serlex, thanks
16:36:05 <inc0> all merged
16:36:21 <Jeffrey4l> upgrade is not tested. but it should work.
16:36:27 <Jeffrey4l> will test this later.
16:36:41 <inc0> nice
16:36:43 <inc0> soo
16:36:46 <egonzalez> Cool
16:37:00 <inc0> are we done?:)
16:37:03 <duonghq> o/
16:37:12 <inc0> hey duonghq got anything?
16:37:15 <rwellum> o/
16:37:18 <inc0> ahh
16:37:20 <inc0> Rich first
16:37:23 <inc0> sorry man
16:37:28 * inc0 is not fully awake
16:37:33 <inc0> rwellum: shoot
16:37:52 <rwellum> Can we somehow enforce that when bugs are fixed in services in kolla-ansible, the bugs are at least reported against kolla-k8s? Because I spend a lot of time fixing issues that were fixed a long time ago in kolla-ansible.
16:38:19 <rwellum> This has hit me in Pike and Master. I don't expect ports, but it would be great to know what's been fixed.
16:38:22 <inc0> yeah that was my worry when we moved genconfig to k-k8s
16:38:27 <Jeffrey4l> rwellum, at this point, we need cross jobs to connect kolla and kolla-kubernetes.
16:39:21 <Jeffrey4l> i guess the issue is not exposed because kolla-k8s is still using the old images? or is ci broken?
16:39:23 <inc0> that was also the argument - gates would fail
16:39:29 <inc0> I guess somehow they didn't?
16:39:50 <rwellum> Some of it is new features, and fixes for those features. Some of it is non-voting gates
16:40:33 <Jeffrey4l> normally it is hard to solve this kind of issue by relying on people; it's better solved through code/ci.
16:42:00 <rwellum> Agreed. Just raising awareness. It's been a couple of releases since genconfig was moved and I see a wide disparity between the 2 projects.
16:42:30 <inc0> that might be our cue to move configs to helm tbh
16:42:42 <rwellum> I never really understand why I see more issues than the gates do; my assumption is the gates have work-arounds I don't see.
16:42:47 <rwellum> +1 inc0
16:43:25 <rwellum> That's it from me. I had a question about the tagging from above - but I can ask that outside of meeting. ty.
16:43:44 <inc0> duonghq: you have the floor
16:43:49 <duonghq> thanks inc0
16:44:01 <duonghq> I just tried to deploy cinder with the NFS backend,
16:44:36 <duonghq> and I got a permission issue (it seems our cinder requires the dir to be owned by the cinder user, but currently it's owned by nobody)
16:44:50 <duonghq> I'm not sure whether it's only my issue or a Kolla one,
16:44:54 <duonghq> I followed our guide
16:45:10 <egonzalez> There is a change in the review queue about something similar
16:45:42 <egonzalez> https://review.openstack.org/#/c/514639/
16:46:47 <duonghq> I'll try it, but I think it doesn't solve my issue (I worked around it by setting the dir mode to 777)
16:46:49 <Jeffrey4l> seems the same
16:47:35 <Jeffrey4l> duonghq, if not, could you file a bug with a detailed description?
16:47:46 <duonghq> Jeffrey4l, sure
16:47:47 <duonghq> thank you
16:48:05 <Jeffrey4l> guess it can not be solved in the meeting ;)
16:48:36 <duonghq> ya, I'll try your suggestion
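(A sketch of the ownership-based alternative to chmod 777 mentioned here; the container name and the step of reading the UID/GID from the image are assumptions to verify against your deployment:)

```shell
# Find the uid/gid of the cinder user inside the kolla image, then give
# the NFS-backed directory to that user instead of opening it up to 777.
docker exec cinder_volume id cinder
# then, using the uid/gid printed above (placeholders here):
sudo chown -R <cinder-uid>:<cinder-gid> /path/to/nfs/export
```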
16:49:09 <Jeffrey4l> ok.
16:49:15 <Jeffrey4l> anything else? duonghq
16:49:15 <inc0> ok, duonghq anything else in this topic?
16:49:26 <duonghq> no, I'm done
16:49:30 <Jeffrey4l> i want to talk about docker image squashing.
16:49:36 <inc0> go ahead
16:49:46 <Jeffrey4l> this was discussed before at the ptg or summit.
16:49:51 <inc0> yeah
16:49:56 <inc0> this keeps coming back
16:50:01 <mrhillsman> i have something
16:50:01 <inc0> a way to shrink images
16:50:05 <mrhillsman> octavia :)
16:50:08 <mrhillsman> lbaasv2
16:50:08 <Jeffrey4l> someone complained that kolla images have too many layers.
16:50:18 <inc0> mrhillsman: after Jeffrey4l if we have some time
16:50:22 <mrhillsman> does it work out of the box - just changing no to yes?
16:50:23 <mrhillsman> ok
16:50:24 <Jeffrey4l> here is an approach to solve the issue https://review.openstack.org/525871
16:50:25 <inc0> we can talk on regular channel too
16:50:31 <mrhillsman> ;)
16:50:47 <Jeffrey4l> but it will consume a lot of extra IO.
16:50:49 <inc0> the issue with that approach Jeffrey4l is - look at the image build time
16:50:51 <inc0> right
16:51:09 <inc0> so it's really a tradeoff
16:51:09 <Jeffrey4l> it increased the build time.
16:51:17 <Jeffrey4l> http://logs.openstack.org/71/525871/10/check/kolla-build-ubuntu-source/bbbde53/logs/docker-info.txt.gz
16:51:24 <Jeffrey4l> you can see the layers from here.
16:51:35 <inc0> also squash afair was experimental in 1.13
16:51:50 <inc0> and squash alone won't shrink images
16:51:56 <inc0> we'd also need to add cleanups
16:52:03 <Jeffrey4l> inc0, no. docker squash is still an experimental feature, and it doesn't work as expected.
16:52:11 <inc0> which will increase image size without squash
16:52:18 <Jeffrey4l> inc0, no extra cleanup in fact.
16:52:25 <inc0> oh?
16:52:48 <Jeffrey4l> it is using the docker-squash tool, which will overwrite the image and delete the old image after squashing.
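(The docker-squash tool referenced above can also be run by hand; a minimal sketch, assuming it is installed from PyPI - the image name is just an example:)

```shell
pip install docker-squash
# Squash all layers of an existing image into a new tag; once verified,
# the original image can be removed.
docker-squash -t kolla/ubuntu-source-base:squashed kolla/ubuntu-source-base:master
```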
16:52:48 <inc0> well, I thought biggest benefit of squash is that we can remove files and such
16:52:56 <inc0> ah
16:53:00 <inc0> not that kind of cleanup
16:53:25 <inc0> what I'm saying is for squash to really work we'd need to rm all the files irrelevant to end image
16:53:30 <inc0> like apt-cache and such
16:53:35 <Jeffrey4l> i get what you mean ;)
16:53:37 <Jeffrey4l> yes.
16:53:40 <inc0> and squash afterwards
16:53:54 <Jeffrey4l> we can remove them and do the cleanup in the leaf images.
16:54:08 <Jeffrey4l> mrhillsman, yes, it works but needs some extra configuration.
16:54:11 <inc0> well base images still going to be big
16:54:35 <Jeffrey4l> oh, it seems the cleanup is not moved to the end of the image.
16:54:48 <inc0> well, end of every image
16:54:50 <Jeffrey4l> oh yes. should move the cleanup to the *
16:54:51 <inc0> would work
16:55:01 <inc0> but that's even longer
16:55:12 <Jeffrey4l> yes. an extra jinja macro should work.
16:55:19 <inc0> right
16:55:27 <inc0> let's keep experimenting
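(A minimal sketch of the kind of jinja cleanup macro being suggested; the macro name is hypothetical and this is not an existing kolla template:)

```jinja
{# Hypothetical macro: drop package caches in the leaf image so a
   following squash actually removes those files from the final layer. #}
{% macro cleanup_for_squash() %}
RUN apt-get clean && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
{% endmacro %}

{# a leaf Dockerfile template would then end with: #}
{{ cleanup_for_squash() }}
```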
16:55:39 <Jeffrey4l> anyway, this is another topic.
16:55:53 <inc0> yeah
16:56:02 <inc0> ok, we have 4 min left
16:56:04 <krtaylor> I have a topic since Jeffrey4l and inc0 are here - the build config override bug
16:56:04 <inc0> mrhillsman: shoot
16:56:14 <Jeffrey4l> but as a first step, we can implement this to solve the too-many-layers issue, in case anybody wants to use it.
16:56:16 <inc0> krtaylor: on kolla channel in a second:)
16:56:29 <krtaylor> sounds good
16:56:32 <mrhillsman> well, i switched lbaasv2 and octavia to yes
16:56:50 <mrhillsman> but the neutron_server container just keeps restarting
16:56:51 <inc0> if it is in globals it *should* work ootb
16:57:00 <inc0> but we don't have gates for octavia
16:57:10 <mrhillsman> saying it cannot find the plugins
16:57:13 <inc0> so probably a bug? let's debug in regular channel
16:57:14 <Jeffrey4l> mrhillsman, i can give you some doc about this.
16:57:17 <mrhillsman> even just lbaasv2 without octavia
16:57:29 <mrhillsman> same, says it cannot find what it needs (plugin)
16:57:33 <Jeffrey4l> mrhillsman, restarting - that sounds like a bug.
16:57:38 <mrhillsman> just wanted to check before i started digging into it more
16:57:50 <mrhillsman> Jeffrey4l that doc would be great
16:58:00 <Jeffrey4l> mrhillsman, it should work. we tested it, at least for the ocata release.
16:58:15 <mrhillsman> i just wanted to have proper expectations before troubleshooting
16:58:23 <inc0> the usual question is - did you build the images yourself?
16:58:31 <inc0> anyway
16:58:36 <inc0> let's move to kolla channel
16:58:38 <mrhillsman> followed the guide on building
16:58:39 <mrhillsman> ok
16:58:39 <Jeffrey4l> mrhillsman, check this http://paste.openstack.org/show/628874/
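(For reference, enabling these services is normally a globals.yml change along these lines - a sketch only; option names can differ between releases, so check the globals.yml shipped with your kolla-ansible version:)

```yaml
# /etc/kolla/globals.yml  (sketch; verify option names for your release)
enable_neutron_lbaas: "yes"
enable_octavia: "yes"
```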
16:58:40 <inc0> we ran out of time
16:58:48 <inc0> thank you all for coming!
16:58:53 <inc0> #endmeeting kolla