16:31:59 <sdake> #startmeeting kolla
16:31:59 <sdake> #topic rollcalls
16:32:00 <openstack> Meeting started Wed May 18 16:31:59 2016 UTC and is due to finish in 60 minutes.  The chair is sdake. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:32:01 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:32:03 <openstack> The meeting name has been set to 'kolla'
16:32:09 <inc0> o/
16:32:10 <vhosakot> o/
16:32:16 <mandre> \o/
16:32:19 <Jeffrey4l> o/
16:32:24 <nihilifer> \o
16:32:46 <eghobo> o/
16:32:58 <jpeeler> hi
16:33:21 <mlima> \o/
16:33:37 <coolsvap> hello
16:33:45 <sdake> o/
16:33:59 <sdake> #topic announcements
16:34:26 <sdake> 1. I am working on securing us space for midcycle at Ansible HQ in Durham
16:34:38 <pbourke-mobile2> Hi
16:34:47 <sdake> if that falls through ryan has suggested boston as a possibility
16:34:57 <sdake> i will know by friday if that has fallen through
16:35:18 <sdake> 2. we are getting something like 130 nodes to do kolla scale testing with
16:35:24 <sdake> provided by osic
16:35:30 <pbourke-mobile2> cool
16:35:31 <sdake> inc0  mind adding anything on that topic?
16:35:44 <inc0> well
16:35:47 <inc0> it's not official yet
16:35:59 <vhosakot> yes, Boston is great... near my house! :)
16:36:02 <sdake> oh i thought it was
16:36:07 <inc0> but there is a good possibility that we'll get 132 physical nodes for 3 weeks to perform the tests we want
16:36:32 <vhosakot> sdake: is 130 nodes at the gate ?
16:36:42 <rhallisey> hey
16:36:46 <sdake> vhosakot no, it's bare metal accessible via the internets
16:36:49 <rhallisey> sorry a little late
16:36:53 <inc0> vhosakot, no, it's 130 powerful physical servers we'll have ipmi access to
16:37:14 <sdake> by powerful he means 2 xeon processors and 256GB+ RAM
16:37:19 <dave-mccowan> o/
16:37:20 <vhosakot> inc0: sdake: how do we use it ? for dev/unit-testing/local scale testing ?
16:37:25 <inc0> so we can deploy kolla on them, deploy thousands of vms on it and deploy multi-k kolla on vms;)
16:37:31 <sdake> no we have it for a short period - 3 weeks
16:37:44 <inc0> we need to utilize this time to maximum
16:37:48 <sdake> i think we need to have further discussions on what we want to do with that gear
16:38:00 <sdake> so we can brainstorm in the main topic
16:38:04 <sdake> onto other announcements
16:38:11 <vhosakot> deploy multinode on that and run some rally tests
16:38:16 <inc0> so let's prep tests beforehand, get people working around the clock (thank you timezones) and squeeze as much info as we can
16:38:41 <sdake> ok lets brainstorm later
16:38:43 <mandre> sounds like a plan
16:39:16 <sdake> 3. mlima is up for core reviewer, please vote or abstain - if you plan to abstain please let me know personally so I won't be expecting your vote one way or another tia ;)
16:39:21 <vhosakot> cool, later.. need to discuss how to bare-metal boot them first before deploying kolla on them
16:39:30 <sdake> #topic newton-1
16:40:22 <sdake> #link http://releases.openstack.org/newton/schedule.html
16:40:51 <sdake> everyone has had a nice long two week vacation hopefully after summit
16:40:57 <sdake> it's time to get to work on our objectives for newton
16:41:13 <sdake> everyone has seen the mailing list mail regarding our commitments made at summit
16:41:17 <sdake> I'd like to at least do that work ;)
16:41:32 <sdake> newton 1 is scheduled for the end of May
16:41:44 <sdake> we have about 1-2 weeks left, and we haven't done a lot of work
16:42:07 <sdake> this is normal, as our work is backloaded because we are a release:cycle-trailing project
16:42:33 <sdake> also we have a general commitment to the openstack community to release z streams
16:42:43 <sdake> projects typically do these every 45 days
16:42:57 <sdake> around milestone deadlines
16:43:13 <sdake> we have a slew of bugs in newton-1
16:43:26 <sdake> what i'd suggest is working on bugs that are in newton-1 that are targeted for backporting
16:43:35 <sdake> or working on any of our many objectives
16:43:43 <sdake> anyone have any questions or is blocked in any way?
16:44:17 <vhosakot> what/where are the bugs targeted for newton-1 ?
16:44:30 <coolsvap> vhosakot, https://launchpad.net/kolla/+milestone/newton-1
16:44:44 <vhosakot> cool.. thanks!
16:44:51 <sdake> #link https://launchpad.net/kolla/+milestone/newton-1
16:44:58 <coolsvap> i for one am waiting for some resolution to the rabbitmq bug so that we can have some reliance on gates
16:45:01 <sdake> by "slew" I mean 200+ :)
16:45:16 <coolsvap> its really big list
16:45:19 <sdake> coolsvap ya both Jeffrey4l and I are working on that problem
16:45:25 <vhosakot> it has bp's + bugs
16:45:34 <coolsvap> sdake, yes
16:46:08 <sdake> #topic kolla-kubernetes bootstrapping
16:46:23 <sdake> so I'm not sure what Ryan had in mind here
16:46:27 <sdake> take it away :)
16:46:30 <rhallisey> :)
16:46:48 <rhallisey> I wanted to have a discussion about bootstrapping in kolla-kube
16:46:50 <rhallisey> https://review.openstack.org/#/c/304182/
16:47:05 <rhallisey> lines 79-108
16:47:19 <rhallisey> there are 3 scenarios around bootstrapping
16:47:44 <rhallisey> I'll highlight the problem with the main discussed approach
16:48:13 <rhallisey> the idea of having a bootstrap task in kolla-kubernetes may be problematic
16:48:26 <rhallisey> because kubernetes handles upgrades in a single operation
16:48:33 <rhallisey> scale down -> scale up
16:49:08 <rhallisey> if we don't have db_sync built into our container, we can't use kubernetes to handle service upgrades.  It would have to be done by ansible
16:49:17 <rhallisey> or some other orchestration
16:49:35 <sdake> during upgrades, kubernetes can't run a special container?
16:49:51 <sdake> there surely must be a way to do some migration work
16:50:04 <vhosakot> rhallisey: so, can we use kubernetes upgrade to do bootstrapping (initialize the db and create user) ?
16:50:08 <rhallisey> that would require some orchestration
16:50:28 <inc0> non trivial one as well
16:50:32 <sdake> isn't there a jobs object?
16:50:32 <rhallisey> ya
16:50:55 <nihilifer> rhallisey: so,  do you see any alternatives more than implementing all the "bootstrap" inside start scripts?
16:50:58 <inc0> technically we could run db_sync every time
16:51:00 <rhallisey> sdake, there is, but we can't just run a job when kubernetes thinks it's time to upgrade
16:51:21 <nihilifer> and run every time of course
16:51:22 <sdake> sounds like a gap in the kubernetes implementation
16:51:31 <eghobo> rhallisey: why not take your initial approach and start the job which will do the database upgrade
16:51:33 <rhallisey> nihilifer, I don't, other than orchestration and not using native kubernetes
16:51:56 <sdake> imo we should try to use native k8s as much as possible for kolla-kubernetes
16:52:07 <vhosakot> yes.. good point
16:52:09 <pbourke-mobile-3> Could you optionally bootstrap + start in one go
16:52:12 <sdake> and if it has gaps we should feed those to upstream to fix them
16:52:20 <rhallisey> pbourke, yes
16:52:28 <nihilifer> yes, and if not possible - try to push for features in k8s (i'll have some "announce" on that on the end of meeting ;) )
16:52:34 <rhallisey> the solution I'm talking about is:
16:52:57 <rhallisey> bootstrapping will bootstrap the db with a special container in the pod.  Then the service runs after
16:53:27 <rhallisey> upgrade will do a db_sync in a special container, then services get scaled back up
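[editor's note] The bootstrap flow rhallisey describes (run a one-off container in the pod, then start the service) roughly maps onto what Kubernetes formalized as init containers. The sketch below is a hypothetical illustration, not kolla-kubernetes code; at the time of this meeting init containers existed only as an alpha annotation, and the image and command names are assumptions:

```yaml
# Hypothetical sketch: a bootstrap container runs db_sync to completion,
# and only then does the service container start. All names illustrative.
apiVersion: v1
kind: Pod
metadata:
  name: keystone
spec:
  initContainers:
    - name: bootstrap
      image: kolla/centos-binary-keystone      # assumed image name
      command: ["keystone-manage", "db_sync"]
  containers:
    - name: keystone
      image: kolla/centos-binary-keystone
```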
16:53:33 <sdake> rhallisey seems like this is more of a ml discussion :)
16:53:39 <inc0> other issue is, upgrade is more complex than just db_sync
16:53:51 <rhallisey> inc0, agreed
16:53:53 <inc0> for example nova requires some more logic to make it rolling
16:53:57 <sdake> i'd focus on getting AIO deployed first, upgrade second ;)
16:54:07 <rhallisey> sdake, ya I agree. Just wanted to highlight this for everyone
16:54:11 <sdake> there are a slew of problems with just deploying k8s
16:54:15 <eghobo> rhallisey: could you elaborate pod vs job?
16:54:15 <rhallisey> so keep an eye on the spec
16:54:39 <rhallisey> eghobo, pod is a group of running containers. A job is a task that kubernetes will execute until completion
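[editor's note] A Job, as rhallisey defines it here, might look like the following hedged sketch; the image and command are illustrative assumptions, not kolla-kubernetes code:

```yaml
# Hypothetical Job: kubernetes reruns the pod until the db_sync completes.
apiVersion: batch/v1
kind: Job
metadata:
  name: keystone-db-sync
spec:
  template:
    spec:
      restartPolicy: OnFailure                   # retry on failure
      containers:
        - name: db-sync
          image: kolla/centos-binary-keystone    # assumed image name
          command: ["keystone-manage", "db_sync"]
```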
16:55:04 <rhallisey> sdake, ya i agree. I just don't want to make the wrong decision with bootstrapping and have to remake it with upgrades
16:55:04 <eghobo> rhallisey: i know it ;)
16:55:13 <sdake> rhallisey i think what we need in the upgrade construct is the ability to run a job
16:55:33 <sdake> rhallisey its ok to experiment, i expect this project will take a long time to mature
16:55:54 <eghobo> rhallisey: i am curious why do you need upgrade through special pod vs job
16:55:55 <sdake> look at all the changes we went through with kolla
16:55:57 <rhallisey> sdake, ya a job would be ideal.  It would tie it well to inc0's point
16:56:05 <sdake> since the beginning we have changed architectures at least 4 times :)
16:56:18 <rhallisey> eghobo, it wouldn't be a special pod. It would be the same pod used during a deployment
16:56:45 <vhosakot> rhallisey: so, are you saying the native k8s 'upgrade' job can be used to bootstrap services, and a separate task bootstrap.yml is not needed ?
16:57:36 <rhallisey> vhosakot, I'm saying that we need to build these tasks (db_sync, <some upgrade magic>) into the deployment template
16:57:43 <sdake> #action rhallisey to start thread related to upgrade of kolla-kubernetes (the openstack part) :)
16:57:46 <rhallisey> the deployment template describes the pod
16:57:52 <vhosakot> ah ok
16:57:58 <rhallisey> sounds good sdake thanks
16:58:00 <sdake> sure
16:58:05 <rhallisey> just wanted to raise awareness
16:58:05 <sdake> #topic ansible 2.0
16:58:20 <sdake> we want it all, and we want it now
16:58:30 <rhallisey> #link https://blueprints.launchpad.net/kolla/+spec/ansible2
16:58:45 <Jeffrey4l> this PS works. https://review.openstack.org/317421
16:58:54 <rhallisey> nice Jeffrey4l
16:58:59 <Jeffrey4l> may need some small work.
16:59:00 <sdake> cool Jeffrey4l already did the job
16:59:21 <rhallisey> so based off the patch above, people can start grabbing services and converting the tasks
16:59:40 <inc0> rhallisey, not really, you don't need to convert anything
16:59:48 <inc0> plays work as they used to
16:59:54 <rhallisey> oh that will work as is
17:00:04 <inc0> yup
17:00:07 <rhallisey> excellent
17:00:10 <Jeffrey4l> correct. the gate is green now.
17:00:19 <rhallisey> cool nice Jeffrey4l
17:00:21 <vhosakot> nice Jeffrey4l!
17:00:23 <inc0> however we can do more, ansible 2 has some stuff that we may use
17:00:23 <rhallisey> got nothing else
17:00:33 <inc0> refactor our modules to be more ansible 2'ish
17:00:38 <sdake> ya the execution strategies
17:00:49 <pbourke-mobile-3> I think there's some todos around "once we have ansible 2"
17:00:51 <rhallisey> I thought there would need to be some task changes
17:01:17 <rhallisey> ok well whatever they are, can they be highlighted in the blueprint?
17:01:35 <pbourke-mobile-3> Set an action and ill have a look
17:01:36 <sdake> need folks to add work items to the blueprint
17:01:42 <vhosakot> refactor meaning use the new modules in Ansible 2.0 ?
17:01:56 <sdake> #action everyone to add their ideas for ansible 2.0 to work items in the ansible2 blueprint
17:02:10 <vhosakot> cool
17:02:14 <pbourke-mobile-3> Thanks!
17:02:18 <sdake> vhosakot we are never using ansible's docker module again
17:02:22 <sdake> too painful
17:02:30 <sdake> i prefer to be in control of our own destiny
17:02:38 <vhosakot> cool
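[editor's note] For context on the "execution strategies" sdake mentions, Ansible 2 lets a play opt out of lockstep task execution; a minimal sketch, with a hypothetical host group and task:

```yaml
# Hypothetical play: with strategy "free", each host runs through its tasks
# as fast as it can instead of waiting for the slowest host at every task.
- hosts: all
  strategy: free
  tasks:
    - name: pull an image without waiting for slower hosts
      command: docker pull kolla/centos-binary-base   # illustrative task
```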
17:03:27 <sdake> #topic gating is busted
17:03:43 <sdake> centos in some way is broken
17:03:48 <sdake> i think jeffrey4l has a fix
17:04:01 <sdake> Jeffrey4l mind describing what you know about the issue
17:04:25 <Jeffrey4l> yes. i am still doing some testing on this.
17:05:00 <Jeffrey4l> PS 30 shows the current issue https://review.openstack.org/#/c/315860/30
17:05:21 <Jeffrey4l> the hostname is not changed as expected on the centos gate.
17:05:59 <Jeffrey4l> i have no idea why. :(
17:06:17 <Jeffrey4l> because i can not reproduce it locally.
17:06:20 <sdake> our speculation is that ansible hostname module is broken
17:06:33 <sdake> possibly related to selinux
17:06:56 <sdake> because a shell sudo hostname change works, right Jeffrey4l ?
17:07:00 <coolsvap> i am thinking along similar lines, and similarly i am not able to reproduce it locally
17:07:10 <Jeffrey4l> sdake, correct.
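[editor's note] The workaround under discussion, replacing the ansible hostname module with a plain shell call, might look like this sketch (not the actual patch; task wording assumed):

```yaml
# Hypothetical Ansible task: bypass the hostname module, which appears
# broken on the CentOS gate, by shelling out instead.
- name: set hostname via shell as a gate workaround
  become: true
  shell: hostname {{ inventory_hostname }}
```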
17:07:23 <sdake> Jeffrey4l ok well lets unblock the gate asap
17:07:39 <Jeffrey4l> yes.
17:07:43 <sdake> and if you want to further debug - revert the patch in your review above
17:07:49 <sdake> (the patch that fixes the problem)
17:07:56 <sdake> that way you can debug if you like
17:08:18 <Jeffrey4l> anyway, I will push the workaround soon.
17:08:30 <sdake> Jeffrey4l if you can get a shell sudo up for review, and it passes the gate, i'd like quick reviews on it today pls :)
17:08:41 <Jeffrey4l> roger.
17:08:46 <sdake> it is sort of blocking all dev
17:08:57 <sdake> because our core team is rightly trained not to ack changes that don't pass the gate
17:09:20 <Jeffrey4l> i am working on that. the latest PS is trying to fix the gate.
17:09:24 <sdake> #topic threat analysis
17:09:35 <sdake> at summit we had a 4 hour threat analysis session
17:09:38 <sdake> we got about half way done
17:09:45 <sdake> we identified all the modes of operation of our containers
17:09:50 <sdake> and identified the special snowflakes
17:10:05 <sdake> if you're on this list:
17:10:07 <sdake> #link https://launchpad.net/~kolla-coresec/+members
17:10:37 <sdake> and no longer want to participate in obtaining the VMT tag because you have other responsibilities now, please let me know asap
17:10:43 <sdake> so i can recruit others
17:10:45 <sdake> we need a team of 5 people
17:11:01 <sdake> i'd like people on this team to prioritize this work ahead of other work
17:11:22 <sdake> any questions mandre nihilifer inc0 rhallisey ?
17:11:47 <sdake> the next step is for everyone on that list to attend the next openstack security meeting
17:11:59 <sdake> and attend for 4-6 weeks while we finish the job on TA
17:12:43 <sdake> after our VMT is done, if anyone would like to act as liaison to the VMT and security teams (in place of me) I would super appreciate a volunteer
17:12:58 <inc0> where can we find list of stuff left to be done?
17:13:00 <sdake> I am super overloaded with all the liaison stuff going on inside openstack atm
17:13:19 <sdake> inc0 there is no list - something we need to define in the security meeting thursday :)
17:13:31 <inc0> ok
17:13:38 <sdake> or alternatively on the mailing list
17:13:51 <sdake> we need to make some sequence diagrams
17:13:54 <sdake> one is already made
17:14:03 <sdake> and then someone needs to wrap it up in a pretty document
17:14:11 <sdake> I'll be happy to do the wrapping
17:14:13 <sdake> but need assistance on the diagrams
17:14:14 <mandre> security team meeting happening on Thu 17:00 UTC
17:14:16 <mandre> http://eavesdrop.openstack.org/#Security_meeting
17:15:08 <sdake> #topic open discussion
17:15:15 <sdake> nihilifer you had something you wanted to announce?
17:16:11 <sdake> anyone else have open items they would like to discuss?
17:16:15 <nihilifer> yes. i'd like to say that i will participate in k8s community itself - which includes writing code as well
17:16:43 <sdake> nihilifer so I think what this means is you will be less involved in kolla
17:16:43 <nihilifer> so if you have any features you would like to see in k8s, don't hesitate to contact me
17:16:54 <rhallisey> nick-ma, oh sweet
17:16:58 <inc0> cool
17:17:02 <rhallisey> nihilifer, nic
17:17:04 <rhallisey> nice
17:17:06 <rhallisey> nick-ma, sorry
17:17:17 <rhallisey> so I talked to an ansible guy I know
17:17:26 <nihilifer> yes, unfortunately it means i'll be less active in kolla, but i'll try to keep some reviews and commits
17:17:46 <nihilifer> so i don't stand out badly from the core team ;)
17:17:51 <coolsvap> nihilifer, nice all the best!
17:17:53 <sdake> nihilifer cool - well thanks for your service thus far, and hope you can stick with it, if not I totally understand the two projects at one time thing
17:17:59 <rhallisey> nihilifer, cool :)
17:18:16 <vhosakot> nihilifer: cool, and all the best!
17:18:21 <sdake> nihilifer you are a pleasure to work with
17:18:26 <sdake> but lets not send him off that easy ;)
17:18:29 <rhallisey> ansible: ansible guy I know was interested in bringing in our work around the docker module into 2.x
17:18:43 <sdake> rhallisey interesting
17:18:44 <sdake> expand plz
17:18:54 <nihilifer> thanks guys :)
17:19:11 <rhallisey> I spoke to him at summit. He's someone that used to work in openstack
17:19:25 <rhallisey> I can start a thread with him
17:19:34 <rhallisey> gauge his interest
17:19:34 <sdake> does this ansible person have a name?
17:19:45 <rhallisey> Ryan.. forget his last name
17:19:51 <mandre> Brady
17:19:53 <rhallisey> yes
17:20:03 <sdake> yup I hired him
17:20:05 <sdake> know him well :)
17:20:21 <rhallisey> ya well he was interested in it
17:20:44 <rhallisey> so we could work with him if we want to move back to ansible maintaining the module
17:21:01 <rhallisey> well they do already, but he'd be fixing the issues we need
17:21:13 <sdake> i am not super hot on that idea
17:21:30 <sdake> fool me once, shame on you; fool me twice, shame on me
17:21:33 <sdake> or something ;)
17:21:57 <inc0> well tbh there is a small chance of us being screwed like that again
17:22:15 <rhallisey> well I figured I would just throw it out there
17:22:16 <inc0> as worst case scenario we can freeze docker version on 1.10 and ansible on whatever is working
17:22:23 <sdake> rhallisey we got super screwed over by docker and ansible when nobody would fix the mess made
17:22:24 <inc0> we have features we need already
17:22:52 <sdake> and it is a lot of work to port back over to the ansible modules
17:22:52 <sdake> with little gain
17:22:54 <sdake> and a lot of pain
17:22:56 <rhallisey> sdake, we tried to fix the issues and no one listened, but now we do
17:23:01 <rhallisey> have someone that will listen
17:23:14 <sdake> i get there is an environmental change
17:23:15 <sdake> my wife harasses me about it daily
17:23:16 <rhallisey> idk just a thought
17:23:31 <inc0> worth to consider imho
17:23:37 <sdake> but the docker module is too important to us to be left to others
17:23:50 <inc0> but well, sdake is right, we'd need to recreate most of the tasks
17:23:55 <sdake> i really dont like pinning
17:23:59 <rhallisey> sdake, I mean ansible gains when they make kolla happy
17:24:08 <Jeffrey4l> i do not think it is a good idea back to use the ansible docker
17:24:15 <sdake> i am fully satisfied with ansible
17:24:34 <sdake> i am not satisfied with one cat maintaining modules on which our entire community depends
17:24:45 <sdake> what if he gets hit by an airbus?
17:24:57 <rhallisey> not saying it would him, but rather the community may care
17:24:57 <Jeffrey4l> we added a lot of features to kolla_docker.py which the ansible guys will not accept, i think.
17:25:00 <rhallisey> who knows
17:25:18 <inc0> getting hit by airbus is awfully specific way to die
17:25:22 <rhallisey> s/would him/would be him
17:25:33 <inc0> boeings are equally dangerous
17:25:50 <vhosakot> if we use ansible's docker module, will it replace kolla_docker.py, which uses the docker python module?  kolla_docker.py is great
17:26:11 <sdake> there is zero wrong with kolla_docker.py
17:26:15 <sdake> it does exactly what we need
17:26:18 <sdake> and we know how to maintain it
17:26:19 <vhosakot> yep.. it is awesome
17:26:28 <sdake> it is easy to extend
17:26:43 <sdake> and it does precisely the job we require and nothing more
17:27:00 <vhosakot> yes, agreed
17:27:26 <sdake> listen I am as hard core open source as anyone out there
17:27:32 <vhosakot> was kolla_docker.py written because ansible's docker module is bad ?
17:27:34 <sdake> in this case, pragmatism wins
17:27:41 <inc0> vhosakot, correct
17:27:43 <sdake> vhosakot broken and unmaintained
17:27:48 <sdake> completely broken
17:27:51 <vhosakot> ah ok..
17:28:00 <inc0> this big backport was caused by this
17:28:09 <inc0> ultimately
17:28:20 <sdake> ya the charlie foxtrot of the liberty rebranching was caused by ansible not maintaining the docker module properly
17:28:32 <vhosakot> ah I see
17:28:51 <Jeffrey4l> kolla_docker.py also has a `common_option` param, which reduces the ansible code lines in kolla.
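[editor's note] Jeffrey4l's point: kolla_docker lets shared parameters be passed once per task instead of being repeated everywhere. A hedged illustration; the parameter, variable, and image names are assumptions, not verified against the kolla tree:

```yaml
# Hypothetical kolla_docker task: common_options bundles the settings shared
# by every container task (registry, auth, etc.) into one variable.
- name: start a service container
  kolla_docker:
    common_options: "{{ docker_common_options }}"
    action: start_container
    name: example_service
    image: "kolla/centos-binary-example:latest"   # illustrative image
```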
17:29:14 <sdake> ok 1 minute ;)
17:29:25 <sdake> thanks for coming folks
17:29:30 <sdake> lets get cracking on newton
17:29:43 <sdake> we have already done a lot of work in n1 prior to summit
17:29:43 <sdake> time to get back to work :)
17:29:45 <sdake> #endmeeting