16:01:08 <adrian_otto> #startmeeting containers
16:01:09 <openstack> Meeting started Tue Oct 13 16:01:08 2015 UTC and is due to finish in 60 minutes.  The chair is adrian_otto. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:01:10 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:01:14 <openstack> The meeting name has been set to 'containers'
16:01:16 <adrian_otto> #link https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2015-10-13_1600_UTC Our Agenda
16:01:21 <adrian_otto> #topic Roll Call
16:01:22 <mfalatic> o/
16:01:24 <adrian_otto> Adrian Otto
16:01:24 <daneyon_> o/
16:01:27 <Drago> o/
16:01:27 <wanghua> o/
16:01:27 <muralia1> o/
16:01:30 <jay-lau-513_> o/
16:01:32 <rlrossit> o/
16:01:34 <rpothier> o/
16:01:37 <hongbin> o/
16:01:43 <eliqiao> o/
16:01:46 <Madhuri> O/
16:01:47 <fawadkhaliq> o/
16:01:48 <bradjones__> o/
16:01:55 <dimtruck> o/
16:01:56 <apmelton> o/
16:02:19 <dane_leblanc> o/
16:02:30 <eghobo> o/
16:02:32 <Tango> Ton Ngo
16:04:23 <tcammann> hello
16:05:07 <wznoinsk> o/
16:05:14 <adrian_otto1> whoops, I had a local network glitch, sorry about that.
16:05:36 <adrian_otto1> #topic Announcements
16:05:39 <adrian_otto1> 1) Our PTL Election is complete. Based on the results, I will continue as your PTL for the Mitaka release.
16:05:54 <daneyon_> Congratulations adrian_otto1
16:05:57 <muralia1> cool. congrats adrian
16:05:59 <Madhuri> Congratulations
16:06:02 <Tango> Congrats!
16:06:02 <jay-lau-513_> adrian_otto1 congrats!
16:06:02 <bradjones__> congrats
16:06:05 <eliqiao> con!
16:06:06 <dimtruck> congrats!
16:06:06 <hongbin> congrats
16:06:11 <wanghua> con!
16:06:12 <mfalatic> congratulations!
16:06:18 <adrian_otto1> I am proud to be part of such a terrific team, thank you all.
16:06:25 <adrian_otto1> 2) The release is being cut tonight for stable/liberty, and all open work will need to be resubmitted against that branch.
16:06:39 <adrian_otto1> we have a bunch of straggling bits that we need to land
16:06:59 <suro-patz> * joining late *
16:07:26 <vilobhmm11> hi all
16:07:26 <adrian_otto1> so my guidance here is: if you don't have a significant objection to merging the work relating to our essential blueprints, we merge it with tech debt filed against it
16:07:40 <daneyon_> adrian_otto1 do you have the etherpad that we used to track what needs to land for L?
16:07:52 <adrian_otto> yes, one moment
16:08:05 <vilobhmm11> https://etherpad.openstack.org/p/magnum-liberty-release-todo is this the one
16:08:15 <daneyon_> adrian_otto1 It would be nice to review the ep and cross off/update to make sure we're not missing anything.
16:08:28 <adrian_otto> vilobhmm11: yes, that's it, thanks
16:08:49 <vilobhmm11> adrian_otto : np
16:08:50 <adrian_otto> so if we can manage to merge all of that by tonight, then great
16:08:59 <daneyon_> #link https://etherpad.openstack.org/p/magnum-liberty-release-todo
16:09:09 <daneyon_> adrian_otto1 i found it ^
16:09:09 <adrian_otto> if not, then we will need to abandon those reviews and resubmit them against the new branch
16:09:20 <daneyon_> nevermind
16:09:21 <adrian_otto> or release without them
16:09:52 <adrian_otto> continuing with announcements:
16:10:05 <diga> o/
16:10:11 <wanghua> what about the features which are not complete?
16:10:14 <adrian_otto> 3) We will not have a meeting the Tuesday after next because of the OpenStack Summit in Tokyo
16:10:27 <adrian_otto> I have updated the meeting schedule accordingly.
16:10:49 <adrian_otto> wanghua: we need to evaluate them individually
16:11:12 <eliqiao> will the meeting time be changed after the summit?
16:11:26 <adrian_otto> based on what I have seen in review it's probably smarter to merge what we have up for review rather than releasing without them
16:11:39 <diga> adrian_otto: Congrats for becoming PTL once again for Mitaka :)
16:11:48 <adrian_otto> and then revisit concerns as a follow-up pursuit
16:11:54 <adrian_otto> tx diga
16:12:01 <Madhuri> Agree
16:12:09 <vilobhmm11> +1
16:12:53 <adrian_otto> the good news is that our list is only half as long as last week, but we need to draw the line now.
16:13:15 <adrian_otto> I am still willing to run demos for the Magnum session using code from master
16:13:28 <adrian_otto> but the code in the release needs to work
16:13:48 <adrian_otto> and I am also willing to continually cut revisions as we add meaningful features
16:14:01 <adrian_otto> I am willing to cut a release every day if that makes sense
16:14:20 <adrian_otto> the OpenStack release process for us is really not that hard
16:14:35 <adrian_otto> ok, any more announcements from team members?
16:14:42 <suro-patz> adrian_otto: daily release would be useful, if we have some validation commitment
16:15:10 <adrian_otto> I will be raising the functional testing topic just before open discussion
16:15:19 <adrian_otto> sorry I forgot to place that on the agenda wiki page
16:15:40 <adrian_otto> #topic Container Networking Subteam Update (daneyon_)
16:15:56 <adrian_otto> #link http://eavesdrop.openstack.org/meetings/container_networking/2015 Previous Meetings
16:16:03 <daneyon_> thanks
16:16:11 <daneyon_> we had our usual meeting last week
16:16:24 <daneyon_> there are a few action items from the meeting that I would like to address
16:16:38 <daneyon_> 1) ACTION: danehans to address how to add new drivers with adrian_otto
16:17:10 <daneyon_> Does anyone have an opinion on how we should support additional network drivers?
16:17:53 <daneyon_> Until the heat templates get refactored, it will be difficult to have the drivers out of tree.
16:18:24 <diga> daneyon_: I think better we can use kuryr API's internally as we have VIF support now
16:18:27 <daneyon_> For the time being, drivers will either be separate heat template fragments or conditional logic added to existing fragments.
16:18:57 <adrian_otto> hmm, sounds a bit messy
16:18:59 <daneyon_> but what about drivers that do not fall under Kuryr
16:19:27 <hongbin> Ideally, each driver should be mapped to a heat resource
16:19:28 <daneyon_> This could be a lengthy discussion that we will need to address at the design summit
16:19:38 <eghobo> diga: we cannot use kuryr, because it's Docker-specific, not COE-agnostic
16:19:46 <adrian_otto> daneyon_: ok, let's table this
16:19:49 <juggler> o/ [some IRC client technical difficulties..]
16:19:52 <diga> okay
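The interim approach daneyon_ describes above — mapping each network driver to a heat template fragment until drivers can live out of tree — could be sketched roughly as below. This is purely illustrative, not Magnum code: the names `NETWORK_DRIVER_FRAGMENTS` and `select_fragment`, and the fragment paths, are all hypothetical.

```python
# Hypothetical sketch: pick a heat template fragment based on the
# baymodel's network driver, falling back to a default driver.
# None of these names or paths exist in Magnum; they only illustrate
# the "driver == template fragment" idea discussed above.

NETWORK_DRIVER_FRAGMENTS = {
    "flannel": "fragments/network-flannel.yaml",
    "calico": "fragments/network-calico.yaml",
}

def select_fragment(driver_name, default="flannel"):
    """Return the fragment path for a driver, defaulting when unset."""
    name = (driver_name or default).lower()
    try:
        return NETWORK_DRIVER_FRAGMENTS[name]
    except KeyError:
        raise ValueError("unsupported network driver: %s" % name)
```

A driver that "does not fall under Kuryr" would then just be another entry in the mapping, at the cost of carrying its fragment in-tree until the template refactor lands.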
16:19:57 <daneyon_> 2) ACTION: danehans to follow-up with adrian_otto regarding summit schedule and details.
16:20:03 <adrian_otto> I have an action item for closing out the topics for Tokyo
16:20:15 <daneyon_> I am trying to coordinate with gsagie from the kuryr team.
16:20:20 <daneyon_> OK
16:20:25 <adrian_otto> I have a tool I can use to update the titles and abstracts in the program
16:20:43 <daneyon_> Otherwise, the swarm patch is complete: https://review.openstack.org/#/c/224367/
16:20:52 <adrian_otto> so I'll be making the selections based on the topics wiki we referenced last week
16:20:56 <adrian_otto> WHOOT
16:21:17 <daneyon_> I know it's a big one. I could chip away some of the code, but aligning the swarm templates with the k8s templates makes the patch look bigger than it is
16:21:25 <juggler> excellent daneyon!
16:21:26 <adrian_otto> that was around 2000 lines of change
16:21:44 <daneyon_> This is b/c the top-level yaml (swarm.yaml) has the master resource pulled out into master.yaml.
16:21:47 <adrian_otto> next time, let's try to break that work up a bit more
16:21:57 <adrian_otto> so it will be easier to review and merge
16:22:05 <daneyon_> Again, I am trying to make the swarm templates look as much like the k8s templates as possible.
16:22:18 <adrian_otto> yes, that was the bulk of the change set
16:22:19 <daneyon_> adrian_otto will do
16:22:28 <daneyon_> dane_leblanc is testing the patch
16:22:40 <daneyon_> I believe apmelton will too.
16:22:47 <adrian_otto> but I urge reviewers not to -1 that particular patch on that basis
16:23:06 <adrian_otto> but that we offer our contributors guidance as the work comes in
16:23:07 <daneyon_> It would be a big help if the cores can do a review when time permits
16:23:20 <adrian_otto> will do, daneyon_
16:23:23 <vilobhmm11> will have a look
16:23:27 <daneyon_> I know we have big fish to fry to get L out the door, so I'm not sweating it.
16:23:32 <daneyon_> Thanks all.
16:23:40 <daneyon_> That's it from me unless there are questions.
16:24:10 <adrian_otto> we can take questions in open discussion
16:24:13 <eliqiao> I'd like to see test results after every template change since we don't have functional testing yet.
16:24:19 <adrian_otto> thanks daneyon_
16:24:29 <adrian_otto> #topic Magnum UI Subteam Update (bradjones__)
16:24:34 <bradjones__> hey
16:24:47 <bradjones__> so main update this week is a big refactor of the bay model table ui
16:24:53 <bradjones__> #link https://review.openstack.org/#/c/212039/
16:25:00 <bradjones__> really need that patch to land asap
16:25:31 <bradjones__> I have managed to rope Rob Cresswell who works on horizon to help out with some new blueprints
16:25:51 <bradjones__> He is going to be working on the UI for Containers
16:26:09 <bradjones__> so taking the work I have done for bay models and bays and moving it for that resource
16:26:34 <bradjones__> I don't think there is anything up for review yet but he talked me through what is there and it looks good so far
16:26:44 <bradjones__> so hopefully we can get that in before tokyo too
16:27:03 <adrian_otto> Chris Hoge from the OpenStack Foundation asked about this. There is an opportunity to showcase this as part of the Liberty release marketing.
16:27:14 <adrian_otto> but what's there looks pretty thin
16:27:41 <daneyon_> awesome
16:27:45 <adrian_otto> but we don't have any more time really
16:28:16 <bradjones__> adrian_otto: once the review I mentioned previously goes in, in addition to the create view that will be up shortly, there is actually a usable UI
16:28:26 <adrian_otto> bradjones__: I wanted to get a sense from you how much functionality is still up for review that is close to landing
16:28:39 <adrian_otto> I voted on the one you mentioned
16:28:53 <bradjones__> adrian_otto: ah yes I see thanks
16:29:13 <adrian_otto> ok, so what can we do to help you fast-track the create view?
16:29:46 <bradjones__> I will push it up for review in the next hour or so, then if we can just get it merged as quick as possible
16:29:58 <bradjones__> once that is done if a few people would actually run it
16:30:08 <bradjones__> and test the workflow seems good that would be really useful feedback
16:30:36 <adrian_otto> ok, thanks bradjones__
16:30:44 <adrian_otto> any more on this topic before we advance ?
16:30:54 <bradjones__> I think that's all for now thanks
16:31:00 <adrian_otto> thanks bradjones__
16:31:04 <adrian_otto> #topic Review Action Items
16:31:13 <adrian_otto> 1) adrian_otto to check into finalizing our summit discussion topic schedule, and release it for addition to the main schedule
16:31:18 <adrian_otto> Status: in progress
16:31:23 <adrian_otto> #action adrian_otto to check into finalizing our summit discussion topic schedule, and release it for addition to the main schedule
16:31:38 <adrian_otto> that concludes action items from last week
16:31:41 <adrian_otto> #topic Blueprint/Bug Review
16:32:02 <adrian_otto> Essential Blueprint Updates
16:32:07 <adrian_otto> #link https://blueprints.launchpad.net/magnum/+spec/objects-from-bay Obtain the objects from the bay endpoint (vilobhmm11)
16:32:19 <adrian_otto> three reviews are still up for this.
16:32:22 <vilobhmm11> https://review.openstack.org/#/q/status:open+project:openstack/magnum+branch:master+topic:bp/objects-from-bay,n,z we have +1 for pod/rc patches…
16:32:28 <vilobhmm11> from Jenkins
16:32:45 <vilobhmm11> jay lau, hongbin and adrian_otto thanks for the review
16:33:10 <adrian_otto> there are remaining comments from hongbin on https://review.openstack.org/223367
16:33:21 <adrian_otto> that were not solved in the most recent patchset
16:33:23 <vilobhmm11> hongbin has nit comments on these patches asking to rename a variable
16:33:33 <adrian_otto> should be a simple fix for those
16:33:47 <adrian_otto> he just asked for a few variables to be renamed
16:33:49 <vilobhmm11> yes adrian_otto, after the meeting I will upload the variable name change
16:33:54 <adrian_otto> ok, thanks
16:33:54 <vilobhmm11> yes you are right
16:34:06 <hongbin> thx
16:34:08 <vilobhmm11> so need reviews with these patches
16:34:23 <adrian_otto> ok, after we have those merged, we can mark this BP as Implemented, correct?
16:34:33 <vilobhmm11> yes adrian_otto
16:34:49 <adrian_otto> excellent! Let's do it today.
16:34:53 <vilobhmm11> ok
16:34:56 <vilobhmm11> thanks!
16:35:02 <vilobhmm11> thats it from my side
16:35:23 <adrian_otto> thanks vilobhmm11
16:35:26 <adrian_otto> #link https://blueprints.launchpad.net/magnum/+spec/secure-kubernetes Secure the client/server communication between ReST client and ReST server (madhuri)
16:36:04 <Madhuri> Actual functionality patches have been merged
16:36:13 <Madhuri> Just guide is remaining
16:36:26 <Madhuri> That needs a revision
16:36:40 <adrian_otto> ok, great
16:37:00 <adrian_otto> will we be able to get the guide done before we branch tonight?
16:37:07 <hongbin> Madhuri: I'd like to see the guide landed in the L release, if you can make it
16:37:33 <Madhuri> actually I don't have magnum env to test that guide
16:37:45 <hongbin> Madhuri: I can test it
16:37:46 <adrian_otto> aah, we can help with that
16:37:49 <Madhuri> Can we merge with tech-debt
16:38:04 <Madhuri> Will be a great help
16:39:03 <adrian_otto> I can give you a fresh environment off of master if you want to run it through manual tests
16:38:47 <Madhuri> Adrian I will not be able to use your env, no internet to do that now
16:38:56 <adrian_otto> oh, ok
16:38:58 <Madhuri> I am currently online on phone
16:39:03 <adrian_otto> yikes!
16:39:14 <Madhuri> Hongbin can you do that?
16:39:15 <juggler> yikes indeed!
16:39:20 <hongbin> yes
16:39:28 <adrian_otto> okay, can I have a volunteer to run through the doc to verify it and record any gaps as bugs against it?
16:39:37 <Madhuri> Thanks hongbin
16:39:43 <adrian_otto> thanks hongbin
16:39:45 <hongbin> np
16:39:47 <adrian_otto> <3
16:40:03 <Madhuri> I think that will complete our bp
16:40:03 <eghobo> adrian_otto: i did yesterday and it works
16:40:12 <adrian_otto> oh, that's terrific!!
16:40:25 <Madhuri> A few improvements are needed but that can be done later I guess
16:40:29 <adrian_otto> thanks eghobo
16:40:38 <juggler> +1 eghobo
16:40:45 <adrian_otto> yes, let's merge and iterate on it
16:40:49 <Madhuri> Can we merge it, if eghobo has tested it?
16:40:58 <adrian_otto> yes.
16:40:59 <Madhuri> +1
16:41:04 <wanghua> +1
16:41:12 <adrian_otto> and we can continue to scrutinize the release branch
16:41:20 <Madhuri> Yes sure
16:41:44 <juggler> thank you Madhuri
16:42:05 <Madhuri> That's all
16:42:11 <Madhuri> Thanks
16:42:35 <adrian_otto> great, thanks Madhuri
16:42:42 <adrian_otto> #link https://blueprints.launchpad.net/magnum/+spec/external-lb Support the kubernetes service external-load-balancer feature (Tango)
16:42:46 <adrian_otto> Implemented, right?
16:42:58 <Madhuri> Yes 😊
16:42:59 <Tango> All the current patches merged, thanks everyone for the reviews
16:43:11 <vilobhmm11> Tango : +1
16:43:21 <adrian_otto> I'd like to de-scope the functional test
16:43:25 <Tango> I opened 2 tech debt bugs for user credential and functional test
16:43:37 <adrian_otto> scope that into another child blueprint
16:43:50 <Tango> One minor tweak to the doc, will try to get that in
16:43:51 <adrian_otto> ok, or bugs are ok
16:44:09 <Tango> Either way is OK
16:44:14 <adrian_otto> drop the #8 work item, and link the bugs to the BP
16:44:31 <adrian_otto> then we can address them as follow-ups
16:44:46 <Tango> ok, sounds good.  Are you creating the child BP? or should I do that?
16:45:02 <adrian_otto> yes, please take that. I am here if you need any help.
16:45:10 <Tango> ok, I will do that
16:45:12 <adrian_otto> #link https://blueprints.launchpad.net/magnum/+spec/secure-docker Secure client/server communication using TLS (apmelton)
16:45:21 <adrian_otto> I think this is Implemented too
16:45:27 <apmelton> adrian_otto: yea, that was done last week
16:45:40 <adrian_otto> sweet, that's it...
16:45:43 <adrian_otto> next topic
16:45:55 <adrian_otto> #topic Functional Testing Strategy
16:46:07 <adrian_otto> maybe less of a strategy and more of a tactical plan
16:46:16 <adrian_otto> we need a way for more testing to happen in parallel
16:46:33 <adrian_otto> because we simply can't wait hours for all of our functional tests to run
16:46:51 <adrian_otto> so we either need the tests to be faster, or for more to happen in parallel… my idea…
16:46:57 <hongbin> One thing is that we can move the magnumclient test to the python-magnumclient project
16:47:01 <eliqiao> we can split functional testing per COE
16:47:17 <adrian_otto> use the 3rd party CI feature to set up a farm of machines that all do a grouping of functional tests… per COE
16:47:22 <adrian_otto> eliqiao: yes!
16:47:38 <rlrossit> adrian_otto: what does the 3rd party-ness get you though?
16:47:49 <adrian_otto> this is something we can probably use the OSIC cluster(s) for
16:48:09 <eghobo> adrian_otto: something like tempest is better for functional testing
16:48:11 <adrian_otto> rlrossit: they get kicked off all at the same time
16:48:29 <adrian_otto> we can still use tempest for execution of those tests
16:48:30 <rlrossit> they are anyway, aren't they? or at least they're added to the queue of jobs that need to be run
16:48:49 <dimtruck> rlrossit: right now the concurrency in our gates is set to 1
16:48:52 <adrian_otto> our current functional tests appear to be serialized
16:49:01 <adrian_otto> each time we add one, our runtime gets longer
16:49:15 <adrian_otto> dimtruck: How do we fix that?
16:49:26 <rlrossit> has anyone looked at http://docs.openstack.org/developer/tempest/plugin.html ?
16:49:43 <dimtruck> we can remove it, but then there's the added problem of multiple bay creates happening at the same time in our gates
16:50:00 <dimtruck> and from what i've gathered that would make things even slower
16:50:00 <rlrossit> I'm thinking we need a different job for each coe/os
16:50:17 <rlrossit> granted it will make more queue jobs happen, but the queue is handled by zuul in parallel
16:50:21 <adrian_otto> dimtruck: We should deal with that gracefully, and if we don't then it's a bug
16:50:34 <tcammann> +1 for split by COE
16:50:45 <adrian_otto> we will discuss this more in Tokyo
16:50:50 <adrian_otto> in a workroom session
16:50:55 <dimtruck> has anyone successfully been able to run a number of bay CRUDs (swarm or k8s) for a longish period of time?
16:51:00 <adrian_otto> but I want something to get us through the next two weeks
16:51:00 <eliqiao> +1 , will in.
16:51:05 <dimtruck> adrian_otto: makes sense
16:51:13 <adrian_otto> because we are hitting the upper limit of the 2 hour runtime limit
16:51:19 <adrian_otto> what are our options?
16:51:33 <tcammann> it's taking 1 hour on most runs
16:51:36 <adrian_otto> tcammann: nod
16:51:36 <eghobo> dimtruck: I did
16:51:45 <rlrossit> adrian_otto: I can take a look at it and put something in the ML about it
16:51:52 <eghobo> but I am using new version of atomic image
16:52:03 <adrian_otto> tcammann: we have new tests in review that double that runtime
16:52:08 <tcammann> oh I see
16:52:12 <eliqiao> someone please help to check this https://review.openstack.org/#/c/232421/
16:52:14 <adrian_otto> so I am thinking of -2 that work
16:52:22 <adrian_otto> which pains me so
16:52:23 <tcammann> -2
16:52:36 <adrian_otto> I hate to block tests from merging, but I can't break the gate
16:52:44 <tcammann> completely agree
16:53:10 <tcammann> We have lived without so far
16:53:25 <rlrossit> well, if it keeps failing Jenkins, we don't have to worry about it merging
16:53:28 <adrian_otto> ok, so unless a better solution is proposed, that's what I will do
16:53:45 <adrian_otto> rlrossit: well, yes… but sometimes you might land on a really fast node
16:53:55 <rlrossit> oh this actually is a race
16:53:57 <rlrossit> my bad
16:54:00 <adrian_otto> yes
16:54:03 <rlrossit> I thought it was an always failing thing
16:54:09 <adrian_otto> it failed once
16:54:13 <adrian_otto> I don't know about always
16:54:15 <dimtruck> rlrossit: re: tempest plugin - that's the next step we should take
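The idea debated above — split the functional tests per COE and run those groups concurrently instead of serially, whether via separate gate jobs or third-party CI nodes — can be sketched in miniature as below. This is an illustrative stand-in only: `run_suite` and `run_all` are hypothetical names, and a real setup would map each COE to its own CI job rather than a thread.

```python
# Hypothetical sketch of per-COE parallel test runs. In a real gate each
# COE would get its own job; here a thread pool stands in for that, just
# to show the fan-out/fan-in shape of the idea.
from concurrent.futures import ThreadPoolExecutor

def run_suite(coe):
    # Stand-in for invoking one COE's functional test suite
    # (e.g. via tempest); returns (coe, result).
    return (coe, "passed")

def run_all(coes):
    # Launch every COE's suite concurrently and collect the results.
    with ThreadPoolExecutor(max_workers=len(coes)) as pool:
        return dict(pool.map(run_suite, coes))
```

The point is that total wall-clock time tracks the slowest single suite rather than the sum of all of them, which is what matters against a fixed job runtime limit.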
16:54:37 <adrian_otto> ok, I'm going to advance to Open Discussion
16:54:43 <juggler> offhand, is there a similar-sized or larger project we know of that has implemented our intended solution successfully?
16:54:46 <adrian_otto> we can keep brainstorming on this topic as well
16:54:47 <eliqiao> +1 for tempest
16:54:57 <adrian_otto> juggler: nova and cinder
16:55:01 <adrian_otto> and neutron, I think
16:55:09 <adrian_otto> #topic Open Discussion
16:55:15 <adrian_otto> for driver testing
16:55:26 <adrian_otto> COE testing is analogous to driver testing
16:55:27 <juggler> ah
16:55:38 <eghobo> adrian_otto: one topic to discuss before cut
16:55:49 <adrian_otto> eghobo: yes?
16:56:13 <eghobo> we need to move to new atomic image which Tango build recently
16:56:41 <Tango> This item is on the todo list.
16:56:41 <adrian_otto> yes
16:56:47 <eghobo> this image works for kub and swarm
16:56:53 <adrian_otto> yes, we need to get that onto tarballs.rackspace.com
16:56:55 <eliqiao> +1
16:57:03 <adrian_otto> so we can reference it for download as an image
16:57:13 <adrian_otto> that will allow us to use it in gate tests
16:57:27 <Tango> So that's different from the fedorapeople site?
16:57:42 <adrian_otto> yes, but it's just an optimization
16:57:56 <eliqiao> adrian_otto: do we have a wiki on how to use the new image on the gate?
16:58:08 <adrian_otto> the tarballs site is more local to the machines that run CI
16:58:32 <Tango> ok, who will copy the images there?
16:58:34 <adrian_otto> eliqiao: I am not sure, but our friends in #openstack-infra are always very helpful on that topic
16:58:41 <eliqiao> I can't find any script in CI that pulls that image.
16:59:00 <eghobo> adrian_otto: I think it will take time; why can't we use the current model?
16:59:07 <adrian_otto> coming to the end of our time now
16:59:24 <eghobo> new image is in fedora public and anyone can use it
16:59:36 <eliqiao> please upgrade CI to use new image
16:59:36 <adrian_otto> we will have one more team meeting before the summit, on Tuesday 2015-10-20 at 1600 UTC
17:00:00 <adrian_otto> we can continue in #openstack-containers
17:00:10 <adrian_otto> thanks for attending everyone. I'm super pumped!!
17:00:16 <adrian_otto> #endmeeting