22:00:26 <adrian_otto> #startmeeting containers
22:00:27 <openstack> Meeting started Tue Sep 22 22:00:26 2015 UTC and is due to finish in 60 minutes.  The chair is adrian_otto. Information about MeetBot at http://wiki.debian.org/MeetBot.
22:00:28 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
22:00:30 <openstack> The meeting name has been set to 'containers'
22:00:32 <adrian_otto> #link https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2015-09-22_2200_UTC Our Agenda
22:00:41 <adrian_otto> #topic Roll Call
22:00:43 <mfalatic> o/
22:00:44 <adrian_otto> Adrian Otto
22:00:51 <daneyon_> o/
22:00:51 <sew1> Steven Wilson
22:00:52 <apmelton> o/
22:00:52 <Tango> Ton Ngo
22:00:53 <muralia> o/
22:00:54 <juggler> Perry Rivera o/
22:00:54 <eghobo> o/
22:00:55 <wanghua> Wanghua
22:00:59 <rods> o/
22:01:05 <hongbin> o/
22:01:13 <bradjones> o/
22:01:39 <adrian_otto> hello mfalatic, daneyon_, Tango, muralia, juggler, eghobo, wanghua, rods, hongbin, bradjones
22:01:45 <Drago> o/
22:01:51 <juggler> hello!
22:01:53 <muralia> hi
22:01:55 <adrian_otto> hello Drago
22:02:05 <vilobhmm11> o/
22:02:10 <Drago> :) hi all
22:02:11 <daneyon_> hello
22:03:02 <adrian_otto> #topic Announcements
22:03:49 <wznoinsk> o/
22:04:03 <adrian_otto> due to a compounded sequence of events, there was no candidate submitted for Magnum PTL prior to the deadline for candidacy submission.
22:04:34 <adrian_otto> Both Hongbin and I submitted entries after the deadline, so the TC has decided to hold a separate election to decide who the PTL will be going forward.
22:04:59 <adrian_otto> the electorate is Magnum contributors.
22:05:16 <adrian_otto> when the dates for that election are known, we will relay them to you
22:05:19 <adrian_otto> any questions?
22:05:36 <juggler> none here
22:05:49 <daneyon_> nope
22:05:51 <adrian_otto> any other announcements from team members?
22:06:12 <adrian_otto> hello wznoinsk
22:06:34 <adrian_otto> #topic Container Networking Subteam Update  (daneyon_)
22:06:41 <adrian_otto> #link http://eavesdrop.openstack.org/meetings/container_networking/2015 Previous Meetings
22:06:45 <daneyon_> thx
22:06:55 <thomasem> \o
22:07:00 <daneyon_> we had our weekly meeting as usual last Thurs
22:07:01 <suro-patz> o/
22:07:12 <daneyon_> We reviewed the kuryr design spec
22:07:22 <daneyon_> #link https://review.openstack.org/#/c/213490/
22:07:34 <daneyon_> the spec has +1's from the magnum community
22:07:50 <daneyon_> now it's a matter of getting the CRs to vote and merge
22:08:14 <daneyon_> I created a bp for the kuryr/magnum integration work
22:08:16 <daneyon_> #link https://blueprints.launchpad.net/kuryr/+spec/containers-in-instances
22:08:41 <daneyon_> the bp will be the reference point for both communities
22:08:54 <daneyon_> last but not least....
22:09:10 <daneyon_> I have the magnum container networking model working for swarm bays
22:09:18 <daneyon_> as of 1 hour ago
22:09:45 <apmelton> awesome news!
22:09:46 <juggler> yeah!
22:09:52 <hongbin_> daneyon_: use flannel?
22:10:15 <daneyon_> i can instantiate a swarm bay with multiple nodes, pass the network-driver, labels to modify flannel config attributes. I can deploy multiple containers using swarm across multiple nodes and containers can communicate with one another over the flannel overlay network
22:10:21 <daneyon_> yes, using flannel
22:10:40 <adrian_otto> fennel ;-) mmmmm, yummy
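The bay-creation flow daneyon_ describes might look roughly like this with the magnum CLI of the time; the flag names follow the client of that era, but the image name and label keys shown here are illustrative assumptions, not verbatim from the patch:

```shell
# Sketch only: a swarm baymodel/bay using the flannel network driver.
# Label keys are illustrative of the flannel attributes being discussed.
magnum baymodel-create --name swarm-flannel \
  --image-id fedora-21-atomic-5 \
  --keypair-id testkey \
  --external-network-id public \
  --coe swarm \
  --network-driver flannel \
  --labels flannel_network_cidr=10.0.0.0/8,flannel_backend=udp

magnum bay-create --name swarm-bay --baymodel swarm-flannel --node-count 2

# Containers scheduled by swarm onto different nodes can then reach
# each other over the flannel overlay network.
```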
22:10:47 <daneyon_> as part of this effort, I changed swarm discovery to use etcd instead of public discovery.
22:11:09 <wanghua> etcd needs discovery, too
22:11:12 <daneyon_> the key issue: we can not support multiple discovery processes in a single bay
22:11:55 <daneyon_> since flannel is required (it's our only supported network-driver), swarm needs to implement etcd
22:12:08 <daneyon_> lol
22:12:31 <daneyon_> wanghua correct, etcd still uses the public discovery mechanism.
22:12:55 <daneyon_> we just can't have swarm and etcd both using public discovery.
22:12:59 <eghobo> daneyon_: but we have one etcd at master
22:13:13 <eghobo> we have zookeeper for mesos
22:13:34 <hongbin_> eghobo: I don't think zookeeper is good for service discovery
22:13:34 <wanghua> We can let swarm use the etcd in bay
22:14:03 <daneyon_> it's possible to have multiple discovery processes in a bay, it would require larger code changes though
22:14:07 <eghobo> hongbin_: zookeeper was just an example
22:14:20 <hongbin_> eghobo: k
22:14:21 <adrian_otto> ok, lets wrap up the Network subteam update if we can
22:14:37 <daneyon_> eghobo swarm did not use etcd
22:14:55 <daneyon_> hongbin_ i have yet to look at the mesos stuff
22:15:24 <hongbin_> daneyon_: let me know if you need help for mesos bay
22:15:40 <daneyon_> if we need to support multiple public discovery mechanisms in a bay, then we will need to make bigger changes to how the public_url is implemented.
22:15:58 <daneyon_> If i have a bay with zk and etcd, they each need their own public_url for public discovery
22:15:59 <wanghua> daneyon_: Swarm can use etcd in the bay to discover, like in k8s
22:16:57 <daneyon_> wanghua that's what i implemented. swarm uses etcd and etcd uses the public-url for its public discovery mechanism.
22:17:21 <wanghua> daneyon_: swarm support etcd already.
22:17:23 <daneyon_> here is the current work
22:17:25 <daneyon_> #link https://review.openstack.org/#/c/224367/
22:17:38 <daneyon_> it's functional, so give it a spin
22:18:17 <adrian_otto> daneyon: Thanks for all the awesome work. This is super exciting to see!! Does this conclude your update?
22:18:21 <daneyon_> wanghua correct. However, magnum was not using etcd to back swarm, it was using swarm's public discovery
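The discovery split being described can be sketched as follows: etcd is the only component that touches a public discovery URL, and swarm discovers through the in-bay etcd. Endpoints, ports, and key paths here are illustrative, not taken from the patch:

```shell
# etcd bootstraps from the public discovery service: one token per bay.
curl -s "https://discovery.etcd.io/new?size=1"
# returns a URL like https://discovery.etcd.io/<token>,
# which is passed to etcd via its --discovery option.

# swarm then discovers through the in-bay etcd rather than Docker Hub's
# public token discovery, so only etcd uses a public discovery URL.
swarm manage -H tcp://0.0.0.0:2376 etcd://127.0.0.1:2379/v2/keys/swarm
swarm join --addr=<node-ip>:2375 etcd://<master-ip>:2379/v2/keys/swarm
```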
22:18:38 <daneyon_> adrian_otto yes
22:18:46 <daneyon_> any other questions we can take off line
22:18:47 <wanghua> daneyon_: We can make a little adjustment
22:18:49 <adrian_otto> thanks daneyon_
22:18:53 <daneyon_> yw
22:18:54 <adrian_otto> #topic Magnum UI Subteam Update
22:19:04 <adrian_otto> bradjones: any update to share?
22:19:09 <bradjones> sure
22:19:30 <bradjones> not a huge update (was on vacation last week)
22:19:45 <bradjones> we managed to merge some of the API work
22:19:55 <bradjones> big thanks to those that have been active reviewing
22:20:18 <bradjones> going to rebase my WIP patches that have the views for Bay and BayModel tomorrow
22:20:27 <bradjones> onto that code
22:20:42 <bradjones> then hopefully will be on track to merge those views in by the end of the week
22:20:59 <adrian_otto> sweet! Any questions for Brad?
22:21:33 <adrian_otto> thanks bradjones
22:21:36 <adrian_otto> #topic Review Action Items
22:21:44 <adrian_otto> adrian_otto to follow up with project-config cores to ask if we can arrange to merge https://review.openstack.org/216933
22:21:49 <adrian_otto> Status: COMPLETE
22:21:52 <adrian_otto> the code is merged
22:22:10 <adrian_otto> ^^ 1)
22:22:16 <adrian_otto> 2) Tango and manjeets_ to work together to complete https://blueprints.launchpad.net/magnum/+spec/external-lb
22:22:26 <adrian_otto> I have seen a series of advancements on that BP
22:22:40 <adrian_otto> is this action complete?
22:22:47 <Tango> The latest patch works end to end
22:22:53 <adrian_otto> WHOOT
22:22:59 <Tango> There is the issue about handling the password
22:23:35 <Tango> after some discussion and feedback from the team, I decided to take the manual approach to avoid any potential security issue
22:23:46 <wanghua> Handling the password is a common issue in several bps
22:23:53 <Tango> later we can investigate and evaluate the several approaches
22:24:08 <Tango> very helpful suggestions from the team
22:24:15 <eghobo> Tango: I think DNS is issue
22:24:28 <eghobo> not everyone run Designate
22:24:51 <Tango> ok, I can follow up with you to better understand
22:25:01 <adrian_otto> ok, let's revisit this in just a moment when we get to this in BP/Bug Review
22:25:37 <adrian_otto> Tango: should I carry this action item forward, or should we consider it done?
22:25:47 <Tango> I think it's done
22:25:48 <adrian_otto> I think it's done, but want to be sure you see it that way
22:25:51 <adrian_otto> ok
22:25:56 <adrian_otto> Status: COMPLETE
22:25:59 <adrian_otto> #topic Blueprint/Bug Review
22:26:07 <adrian_otto> Essential Blueprint Updates
22:26:14 <adrian_otto> #link https://blueprints.launchpad.net/magnum/+spec/objects-from-bay Obtain the objects from the bay endpoint (vilobhmm)
22:26:22 <adrian_otto> vilobhmm11: update on this?
22:26:30 <vilobhmm11> working on making changes since v1 patch merged yest
22:26:36 <vilobhmm11> so some tests are breaking
22:26:39 <vilobhmm11> fixing them
22:26:49 <vilobhmm11> adrian_otto : ^^
22:27:17 <adrian_otto> ok. IS there any help you need from the team to keep moving on this?
22:27:44 <vilobhmm11> need help esp with the unit tests
22:27:55 <vilobhmm11> our unit test framework is closely tied to DB
22:28:01 <vilobhmm11> with objects from bay
22:28:09 <vilobhmm11> we are removing this tight coupling
22:28:30 <vilobhmm11> mostly help with unit test would be handy
22:28:38 <adrian_otto> right, so many unit tests will need to be refactored, right?
22:28:51 <vilobhmm11> adrian_otto : yes
22:29:17 <vilobhmm11> almost every unit test in read/write path for service/rc/pod needs to be modified
22:29:30 <adrian_otto> ok, if we have a way to identify all the tests that need to be refactored, and make some sort of a list that references them we can point contributors to that and ask for help adjusting them all
22:29:41 <adrian_otto> it might be nice if we could show an example of how to fix them
22:29:59 <vilobhmm11> adrian_otto : will do so..have an etherpad link up by tomorrow
22:30:11 <adrian_otto> and how to do them as patches that depend on the refactor review
22:30:30 <adrian_otto> is there one particular commit that needs to merge before all the unit tests are adjusted?
22:30:37 <adrian_otto> or does this need to be done all as one big patch?
22:30:57 <vilobhmm11> I have separated out patches for each object
22:31:07 <vilobhmm11> separate patch for rc/service/pod
22:31:37 <adrian_otto> ok, any questions on this from the team?
22:31:56 <vilobhmm11> adrian_otto : does the above info answer your question ?
22:32:04 <adrian_otto> yes, thanks
22:32:10 <vilobhmm11> ok np
22:32:30 <vilobhmm11> thats it from objects-from-bay
22:32:44 <adrian_otto> thanks vilobhmm11
22:32:44 <adrian_otto> #link https://blueprints.launchpad.net/magnum/+spec/secure-kubernetes Secure the client/server communication between ReST client and ReST server (madhuri)
22:32:54 <adrian_otto> madhuri is not present
22:33:10 <adrian_otto> most of this has been merged.
22:33:25 <adrian_otto> apmelton: any comments on this one?
22:33:38 <apmelton> not for k8s specifically
22:33:45 <adrian_otto> ok
22:33:48 <adrian_otto> #link https://blueprints.launchpad.net/magnum/+spec/external-lb Support the kubernetes service external-load-balancer feature (Tango)
22:34:05 <Tango> One piece of good news is that the Kubernetes-V1 patch merged
22:34:13 <Tango> that makes things a lot easier
22:34:39 <adrian_otto> yay!
22:34:42 <Tango> As mentioned earlier, the patch for LB should be good now, but it needs review
22:34:54 <juggler> excellent Tango
22:35:00 <juggler> (and team)
22:35:12 <Tango> I am writing a guide to explain how to use the feature
22:35:25 <adrian_otto> want to post links to work needing review?
22:35:39 <wanghua> Tango, is the feature optional?
22:35:41 <Tango> https://review.openstack.org/#/c/191878/
22:35:55 <adrian_otto> maybe these?
22:36:06 <adrian_otto> #link https://review.openstack.org/#/q/status:open+project:openstack/magnum+branch:master+topic:bp/external-lb,n,z external-lb reviews
22:36:13 <Tango> wanghua: Yes with the manual step for editing the password, the feature is disabled by default
22:36:49 <Tango> the user needs to log into the master node to enter the password in a config file, then restart kube-apiserver and kube-controller-manager
22:37:04 <Tango> So you have the option of not using the feature
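The manual procedure Tango outlines might look roughly like this on the master node; the login user, config file path, and service names are assumptions for illustration, not verbatim from the patch:

```shell
# Sketch: enabling the external load balancer support by hand.
ssh minion@<kube-master-ip>

# Enter the password in the cloud config file that kube-apiserver and
# kube-controller-manager read (exact path depends on the patch).
sudo vi /etc/kubernetes/kube_openstack_config

# Restart the affected Kubernetes services so they pick up the change.
sudo systemctl restart kube-apiserver kube-controller-manager
```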
22:37:47 <adrian_otto> user = administrator
22:38:04 <Tango> Right
22:38:14 <Tango> I am adding a functional test using nginx
22:38:41 <hongbin_> Tango: adrian_otto Don't think so, anyone who create the bay can be a user, not necessary an admn
22:38:47 <adrian_otto> ok, sounds like some automation there might be a nice future enhancement. We should file bugs or BP's for that when these are finished.
22:39:22 <adrian_otto> hongbin_: right. I can't remember the official name of the persona who owns the bay.
22:39:36 <adrian_otto> I suppose "cloud user" is right then
22:39:52 <hongbin_> k
22:39:55 <Tango> Right, we need to evaluate the best approach for implementing the additional automation to handle the password
22:40:06 <Tango> there are several approaches
22:40:13 <wanghua> As handling user credentials in the bays is a common issue in several bps, we need to decide on a way
22:40:25 <adrian_otto> yes
22:40:43 <wanghua> Can we use trust?
22:40:58 <apmelton> if we can wanghua, I'd much prefer it
22:41:12 <adrian_otto> we also have this concern for the Docker Distribution (registry-v2) component
22:41:26 <Tango> I think trust requires a token?
22:41:32 <adrian_otto> Tango: rather than a password, did you mean an auth token?
22:41:33 <apmelton> I seem to remember seeing a 'trust-id' config param for Docker Distribution
22:41:53 <wanghua> auth token will expire
22:41:56 <adrian_otto> because if you support an auth token, then it can take a trust token too.
22:42:02 <Tango> adrian_otto: That would be the ideal solution, but will require changes in k8s
22:42:05 <adrian_otto> and the trust token can be scoped to have a far future expiry
22:42:57 <adrian_otto> I see. First things first. Let's make it work, and then iterate on it to make it better.
22:42:57 <wanghua> If we don't give expire time when we create trust, then the trust will never expire.
22:43:19 <adrian_otto> wanghua: exactly
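wanghua's point about expiry can be illustrated with a small sketch. The helper below is hypothetical, not Keystone code: it just models the semantics that a trust created without an expires_at never expires, while a regular auth token always carries one.

```python
from datetime import datetime, timedelta, timezone

def is_expired(expires_at, now=None):
    """Return True if the credential has expired.

    Models the Keystone semantics discussed above: a trust created
    without an expiry (expires_at is None) never expires, while a
    regular auth token always has an expires_at.
    """
    if expires_at is None:
        return False
    now = now or datetime.now(timezone.utc)
    return now >= expires_at

# A trust created with no expiry stays valid indefinitely.
assert is_expired(None) is False

# A token issued with a one-hour lifetime eventually expires.
issued = datetime(2015, 9, 22, 22, 0, tzinfo=timezone.utc)
token_expiry = issued + timedelta(hours=1)
assert is_expired(token_expiry, now=issued) is False
assert is_expired(token_expiry, now=issued + timedelta(hours=2)) is True
```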
22:43:36 <Tango> if there is a common set of requirement for several BPs, we should pull together and see if this can be solved in a common way.
22:43:48 <adrian_otto> this might also fold into the instance user discussion that's ongoing in nova currently
22:44:42 <wanghua> adrian_otto,  any reference?
22:44:51 <wanghua> for nova discussion
22:44:53 <adrian_otto> #link https://review.openstack.org/#/c/222293/1/specs/instance-users.rst,cm Instance Users Spec
22:45:26 <Tango> More homework to be done
22:45:43 <adrian_otto> #link https://review.openstack.org/222293 Instance Users for Cloud Interaction
22:46:09 <adrian_otto> ok, thanks Tango. Lots of good progress this past week, thanks!
22:46:24 <adrian_otto> #link https://blueprints.launchpad.net/magnum/+spec/secure-docker Secure client/server communication using TLS (apmelton)
22:46:49 <apmelton> I've got a review up on this
22:46:52 <apmelton> with some good input
22:46:53 <adrian_otto> https://review.openstack.org/212598
22:47:06 <apmelton> #link https://review.openstack.org/#/c/212598/
22:47:14 <adrian_otto> #link https://review.openstack.org/212598 Add TLS to Docker-Swarm Template (WIP)
22:47:30 <adrian_otto> what's remaining to promote this from WIP to normal status?
22:47:54 <apmelton> I need to add an 'insecure' flag so tls can be disabled
22:48:22 <adrian_otto> can we do that as a follow-up review that is WIP that depends on this one?
22:48:53 <adrian_otto> or do you see that as a minor addition (not making any assumptions here)
22:48:58 <apmelton> actually, if this patch lands and you can't disable TLS, the container conductor will be broken
22:49:03 <adrian_otto> ok
22:49:19 <adrian_otto> yes, I remember that now from the midcycle… catching up on the comment stream
22:49:29 <adrian_otto> ok
22:49:40 <apmelton> my next step after adding the insecure flag is going to be working through what's necessary to get the conductors talking TLS
22:50:22 <apmelton> I'll be breaking that out sufficiently so it can be shared between conductors for k8s/swarm/mesos/etc
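The insecure switch apmelton mentions could be sketched like this; the helper, its parameters, and the cert paths are hypothetical, not the actual conductor code. When TLS is disabled the client falls back to plain HTTP; otherwise it presents the bay's client certificate and verifies the CA.

```python
def docker_client_kwargs(api_address, insecure=False, cert_dir="/etc/docker"):
    """Hypothetical sketch of how a conductor might build Docker client
    connection arguments, honoring an 'insecure' flag that disables TLS."""
    if insecure:
        # No TLS: talk plain HTTP to the swarm/docker API.
        return {"base_url": api_address.replace("https://", "http://"),
                "tls_config": None}
    # TLS enabled: present the bay's client certificate and verify the CA.
    return {
        "base_url": api_address,
        "tls_config": {
            "ca_cert": cert_dir + "/ca.crt",
            "client_cert": (cert_dir + "/client.crt",
                            cert_dir + "/client.key"),
        },
    }

# With TLS (the default) the client keeps the https endpoint and certs.
secure = docker_client_kwargs("https://10.0.0.5:2376")
assert secure["base_url"].startswith("https://")

# With insecure=True, the TLS material is dropped and the scheme falls back.
plain = docker_client_kwargs("https://10.0.0.5:2376", insecure=True)
assert plain["tls_config"] is None
assert plain["base_url"] == "http://10.0.0.5:2376"
```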
22:50:42 <adrian_otto> Sweet! Any more before we advance to Open Discussion?
22:50:51 <apmelton> not from me
22:50:54 <adrian_otto> #topic Open Discussion
22:50:57 <adrian_otto> thanks apmelton
22:51:17 <juggler> any status highlights for the Japan Summit?
22:51:40 <adrian_otto> you mean in terms of highlighting new features we added in this release?
22:52:53 <juggler> sure, and the general status of prep work towards the summit, etc.
22:52:56 <devkulkarni> I have a question
22:53:04 <devkulkarni> about container-create cli command
22:53:11 <suro-patz> Would request core-reviewers to merge https://review.openstack.org/#/c/226491/ - this will fix functional test failure and help others proceed
22:53:33 <adrian_otto> juggler: that's something we will pick up as soon as the PTL election concludes
22:54:10 <juggler> adrian_otto: excellent
22:54:15 <wanghua> Does magnum support ironic now?
22:54:34 <adrian_otto> yes, there are heat templates for ironic
22:54:44 <adrian_otto> but ironic does not completely support neutron
22:55:08 <wanghua> Then it can not work?
22:55:14 <adrian_otto> so depending on how your network is laid out, you may have some tweaking to do in order to make it work right with your network setup
22:55:28 <adrian_otto> it's not universally compatible with all network drivers.
22:55:30 <devkulkarni> is there plan to add glance integration for container-create command? i.e. provide ability to access/use containers which are in glance as part of container-create
22:56:07 <adrian_otto> devkulkarni: not that I'm aware of. I'd suggest using Heat for that.
22:56:25 <wanghua> Why do we need to do this?
22:56:34 <adrian_otto> Heat in combination with this: https://review.openstack.org/193174
22:56:36 <wanghua> I think you can do it by nova-docker
22:57:00 <adrian_otto> wanghua: nobody is really maintaining nova-docker
22:57:48 <devkulkarni> adrian_otto: but heat won't do the scheduling of containers on the bay nodes, right? so I was thinking of using the container-create within solum to run application containers on the bay nodes.
22:58:13 <adrian_otto> devkulkarni: if you are using the code I referenced, heat uses a Magnum Bay
22:58:15 <devkulkarni> solum has ability to store application containers in glance and swift
22:58:23 <adrian_otto> and the Bay includes the scheduling capability
22:58:57 <wanghua> I think you can use swarm to do it
22:59:06 <wanghua> and put the image in docker registry
22:59:07 <devkulkarni> adrian_otto: ok, will take a closer look at it. I haven't started on the integration yet
22:59:16 <adrian_otto> for that integration, you'd want to place the images in the private repo we put in swift with Docker Distribution
22:59:20 <adrian_otto> then pull from that
22:59:34 <adrian_otto> we are coming to the scheduled end of our meeting time now
22:59:37 <devkulkarni> adrian_otto: yes, that is one option
23:00:00 <devkulkarni> wanghua: yes, that is an option
23:00:08 <adrian_otto> Our next team meeting is scheduled for Tuesday 2015-09-29 at 1600 UTC.
23:00:23 <adrian_otto> see you then. Thanks everyone!
23:00:26 <adrian_otto> #endmeeting