16:01:56 <adrian_otto> #startmeeting containers
16:01:57 <openstack> Meeting started Tue Aug 18 16:01:56 2015 UTC and is due to finish in 60 minutes.  The chair is adrian_otto. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:01:58 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:02:00 <openstack> The meeting name has been set to 'containers'
16:02:05 <adrian_otto> #link https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2015-08-18_1600_UTC Our Agenda
16:02:12 <adrian_otto> #topic Roll Call
16:02:14 <apmelton> o/
16:02:15 <daneyon_> o/
16:02:16 <adrian_otto> Adrian Otto
16:02:16 <mfalatic> o/
16:02:22 <Tango> Ton Ngo
16:02:25 <madhuri> o/
16:02:29 <tcammann> \o_
16:02:30 <rbradfor> Ronald Bradford
16:02:41 <eghobo_> o/
16:02:56 <adrian_otto> hello apmelton, daneyon_, mfalatic, Tango, madhuri, tcammann, rbradfor, and eghobo_
16:03:06 <dane_leblanc_> o/
16:03:20 <hongbin> o/
16:04:00 <adrian_otto> hello dane_leblanc_, and hongbin
16:04:03 <bradjones> o/
16:04:12 <adrian_otto> hi bradjones
16:04:38 <sdake_> o/
16:05:04 <diga> o/
16:05:56 <adrian_otto> hello sdake and diga
16:05:58 <adrian_otto> let's begin
16:06:03 <adrian_otto> #topic Announcements
16:06:12 <adrian_otto> 1) adrian_otto will be out on 2015-08-25 due to travel to OpenStack Silicon Valley event. sdake will chair.
16:06:40 <adrian_otto> any other announcements form team members?
16:06:46 <adrian_otto> *from?
16:07:16 <adrian_otto> #topic Container Networking Subteam Update (daneyon_)
16:07:25 <daneyon_> Last week's network subteam meeting had a ton of discussion around kuryr. What it is, what it's not, etc..
16:07:37 <daneyon_> Unfortunately, we did not have anyone from the kuryr team in attendance. I received confirmation that someone from the kuryr team will join this week's meeting.
16:07:57 <daneyon_> I attended the kuryr weekly meeting yesterday.
16:08:14 <daneyon_> topics included config mgt and the details of vif binding/unbinding
16:08:17 <adrian_otto> #link http://eavesdrop.openstack.org/meetings/container_networking/2015/container_networking.2015-08-13-18.00.html Previous Meeting
16:08:24 <daneyon_> and a WIP kuryr design spec
16:08:40 <daneyon_> If you have time, please review the spec
16:08:48 <adrian_otto> link to the spec?
16:08:49 <daneyon_> #link https://review.openstack.org/#/c/213490/
16:08:55 <adrian_otto> thanks daneyon_
16:09:04 <daneyon_> Speaking of specs....
16:09:16 <daneyon_> It would be great to wrap up the magnum network spec
16:09:34 <daneyon_> Please let me know if you have any questions on my magnum net spec.
16:09:52 <daneyon_> I have my magnum dev env up and going
16:10:04 <daneyon_> I ran into a couple bugs related to flannel
16:10:12 <sdake_> the network spec daneyon_
16:10:13 <daneyon_> i tracked the bugs and committed fixes
16:10:14 <sdake_> was tom's issue sorted out
16:10:19 <daneyon_> thanks everyone for getting them merged
16:10:30 <sdake_> I am +2 on the spec with whatever changes are needed to get it through the system
16:10:38 <daneyon_> when the magnum net spec is merged, i will create bp's for the individual tasks
16:10:57 <daneyon_> in the meantime, I'm starting to hack at some of the changes proposed in the spec.
16:11:24 <daneyon_> sdake_ yes, tcammann is happy and +2'd the revised spec
16:11:29 <sdake_> adrian_otto one thing that I found helped with rollcall votes
16:11:34 <sdake_> is to set a final deadline for approval
16:11:46 <sdake_> daneyon_ nice I'll go vote on it again then and re-review
16:11:48 <daneyon_> i think adrian_otto would like to do one last review
16:12:08 <daneyon_> hopefully he is happy with the spec and it gets merged
16:12:09 <adrian_otto> my apologies for my delayed action
16:12:23 <sdake_> daneyon_ we are drinking from two firehoses atm
16:12:36 <daneyon_> adrian_otto no worries... i know you are busy
16:12:46 <adrian_otto> daneyon, a family emergency has surfaced for me, and I may need to step away from work to attend to it.
16:12:56 <daneyon_> sdake_ i hope you're thirsty... lol
16:13:14 <sdake_> more like hungry
16:13:19 <sdake_> plate overflowing!
16:13:26 <daneyon_> adrian_otto unless there are any questions, that's all i have for the network subteam update
16:13:42 <adrian_otto> ok, let's get through our agenda, and I'll regroup with you on that subject
16:13:44 <daneyon_> adrian_otto no worries. family 1st.
16:13:51 <daneyon_> adrian_otto i hope all is well
16:15:44 <adrian_otto> 1 sec
16:16:31 <adrian_otto> #topic Magnum UI Subteam Update (bradjones)
16:16:49 <bradjones> Not a huge amount to update on this week
16:17:04 <bradjones> I have pushed a WIP patch that is the view for BayModel
16:17:26 <bradjones> currently working on getting the API working then once it is synced up I will be pestering you all for reviews :)
16:17:43 <bradjones> getting reviews from horizon folks without a magnum environment will be tricky
16:18:04 <bradjones> so hoping you guys will be able to help review in some capacity to check the workflow is as expected
16:18:16 <sdake_> bradjones note I dont think we can review the patches
16:18:30 <adrian_otto> sdake_: why?
16:18:48 <sdake_> adrian_otto I thought we had a separate group that didn't have magnum in it in gerrit
16:18:51 <sdake_> but could be wrong
16:19:06 <adrian_otto> you can review anything in Gerrit
16:19:14 <sdake_> oh i mean +2 review
16:19:16 <bradjones> sdake_: it doesn't have to be a +2 review but just a +1 from a few people with magnum envs to test against will be great
16:19:20 <sdake_> yes of course I could review :)
16:19:25 <Tango> I can work with Thai Tran here and provide him with a magnum environment to review
16:19:38 <bradjones> Tango: That would be great thanks
16:19:39 <apmelton> bradjones: I should be able to throw it in my magnum environment and play around with horizon
16:20:01 <apmelton> might need a little help getting it installed, but I'd be glad to test it
16:20:09 <bradjones> I will post message in #openstack-containers when it is in a state to have a proper review
16:20:15 <adrian_otto> and I'm reasonably sure that magnum core as a group belongs to the magnum-ui-core group
16:20:23 <adrian_otto> thanks bradjones
16:20:33 <adrian_otto> #topic Review Action Items
16:20:36 <adrian_otto> (none)
16:20:43 <adrian_otto> #topic Blueprint/Bug Review
16:20:53 <adrian_otto> Essential Blueprint Updates
16:21:24 <adrian_otto> I am going to skip the first item on the agenda b/c we downgraded it
16:21:34 <adrian_otto> #link https://blueprints.launchpad.net/magnum/+spec/secure-kubernetes Secure the client/server communication between ReST client and ReST server (madhuri)
16:22:22 <madhuri> After midcycle meetup, we have lot to do
16:22:59 <madhuri> hopefully few more patches are left
16:23:00 <adrian_otto> I notices a flurry of activity in the code review queue
16:23:08 <adrian_otto> *noticed
16:23:30 <apmelton> madhuri: is there anything with the core feature I can help with?
16:24:03 <madhuri> apmelton: Thanks, can we fix a time to discuss
16:24:31 <adrian_otto> thanks for your help apmelton
16:24:54 <adrian_otto> madhuri: anything you'd like to identify for team discussion today?
16:25:00 <madhuri> adrian_otto: I apologize for the delay, I just shifted to India, so some work
16:25:12 <madhuri> But I am back on the work
16:25:18 <adrian_otto> welcome back!
16:25:30 <apmelton> congrats on the new job!
16:25:57 <madhuri> Thanks apmelton :)
16:26:18 <adrian_otto> should I advance to the next work item, madhuri?
16:26:29 <madhuri> Sure
16:26:36 <apmelton> actually, there's something I'd like to bring up here
16:27:05 <apmelton> the effort to pull the lbaas cert manager into Castellan has stalled
16:27:15 <apmelton> are we still planning to use their code?
16:27:23 <apmelton> the lbaas cert manager code
16:27:42 <rm_work> o/
16:27:47 <rm_work> ah, looks like good timing
16:28:06 <apmelton> rm_work: was the one working on the effort
16:28:11 <madhuri> we are using it for now
16:28:49 <suro-patz> o/  : "entering late"
16:29:07 <apmelton> so, what I'm wondering is, if both Magnum and Octavia are planning to support that code, should we attempt pulling it into an oslo library?
16:29:17 <madhuri> apmelton: What's your concern?
16:29:51 <madhuri> Yes that can be an improvement.
16:30:25 <apmelton> I guess my concern is maintaining that code in two trees
16:30:28 <madhuri> That's a different task and we can discuss separately
16:30:34 <apmelton> alright
16:30:43 <rm_work> yes, I would love to see it somewhere common
16:30:55 <madhuri> +1
16:30:57 <rm_work> there is some refactoring that needs to happen anyway, so i would support an effort to get it into a common project
16:31:21 <sdake_> maintaining which code?
16:31:35 <sdake_> sorry irc lagged out
16:31:37 <apmelton> sdake_: the lbaas cert manager code we're planning to use in our cert manager
16:31:44 <sdake_> thanks
16:31:53 <rm_work> LBaaS abandoned its effort to get the code in Castellan, because we realized the CertManager interface wasn't going to work out for us anyway for technical reasons, but the CertGenerator code (which i think you actually care more about?) is still going to be used
16:32:06 <rm_work> it was just an even harder sell for Castellan so we hadn't tried yet
16:32:09 <apmelton> rm_work: yes, sorry, CertGenerator
16:33:03 <apmelton> I guess what we can do is solidify our use case in our tree, then pull out the common pieces once we've got it working
16:33:25 <rm_work> Yeah, I will be happy to help with that when you are working on it
16:33:33 <rm_work> whoever takes that task can drop me a line
16:33:35 <adrian_otto> ok, so let's do this one step at a time.
16:33:57 <adrian_otto> let's get the solution merged in Magnum first, and then decide how best to share it among projects
16:34:05 <apmelton> sounds good to me
16:34:09 <rm_work> yep
16:34:15 <madhuri> +1
16:34:29 <sdake_> +2
16:34:43 <vilobhmm> Update objects from the bay
16:34:46 <adrian_otto> cool, I'll make a note to put this on our topic list for Tokyo if we don't get to it before then.
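For context, the CertGenerator discussed above issues TLS certificates for bay access. A minimal, hypothetical sketch of what such a generator does, using the python `cryptography` library (illustrative only; the common name and self-signing are placeholders, not the actual lbaas/Octavia CertGenerator code):

```python
import datetime

from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa

# Generate a key pair and self-sign a certificate.
# "magnum-bay" is a placeholder common name, not a real Magnum value.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"magnum-bay")])
cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)  # self-signed: issuer == subject
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.datetime.utcnow())
    .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=365))
    .sign(key, hashes.SHA256())
)
pem = cert.public_bytes(serialization.Encoding.PEM)
```

A real CertGenerator would typically sign a CSR against a CA rather than self-sign, but the builder flow is the same.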
16:35:04 <adrian_otto> vilobhmm: proceed
16:35:10 <vilobhmm> sure :)
16:35:21 <vilobhmm> Last week took ownership of it. We got the design and approach sorted out, thanks to sdake for providing help wherever needed. Looks like there is no Query-by-UUID k8s ReST API https://github.com/kubernetes/kubernetes/issues/4817 in Kubernates. Whereas in each of our resources (pod/rc/service) we try to instantiate the object using https://github.com/openstack/magnum/blob/master/magnum/api/controllers/v1/utils.py#L70 but if we p
16:35:46 <adrian_otto> truncated?
16:36:00 <vilobhmm> We got the design and approach sorted out, thanks to sdake for providing help wherever needed. Looks like there is no Query-by-UUID k8s ReST API https://github.com/kubernetes/kubernetes/issues/4817 in Kubernetes.
16:36:18 <vilobhmm> Whereas in each of our resources (pod/rc/service) we try to instantiate the object using https://github.com/openstack/magnum/blob/master/magnum/api/controllers/v1/utils.py#L70  we use QUERY-by-UUID
16:36:34 <vilobhmm> but if we plan to create the objects for these resources using the ReST Bay/k8s endpoints we only have the option to get it from name as only QUERY-by-NAME is supported as of now
16:36:45 <vilobhmm> So my plan is to #1. get the uuid -> name conversion as part of QUERY-by-UUID, will get this information from magnum db #2. Once we get the name in #1, do a QUERY-by-NAME to fetch details from k8s ReST API endpoints. Want to know your thoughts on this.
16:37:06 <vilobhmm> So till the time kubernetes provides a QUERY-by-UUID we will need a way to store the mapping from uuid to name, and at present the best place to keep it IMHO happens to be the table for the respective resource, for example magnum.rc, magnum.service etc…
16:37:16 <vilobhmm> adrian_otto : I hope it is not truncated now
16:37:44 <sdake> so just to clarify, when getting a list
16:37:54 <sdake> the process is for pod in list_from_k8s:
16:38:14 <sdake> pod = k8s_query_by_name
16:38:19 <vilobhmm> ok
16:38:24 <sdake> pod_uuid = k8s_query_by_uuid?
16:38:25 <suro-patz> I think it is better to have an object representation at magnum-db, which will provide the translation/mapping to the rc/service/pod object of k8s
16:39:00 <sdake> suro-patz I'd rather avoid that, because if someone creates an object with the native client, the mapping won't be present in the database
16:39:28 <suro-patz> sdake: point noted
16:39:29 <hongbin> So, we are not able to: magnum pod-show <pod_uuid> ?
16:39:30 <madhuri> Agree
16:39:36 <vilobhmm> sdake : but right now there is no way to have a uuid-name mapping
16:39:51 <sdake> so what precisely is the proposal on uuids
16:39:58 <sdake> without cut and paste ;)
16:40:01 <suro-patz> sdake: but a name is a unique identifier within a bay
16:40:06 <madhuri> I think this should read from kuberentes not magnum db
16:40:16 <vilobhmm> madhuri : +!
16:40:18 <vilobhmm> +1
16:40:37 <sdake> suro-patz yes I think we can use the bay uuid to help generate a uuid from the pod id
16:40:44 <sdake> pod name rather
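sdake's suggestion here (generating a uuid from the bay uuid plus the pod name) maps naturally onto name-based UUIDs; a hypothetical sketch, where `pod_uuid` is an illustrative helper and not actual Magnum code:

```python
import uuid

def pod_uuid(bay_uuid, pod_name):
    # uuid5 is deterministic: the same (bay uuid, pod name) pair always
    # yields the same UUID, so no uuid -> name table is needed, and a pod
    # created by a native client gets the same derived id as one created
    # through Magnum.
    return uuid.uuid5(uuid.UUID(bay_uuid), pod_name)
```

The id can be recomputed anywhere the bay uuid and pod name are known, sidestepping the missing-mapping problem for native-client writes.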
16:40:46 <vilobhmm> thats what the blueprint is for IMHO…just that right now only QUERY-by-NAME k8s Rest api is available
16:40:48 <hongbin> How about store static information in DB, and read from kubernetes for dynamic state
16:41:19 <adrian_otto> +1
16:41:24 <sdake> nothing about pods can be in the database, because it won't be updated when a native client is used to write to k8s
16:41:31 <madhuri> hongbin: +1
16:41:31 <adrian_otto> that's the direction we aimed at when we discussed this last as a team
16:42:01 <sdake> essentially the database will be missing information on native client write operations
16:42:06 <adrian_otto> sdake, things like the name of the pod and its identifier can be in the db
16:42:25 <apmelton> adrian_otto: not if it was created by the native client
16:42:27 <mfalatic> +1
16:42:30 <sdake> kube-ctl create pod.yaml
16:42:32 <vilobhmm> adrian_otto : +1 for now till the time https://github.com/kubernetes/kubernetes/issues/4817  gets resolved
16:42:39 <adrian_otto> we can use a synchronization approach to make sure the static state remains mirrored
16:42:53 <sdake> adrian_otto that will create a pod that kubernetes doesn't know about
16:43:00 <adrian_otto> apmelton: yes, this is the same as heat convergence
16:43:00 <sdake> rather, that magnum doesn't know about
16:43:15 <sdake> yes i thought of this sync approach adrian_otto
16:43:23 <adrian_otto> it won't know about it initially, but it can learn about it soon after it is created
16:43:26 <sdake> it seems better imo just to always get from the k8s api
16:43:44 <adrian_otto> I see
16:43:46 <apmelton> I thought the entire point of using CoE objects directly was to get us out of the sync and lock game
16:43:51 <sdake> because a sync has to do the same thing in essence
16:43:55 <sdake> get it from the k8s endpoint
16:44:22 <adrian_otto> so in that case we don't persist pod information at all. I could live with that.
16:44:25 <sdake> apmelton no, it is to be able to use native clients, what you mention is a side benefit ;)
16:44:34 <sdake> or rc/service info
16:44:44 <sdake> adrian_otto that is what we are after here ;)
16:45:12 <madhuri> Yes better to get from k8s endpoint directly
16:45:14 <sdake> vilobhmm if you have more test patches in your stream, please put them up for review
16:45:15 <sdake> even if they are wip
16:45:31 <sdake> i'd like to see the whole stream befor merging
16:45:42 <vilobhmm> magnum api's : just keep the uuid-name mapping in the db, just the mapping..whereas all the "fresh" data is accessed and updated using k8s ReST endpoints…for native client : get directly from k8s endpoints
16:45:45 <sdake> so we aren't in a state of some stuff comes from db some stuff comes from k8s endpoint
16:45:48 <vilobhmm> sdake  : sure will do
16:45:54 <Tango> What's the outlook for kubernetes to provide this support?
16:45:56 <suro-patz> if the objective is to make magnum client behave as per the native client, magnum api should behave as pass through
16:46:01 <sdake> make sure to title it WIP:
16:46:10 <vilobhmm> sdake : yup
16:46:24 <adrian_otto> ok, vilobhmm ready to advance topics now?
16:46:25 <sdake> tango I dont understand the q
16:46:50 <Tango> sdake: the issue that vilobhmm pointed out
16:47:06 <vilobhmm> adrian_otto : sure..just a last question to you and the team: does this seem fair: "just keep mapping uuid-name in db just the mapping..whereas all the "fresh" data is accessed and updated using k8s ReST endpoints…for native client : get directly from k8s endpoints"
16:47:24 <vilobhmm> for magnum api - just keep...
16:47:51 <vilobhmm> so that i can submit more WIP patches based of this
16:48:45 <adrian_otto> vilobhmm: let's start with a naive implementation that fetches all pod and service and rc related state directly from k8s rather than persisting duplicates of state data in our own db
16:49:07 <vilobhmm> adrian_otto : seems fair
16:49:11 <vilobhmm> we can move ahead
16:49:12 <adrian_otto> if that proves to be a problem, then we can optimize from there
16:49:15 <vilobhmm> thanks all!
16:49:22 <madhuri> adrian_otto: +1, no data in magnum db
16:49:23 <vilobhmm> alrite
16:49:30 <adrian_otto> cool, thanks!
16:49:33 <vilobhmm> madhuri : +1 thanks
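The approach agreed above (no pod/rc/service state in the magnum db; fetch everything fresh from the k8s API by name) can be sketched as follows. The path follows the k8s v1 REST conventions; `pod_url` and the example values are illustrative, not Magnum code:

```python
def pod_url(k8s_endpoint, namespace, name):
    # k8s only supports query-by-name, so lookups are always name-based
    # against the bay's API endpoint; no local state is consulted.
    return "%s/api/v1/namespaces/%s/pods/%s" % (k8s_endpoint, namespace, name)

# An actual client would then GET this URL and return the fresh pod state,
# e.g. with urllib.request.urlopen(pod_url(...)).
```

If this naive always-fetch approach proves slow, caching can be layered on later, as adrian_otto suggests.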
16:49:39 <adrian_otto> link https://blueprints.launchpad.net/magnum/+spec/external-lb Support the kubernetes service external-load-balancer feature (Tango)
16:49:42 <adrian_otto> #link https://blueprints.launchpad.net/magnum/+spec/external-lb Support the kubernetes service external-load-balancer feature (Tango)
16:50:09 <Tango> We got Angus Lees (gus) to engage directly, so that's good news
16:50:42 <Tango> He pointed out 3 new patches that we need.  They are not merged yet, so I built a custom Kubernetes version based on 1.0.3
16:50:45 <adrian_otto> I also saw references to k8s patches
16:51:18 <Tango> I am testing it now.   With gus' help, I am hopeful
16:51:22 <adrian_otto> we could "bird dog" stalk those patches with our k8s friends
16:52:00 <Tango> Angus thinks there may be a few more bugs to fix
16:52:03 <adrian_otto> at a minimum simply express our interest in those by updating them in the k8s project bug tracker
16:52:12 <Tango> Yep
16:52:57 <madhuri> Just for update I generated 1.0.3 kubernetes client code and it is not working
16:52:59 <adrian_otto> ok, Tango once we have a working setup, and we know what patches we want to land upstream, please let me know so I can apply what influence we have there so they get the attention they deserve
16:53:05 <Tango> I am also working on a patch to set the parameters for the heat templates, will upload the initial version shortly, then will likely need feedback from the team
16:53:41 <Tango> And adding a functional test based on wordpress, although it won't run until we have V1 api support
16:54:16 <madhuri> Tango: I generated v1 client code only
16:54:35 <adrian_otto> ok, should we pull in team members to assist with that?
16:54:40 <Tango> madhuri: Can I pick them up?
16:55:14 <madhuri> I am busy with TLS, if any one wants to take it. it will be good
16:55:27 <Tango> Yes it would be good to have help moving to V1 api
16:55:28 <madhuri> Sure Tango
16:55:35 <hongbin> I can help if you want
16:55:45 <adrian_otto> awesome hongbin!
16:55:47 <Tango> hongbin: Thanks hongbin, will ping you
16:55:52 <hongbin> k
16:55:54 <madhuri> Anyone please :)
16:56:29 <madhuri> Tango: Please let me know if I can help anyway
16:56:35 <adrian_otto> #link https://blueprints.launchpad.net/magnum/+spec/secure-docker Secure client/server communication using TLS (apmelton)
16:56:41 <adrian_otto> we have code for this now
16:56:52 <apmelton> got a WIP review up, I'm updating it as reviews show up and patches land
16:56:54 <manjeet_> Tango: I can help in setting parameters for heat templates
16:57:02 <apmelton> won't really be finished until the core TLS stuff lands
16:57:15 <adrian_otto> thanks apmelton
16:57:16 <adrian_otto> #topic Open Discussion
16:57:37 <adrian_otto> I am scheduled to appear in a keynote panel at OpenStack Silicon Valley
16:57:39 <hongbin> I have a question: what is the deadline for landing liberty feature for Magnum?
16:57:54 <adrian_otto> there is a remote chance I may have family business that conflicts with it
16:58:03 <madhuri> +1 hongbin
16:58:09 <adrian_otto> so I may reach out to a few of you to act as a standby
16:58:47 <adrian_otto> the event is on August 26+27 in Mountain View, CA
17:00:02 <adrian_otto> hongbin: there is not an official deadline, but we need all new features ASAP
17:00:16 <adrian_otto> time elapsed
17:00:18 <daneyon_> i have to jump to my next meeting
17:00:40 <adrian_otto> our next meeting is 2015-08-25 at 2200 UTC (sdake chairs)
17:00:43 <adrian_otto> #endmeeting