16:00:57 <adrian_otto> #startmeeting containers
16:00:58 <openstack> Meeting started Tue Jul  7 16:00:57 2015 UTC and is due to finish in 60 minutes.  The chair is adrian_otto. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:59 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:01:01 <openstack> The meeting name has been set to 'containers'
16:01:05 <adrian_otto> #link https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2015-07-07_1600_UTC Our Agenda
16:01:10 <adrian_otto> #topic Roll Call
16:01:14 <adrian_otto> Adrian Otto
16:01:29 <juggler> Perry Rivera
16:01:30 <apmelton> Andrew Melton
16:01:31 <rpothier> Rob Pothier
16:01:33 <Tango> Ton Ngo
16:01:33 <dane_leblanc> Dane LeBlanc
16:01:34 <sdake_> yar \o
16:01:39 <jjlehr> o/
16:01:44 <bradjones> o/
16:01:47 <daneyon> o/
16:01:52 <diga> Digambar Patil
16:01:57 <thomasem> Thomas Maddox
16:01:58 <hongbin> o/
16:02:03 <mfalatic> o/
16:02:41 <adrian_otto> hello juggler, apmelton, rpothier, Tango, dane_leblanc, sdake_, jjlehr, bradjones, daneyon, diga, thomasem, hongbin, and mfalatic
16:03:25 <adrian_otto> I was hoping to see madhuri today as well
16:03:50 <adrian_otto> #topic Announcements
16:04:08 <jjfreric> jjfreric joining
16:04:10 <adrian_otto> 1) We will be releasing liberty-1 today
16:04:16 <adrian_otto> hello jjfreric
16:04:21 <tcammann> hello
16:04:28 <jjfreric> hello
16:04:40 <adrian_otto> 2) Our magnum-ui-core team has been established:
16:04:49 <adrian_otto> #link https://review.openstack.org/#/admin/groups/970,members Initial Members
16:04:56 <eghobo> o/
16:04:57 <adrian_otto> The group has the initial list of reviewers in it.
16:05:07 <adrian_otto> hello tcammann and eghobo
16:06:07 <adrian_otto> The magnum-core group is linked to the magnum-ui-core group, but there is no expectation that members of magnum-core will review all the code in that repo.
16:06:26 <adrian_otto> we are trusting the magnum-ui-core group to carry most of that responsibility.
16:06:32 <adrian_otto> Questions on this?
16:07:00 <sdake_> yay ui :)
16:07:07 <adrian_otto> 3) I am back from vacation, returning to full duty again this week
16:07:38 <adrian_otto> doing some catch-up.
16:07:45 <tcammann> wb
16:07:47 <adrian_otto> any other announcements from team members?
16:07:53 <adrian_otto> tcammann: tx
16:07:57 <sdake_> midcycle
16:08:18 <adrian_otto> yes, we have a poll for midcycle participation
16:08:18 <sdake_> I don't know if this is an announcement or something to discuss during the meeting
16:08:19 <adrian_otto> anyone have that link handy?
16:08:21 * juggler is playing catchup as well
16:08:29 <tcammann> 00~http://doodle.com/pinkuc5hw688zhxw01~
16:08:29 <sdake_> adrian_otto sec i'll find link
16:08:53 <tcammann> uh mangled clipboard sorry
16:09:05 <adrian_otto> #link http://doodle.com/pinkuc5hw688zhxw Midcycle participation poll
16:09:10 <sdake_> ya tcammann on the ball :)
16:09:31 <sdake_> Tango is hosting at IBM facilities
16:09:40 <adrian_otto> thanks Tango and IBM!
16:09:50 <Tango> Glad to help
16:09:59 <apuimedo> sorry I'm late
16:10:10 <adrian_otto> hello apuimedo
16:10:33 <xu_> hi, all, Xu from Hyper
16:10:34 <adrian_otto> ok, so please respond to the poll so we can select a date and plan for that
16:10:43 <adrian_otto> hi xu_
16:11:07 <adrian_otto> daneyon: I meant to ask you for this in advance so you could prepare
16:11:14 <adrian_otto> sorry for springing it
16:11:19 <adrian_otto> #topic Container Networking Subteam Update
16:11:27 <daneyon> adrian_otto no worries
16:11:29 <adrian_otto> we held an initial meeting last Thu
16:11:41 <daneyon> nothing much new to report. I was on vacation last week and training this week.
16:11:50 <adrian_otto> #link http://eavesdrop.openstack.org/meetings/container_networking/2015/ Previous Meetings
16:12:09 <daneyon> Please review the previous meeting and ping me on IRC if you have any questions.
16:12:16 <daneyon> #link http://eavesdrop.openstack.org/meetings/container_networking/2015/
16:12:43 <adrian_otto> ok, so each week, I'd like to get a 1-2 minute update from the subteam here so the rest of the group can have an idea about what's happening there at a high level
16:12:44 <daneyon> It may be best not to rehash everything from last week's update
16:13:02 <adrian_otto> right, we have the transcript for those who want to dive in
16:13:15 <adrian_otto> #topic Review Action Items
16:13:19 <daneyon> adrian_otto that is what i'm planning for. I would normally have news, but with vacation and training not much.
16:13:29 <adrian_otto> 1) madhuri to kick off thread about cert provided by stackforge project anchor
16:13:36 <adrian_otto> Status?
16:13:44 <daneyon> anyone participating in the subteam, please continue to add content to the EP
16:13:48 <juggler> adrian_otto should we respond to the survey if we can attend remotely?
16:13:50 <daneyon> #link https://etherpad.openstack.org/p/magnum-native-docker-network
16:14:12 <adrian_otto> juggler: yes, respond anyway, and add a comment indicating you plan to participate as a remote attendee
16:14:31 <adrian_otto> tx daneyon
16:14:43 <daneyon> yw adrian_otto
16:14:44 <juggler> adrian_otto ok
16:14:48 <adrian_otto> ok, I will carry this action item forward
16:15:03 <adrian_otto> #action madhuri to kick off thread about cert provided by stackforge project anchor
16:15:14 <adrian_otto> 2) madhuri to create blueprint on self-signed ca certs via magnum bay-creation
16:15:16 <adrian_otto> Status?
16:15:38 * adrian_otto scans for a blueprint
16:15:56 <hongbin> This one?
16:15:59 <hongbin> #link https://blueprints.launchpad.net/magnum/+spec/magnum-as-a-ca
16:16:33 <adrian_otto> Status: complete
16:16:44 <adrian_otto> should we plan to discuss that BP in the next section of the meeting?
16:17:17 <adrian_otto> and the last action item
16:17:18 <adrian_otto> 3) all core reviewers to review https://review.openstack.org/#/c/194905/
16:17:44 <adrian_otto> not many reviewers have commented on that one yet
16:18:30 <adrian_otto> I will carry this item forward
16:18:55 <adrian_otto> #action All core reviewers to review https://review.openstack.org/#/c/194905/
16:19:04 <sdake_> ya if we are to do specs everyone needs to review them
16:19:18 <sdake_> I have seen instances where people say "hey I never watched that spec and now it's implemented, I don't like it"
16:19:21 <sdake_> that happens all the time
16:19:21 <adrian_otto> #action adrian_otto to begin an ML thread for https://review.openstack.org/194905
16:19:28 <sdake_> the whole point of a spec is to avoid that ;-)
16:19:38 <sdake_> this is why I dont like specs ;)
16:19:39 <adrian_otto> sdake_: agreed, thanks
16:20:08 <adrian_otto> let's not underestimate the value of a well considered consensus
16:20:24 <adrian_otto> specs are how we get that; it should be worth some admin overhead to get there
16:20:25 <sdake_> agree it makes sense in certain cases, I use them on kolla occasionally
16:20:28 <sdake_> but its not the norm :)
16:20:57 <adrian_otto> for clarity, Magnum does not require specs yet, but they are encouraged for major features.
16:21:08 <adrian_otto> we are exploring the use of them now
16:21:25 <adrian_otto> ok, that brings us to our next topic
16:21:35 <adrian_otto> #topic Blueprint/Bug Review
16:21:42 <adrian_otto> New Blueprints for Discussion
16:21:50 <adrian_otto> #link https://blueprints.launchpad.net/magnum/+spec/hyperstack Power Magnum to run on metal with Hyper
16:22:18 <xu_> Let me know for questions or thoughts, thx
16:22:26 <hongbin> I would like to learn more about what is Hyper
16:22:44 <hongbin> And how well it fit into Magnum
16:22:46 <sdake_> so this is essentially another os_distro type correct?
16:22:56 <xu_> Hyper is a hypervisor-agnostic Docker runtime
16:22:58 <sdake_> hongbin I think its a good fit for what we are after
16:22:59 <adrian_otto> sdake_: yes, that's how it's proposed
16:23:30 <sdake_> xu_ will it run k8s?
16:23:43 <xu_> Hyper allows you to run Docker images with any hypervisor, but without the need for a guest OS like CoreOS or CentOS
16:23:48 <sdake_> and swarm?
16:23:58 <adrian_otto> I've suggested that it be implemented for all supported bay types
16:24:05 <adrian_otto> so the answer should be yes
16:24:11 <sdake_> xu_ our abstraction point is a layer higher then docker
16:24:12 <apuimedo> yes, hyper is a very good fit
16:24:25 <xu_> Hyper is not a cluster system, it is single host engine, like Docker daemon
16:24:34 <apuimedo> and looking at the code, it would be easy to extend the Go to make it plug the neutron port types
16:24:46 <daneyon> xu_ just to clarify, CoreOS is not a guest OS but a host OS
16:25:00 <xu_> Hyper uses the native k8s podfile for atomic scheduling unit
16:25:01 <apuimedo> its tap management is clean and easy to grasp
16:25:25 <sdake_> I like the model of hyperv
16:25:46 <sdake_> but it would need to somehow launch either warm or k8s to be useful in magnum imo
16:25:54 <sdake_> but if it did, that would be fantastic ;)
16:26:01 <adrian_otto> was hyperv == Microsoft HyperV, or a typo, sdake_?
16:26:02 <sdake_> warm/swarm
16:26:04 <xu_> this bp is to make magnum run on bare-metal, in parallel with nova, not a component on top of nova.
16:26:08 <daneyon> xu_ re: atomic scheduling so Hyper is more than just a single host daemon?
16:26:23 <sdake_> sorry just got up - feeling sick - whatever tech we are talking about :)
16:26:29 <adrian_otto> I think you meant Hyper, but just wanted to be sure
16:26:34 <sdake_> ya hyper
16:26:37 <adrian_otto> ok, tx
16:26:39 <apuimedo> daneyon: yes, it's single host
16:26:46 <sdake_> xu_ do you understand what i'm saying?
16:26:51 <sdake_> xu_ with the abstraction point a layer above
16:27:00 <sdake_> magnum abstracts swarm and kubernetes
16:27:03 <xu_> daneyon re: no, hyper advocates pod, everything is in a unit of pod
16:27:05 <sdake_> magnum doesn't really abstract docker
16:27:13 <apuimedo> you'd need magnum to do the orchestrating between hosts
16:27:14 <daneyon> I think stating that this BP allows magnum to run on BM could be confusing or misleading
16:27:25 <adrian_otto> so this approach makes sense where the compute form factor (nova instance type) is a VM and the use case is running containers with the various Magnum Bay Types
16:27:33 <apmelton> I think hyper could technically fit in, it would just be another coe
16:27:36 <daneyon> Ironic should be the entrypoint for Magnum on BM
16:28:12 <adrian_otto> apmelton: no, it's not a coe
16:28:20 <sdake_> why not write some code and see how it fits into the system xu_ ?
16:28:22 <adrian_otto> it needs to be combined with a coe
16:28:40 <adrian_otto> sdake_: let's take one step back
16:28:41 <sdake_> adrian_otto right this is what I keep getting at, but I don't think xu_ gets it :)
16:28:44 <apuimedo> adrian_otto: exactly
16:28:44 <daneyon> xu_ OK so pods are units of mgt, but hyper does not include multi-host scheduling, correct?
16:29:00 <adrian_otto> what I'd like to do is check with the team to see if there are objections to approving the direction of this BP
16:29:14 <xu_> sdake yes, i think the idea of abstraction makes sense
16:29:18 <apmelton> looking at this image: https://hyper.sh/img/hyper.png
16:29:20 <adrian_otto> and allowing the Hyper team to draft a proposal in the form of a spec or reviews
16:29:22 <apmelton> hyper looks a lot like a coe
16:29:23 <eghobo> xu_: hyper is a container engine like CoreOS Rocket, isn't it?
16:29:38 <apmelton> it looks like a CoE + an OS distro
16:29:38 <xu_> hyper is more for some cloud providers who want to build a secure, public, native CaaS, instead of CaaS on top of IaaS.
16:29:40 <sdake_> no objections from me - think the tech looks solid - not sure how it's actually going to fit into the system ;)
16:29:48 <apuimedo> xu_: correct me if I'm wrong, since I looked at it like 3 weeks ago
16:29:55 <xu_> eghobo: yes
16:29:58 <adrian_otto> and part of that should really be an FAQ that addresses the questions that folks like apmelton and eghobo are raising today.
16:30:05 <apuimedo> but wouldn't it be possible to just use swarm to orchestrate hyper hosts?
16:30:19 <adrian_otto> apuimedo: I think that's possible
16:30:20 <apuimedo> (hosts running the hyper daemon in place of the docker one)
16:30:43 <apuimedo> IIRC it should be. It seems the easiest way to integrate
16:30:44 <daneyon> xu_ so basically instead of running containers in standard VMs to address container security, run your Docker images in hyper VMs that are lightweight?
16:30:53 <xu_> apuimedo: it is possible
16:31:02 <apuimedo> exactly daneyon ;-)
16:31:09 <adrian_otto> ok, I'm going to move to an #agreed to approve the direction of https://blueprints.launchpad.net/magnum/+spec/hyperstack
16:31:16 <adrian_otto> anyone object to that?
16:31:33 <adrian_otto> or do you need more time to understand it first?
16:31:34 <xu_> daneyon: yes, and that will eliminate the guest OS, so kernel+docker image, totally immutable infra
16:31:46 <eghobo> adrian_otto: we need to provide more details before approving
16:31:57 <adrian_otto> there are two stages of approval
16:32:06 <daneyon> xu_ I looked at hyper about 4 weeks ago and I like it. I have no practical experience yet, but I like the model
16:32:08 <adrian_otto> we have directional approval which asks for a proposal (spec)
16:32:17 <apuimedo> same here
16:32:29 <adrian_otto> and we have design approval which accepts the proposal, and expects code to follow
16:32:41 <hongbin> A question
16:32:46 <xu_> daneyon: thx, hyper is technically similar to Intel Clear Containers and MS Hyper-V containers.
16:32:51 <hongbin> Is Hyper going to be in tree or a separate plugin?
16:32:51 <adrian_otto> I'm asking about directional approval, which I normally handle without asking first.
16:32:55 <daneyon> xu_ I agree with sdake_ write some code to demo basic functionality.
16:33:16 <adrian_otto> hongbin: no, this would be in-tree for Magnum
16:33:44 <xu_> daneyon: sure
16:33:51 <hongbin> adrian_otto: I think I need more time to learn that
16:33:53 <adrian_otto> ok, so daneyon and sdake appear to be in support of directional approval
16:33:59 <sdake_> realistically the only way xu_ is going to get something that works reasonably well with kubernetes and swarm is to prototype it, not spec it
16:33:59 <apuimedo> in-tree swarm flavor?
16:34:24 <eghobo> I am not sure why hyper should be a magnum plugin, sounds like it should be a kube plugin
16:34:30 <sdake_> but how we get from point a - z i'm not totally concerned about ;)
16:34:57 <adrian_otto> xu_: are you open to attempting a prototype first?
16:35:18 <daneyon> seems like hyper replaces the docker daemon, so not sure it fits with swarm
16:35:29 <daneyon> or any coe's
16:35:44 <daneyon> xu_ how does hyper do multi-host?
16:35:46 <sdake> it needs to integrate with swarm
16:35:57 <xu_> eghobo: hyper works better with neutron and cinder, which means that CaaS=magnum+neutron+cinder+hyper+BM
16:36:43 <daneyon> xu_ does it simply work better with neutron/cinder because containers are encapsulated in VM's?
16:36:57 <xu_> daneyon: the same way as Docker does multi-host, and probably even better, due to the mature SDN solutions available in hypervisor space
16:37:02 <sdake> how about we start a mailing list thread about the architecture so we don't eat up our entire meeting agenda :)
16:37:03 <adrian_otto> so we definitely have interest. I suggest this. I'll put this back on the agenda for next week. In the mean time, let's open an ML thread to discuss the related questions by email with the team
16:37:19 <apuimedo> good idea adrian_otto
16:37:23 <adrian_otto> let's get clarity on the value proposition
16:37:28 <adrian_otto> and then revisit this.
16:37:31 <xu_> daneyon: yes, the nature of hypervisor, strong isolation for multi-tenancy public cloud
16:37:35 <hongbin> wfm
16:37:44 <adrian_otto> you are welcome to write a prototype in the mean time and submit it as a review
16:37:48 <apuimedo> xu_: if you could make a diagram of the proposed architecture of the fit with hyper and other OpenStack components
16:37:53 <apuimedo> it would be great
16:37:55 <adrian_otto> but don't expect us to merge it without consensus on the blueprint.
16:37:58 <adrian_otto> make sense?
16:38:09 <daneyon> +1 on writing a minimal prototype
16:38:18 <eghobo> +1
16:38:20 <sdake> blueprint or prototype - whatever works
16:38:25 <sdake> I think xu_ would learn a lot from a prototype
16:38:30 <sdake> or one from hyper
16:38:32 <sdake> whoever does the work
16:38:41 <xu_> adrian_otto: that would be quite a lot of changes to magnum, even for a prototype.
16:39:19 <adrian_otto> it's up to you if you work on that or not, but it may speed up a consensus
16:39:28 <xu_> ok, will do
16:39:31 <adrian_otto> I'm happy to make the final call on this, but I want team input
16:39:39 <adrian_otto> so the ML thread is the first way we will get that
16:40:03 <adrian_otto> xu_: can you start that thread, or would you like my assistance with that?
16:40:46 <xu_> adrian_otto: I'm new to OpenStack, I'd appreciate it if you could shed some light!
16:41:05 <adrian_otto> #action adrian_otto to assist xu_ with starting an ML thread about the Hyper blueprint for Magnum
16:41:12 <adrian_otto> ok, next subtopic for Blueprints
16:41:13 <xu_> https://trello-attachments.s3.amazonaws.com/55545e127c7cbe0ec5b82f2b/1095x362/558bcbf7a1ab7aa4b4753b1232d3886f/IaaS_vs_CaaS.png overview arch
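The sketch below is only a rough illustration of the direction discussed above: treating Hyper as another image/os_distro choice under an existing bay type, with Swarm talking to hyperd on each node instead of dockerd, as apuimedo suggested. Nothing here was agreed in the meeting; the endpoint, token, and hyper-enabled Glance image name are placeholders, and the assumption that no new baymodel API fields would be needed is exactly that, an assumption.

    # Rough sketch (assumption, not an agreed design): reuse the existing
    # swarm bay type and select a hyper-capable host image via the baymodel.
    import requests

    MAGNUM_ENDPOINT = "http://controller:9511/v1"   # placeholder Magnum API endpoint
    TOKEN = "<keystone-token>"                      # placeholder Keystone token

    baymodel = {
        "name": "swarm-on-hyper",
        "coe": "swarm",                   # existing bay type; swarm would talk to
                                          # hyperd instead of dockerd (assumption)
        "image_id": "hyper-host-image",   # hypothetical Glance image with hyperd
        "keypair_id": "testkey",
        "external_network_id": "public",
        "flavor_id": "m1.small",
    }

    resp = requests.post(
        MAGNUM_ENDPOINT + "/baymodels",
        headers={"X-Auth-Token": TOKEN, "Content-Type": "application/json"},
        json=baymodel,
    )
    resp.raise_for_status()
    print(resp.json())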
16:41:27 <adrian_otto> Essential Blueprint Updates
16:41:29 <adrian_otto> #link https://blueprints.launchpad.net/magnum/+spec/objects-from-bay Obtain the objects from the bay endpoint (sdake)
16:41:54 <sdake> yar
16:41:58 <adrian_otto> sdake: any updates on this?
16:42:02 <sdake> this is two milestones worth of work
16:42:10 <sdake> i was going to try to recruit tom to help ;)
16:42:16 <sdake> but haven't got there yet
16:42:19 <sdake> tom, interested in helping? :)
16:42:37 <sdake> tcommann^^
16:42:51 <apuimedo> tcammann: tcammann_ ?
16:43:00 <sdake> sorry about the misspelling
16:43:01 <tcammann> sure
16:43:07 <sdake> cool lets catch up off line
16:43:15 <adrian_otto> Cool, I'll revisit this one next week
16:43:18 <adrian_otto> next...
16:43:19 <adrian_otto> #link https://blueprints.launchpad.net/magnum/+spec/secure-kubernetes Secure the client/server communication between ReST client and ReST server (madhuri)
16:43:22 <sdake> i think with both me and tom, it will take both l2 and l3 adrian_otto
16:43:39 <adrian_otto> madhuri is not with us today, is there anyone working on this who can make an update?
16:43:49 <sdake> i have looked at the reviews
16:43:50 <adrian_otto> perhaps apmelton?
16:43:54 <sdake> they look solid
16:43:59 <sdake> not sure if they actually work or not
16:44:10 <sdake> last info I got from her seem to indicate it *does* work properly
16:44:14 <apmelton> adrian_otto: nope, I'll go through the reviews this week and check them out though
16:44:35 <adrian_otto> ok, I'll revisit this one next week.
16:44:38 <adrian_otto> Next one...
16:44:47 <adrian_otto> #link https://blueprints.launchpad.net/magnum/+spec/external-lb Support the kubernetes service external-load-balancer feature (sdake)
16:44:57 <adrian_otto> Tango: were you working on this one?
16:45:16 <sdake> how did I get assigned to that?
16:45:21 <Tango> Yes, I am
16:45:31 <adrian_otto> maybe I read it wrong? I can reassign it
16:45:39 <adrian_otto> update on this, Tango?
16:46:05 <Tango> I am a bit stuck on getting the latest Kubernetes version to run on Fedora Atomic
16:46:08 <adrian_otto> oh, I misread the Assignee
16:46:13 <adrian_otto> sorry sdake!
16:46:26 <Tango> Asked for help on the Google ML, got a few pointers
16:46:28 <adrian_otto> you were the registrant, not the assignee
16:46:46 <Tango> Other people are also complaining about similar problem
16:46:54 <Tango> although on Red Hat
16:47:02 <adrian_otto> Tango: do we have what we need to proceed, or are we blocked on upstream work?
16:47:05 <apmelton> Tango: are you doing this on Atomic 22 or 21?
16:47:16 <Tango> Atomic 21
16:47:25 <Tango> I am going to try Ubuntu
16:47:30 <sdake> apmelton he is building with latest version of kube tho
16:47:39 <apmelton> I'm not sure there
16:47:44 <Tango> This is their version 0.19
16:47:54 <apmelton> was just going to say I know of multiple issues with Docker 1.6.0 in 22
16:48:22 <apmelton> that he may have been hitting via k8s
16:48:26 <sdake> its likely a configuration change
16:48:27 <eghobo> Tango: maybe Fedora, most kube examples are on the Fedora distro
16:48:30 <sdake> like some config options changed
16:48:34 <sdake> between 0.15 and 0.19
16:48:40 <sdake> and we need to set those new config options in the template
16:48:46 <sdake> remember that mess apmelton
16:48:51 <sdake> or hongbin
16:48:53 * apmelton shudders
16:48:55 <Tango> There are a few options that got deprecated
16:48:57 <sdake> this is what tango is up against here
16:49:21 <Tango> but I may have missed other updates that may be required
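To make the point above concrete: upgrading the Kubernetes version means chasing flag renames in the sysconfig fragments the heat templates write out. The snippet below is illustrative only; the specific rename (--portal_net to --service-cluster-ip-range) is an assumption about that 0.15-to-0.19 era, not something confirmed in the meeting.

    # Illustrative sketch: rewrite deprecated apiserver flags in a
    # KUBE_*_ARGS style sysconfig line. The rename below is an assumption.
    RENAMED_FLAGS = {
        "--portal_net": "--service-cluster-ip-range",
    }

    def update_sysconfig_line(line):
        """Replace deprecated flag names with their newer equivalents."""
        for old, new in RENAMED_FLAGS.items():
            line = line.replace(old, new)
        return line

    print(update_sysconfig_line('KUBE_API_ARGS="--portal_net=10.254.0.0/16"'))
    # prints: KUBE_API_ARGS="--service-cluster-ip-range=10.254.0.0/16"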
16:49:45 <adrian_otto> ok, so considering the importance of this BP, do we have the right number of Stackers working on this, or do we need more help on this?
16:49:50 <adrian_otto> ^^ Tango
16:49:52 <sdake> ya porting to the latest version of kube usually takes 2-3 engineers in the community about 3-4 days of full time work
16:49:53 <apmelton> Tango: a bit later in the week I might be able to help ya poke around
16:50:24 <Tango> Sure, it would be great to have more eyes on just getting k8s to work
16:50:28 <suro-patz> Tango: feel free to ping me, if you need any help on this
16:50:31 <Tango> They are changing so fast
16:50:39 <adrian_otto> ok, so team, please help out on this one if you can.
16:50:46 <sdake> this is why I dont upgrade the images often ;-)
16:50:58 <Tango> Growing pain :)
16:50:59 <hongbin> Tango: Just let me know if you need some help
16:51:01 <sdake> because it's a huge time sink keeping up with ubernetes
16:51:24 <Tango> Sounds great, I will give more details on the IRC
16:51:28 <adrian_otto> s/ubernetes/kubernetes/
16:51:37 <sdake> heh
16:51:40 <sdake> ubernetes lol :)
16:51:44 <adrian_otto> Ubernetes is actually a thing (federation for Kubernetes)
16:52:05 <adrian_otto> ok, I have one more BP to get an update on
16:52:07 <sdake> interesting
16:52:14 <adrian_otto> #link https://blueprints.launchpad.net/magnum/+spec/secure-docker Secure client/server communication using TLS (apmelton)
16:52:32 <apmelton> I'll know more after poking around the reviews and the spec
16:52:57 <adrian_otto> ok, we can revisit that one next week
16:53:01 <apmelton> sounds good
16:53:06 <adrian_otto> #topic Open Discussion
16:53:54 <tcammann> Any idea when you will know about midcycle dates? I need to get approval to fly 5000 miles
16:54:18 <adrian_otto> sdake: did we already put the poll on the ML?
16:54:31 <tcammann> yes he did
16:54:48 <adrian_otto> ok, so let's follow up on that thread with a deadline for selecting the date
16:54:59 <adrian_otto> date for a date
16:55:21 <sdake> i set a deadline
16:55:36 <adrian_otto> great, when is that?
16:55:46 <sdake> from what I can tell ppl don't seem too interested in attending
16:55:55 <sdake> I dont recall but its in the email post
16:56:00 <adrian_otto> sdake: that always happens
16:56:10 <eghobo> I think the deadline is today
16:56:13 <adrian_otto> ok, I'll be responding today
16:56:26 <sdake> adrian_otto I'll hand off process to you ok?
16:56:29 <adrian_otto> and my availability is rather limited, so that may help us narrow the date range a bit
16:56:36 <sdake> adrian_otto I have a midcycle I am already executing the planning for
16:56:42 <adrian_otto> ok, I'll take that one
16:56:50 <sdake> i just wanted to get it kicked off
16:57:02 <adrian_otto> #action adrian_otto to select a date for the Midcycle and announce to the team members
16:57:21 <adrian_otto> ok, we have just a couple of minutes left
16:57:46 <eghobo> tcammann: any agreement about heat templates?
16:57:53 <adrian_otto> sdake: on the subject of the liberty-1 release, should I tag what's in master right now?
16:57:56 <tcammann> oh yes! thanks eghobo
16:58:01 <adrian_otto> or is there a specific commit I should tag?
16:58:19 <sdake> adrian_otto I'll get to you with hash at noonish
16:58:20 <juggler> Calabasas training opp announcement: http://www.eventbrite.com/e/openstack-quick-start-all-day-community-training-tickets-17622811303
16:58:24 <sdake> I have a dentist appointment soon
16:58:30 <adrian_otto> ok, tx.
16:58:39 <adrian_otto> hope your tooth is doing much better by now
16:58:44 <juggler> me too
16:58:46 <sdake> it is not in pain
16:58:48 <sdake> thanks :)
16:58:54 <sdake> it just needs to be sealed
16:59:02 <sdake> anyway yikes lets not talk about it plz :)
16:59:06 <adrian_otto> heh
16:59:10 <tcammann> Can we put that as an action to discuss for next time, the future of heat-coe-templates
16:59:22 <juggler> sdake heh ok
16:59:59 <tcammann> adrian_otto: ^
17:00:12 <adrian_otto> ok, thanks everyone. Our next meeting is Tuesday 2200 UTC on 2015-07-14. And our network subteam meeting is on Thursday 2015-07-16 at 1800 UTC.
17:00:23 <juggler> thx ao
17:00:24 <adrian_otto> tcammann: got it, thanks!
17:00:26 <tcammann> ty
17:00:29 <adrian_otto> #endmeeting