22:00:45 <adrian_otto> #startmeeting containers
22:00:46 <openstack> Meeting started Tue Apr 21 22:00:45 2015 UTC and is due to finish in 60 minutes.  The chair is adrian_otto. Information about MeetBot at http://wiki.debian.org/MeetBot.
22:00:47 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
22:00:49 <openstack> The meeting name has been set to 'containers'
22:00:53 <adrian_otto> #link https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2015-04-21_2200_UTC Our Agenda
22:00:58 <adrian_otto> #topic Roll Call
22:01:01 <adrian_otto> Adrian otto
22:01:05 <jjlehr> Janek Lehr
22:01:13 <rpothier> Rob Pothier
22:01:16 <Tango> Ton Ngo
22:01:18 <madhuri____> Madhuri Kumari
22:01:23 <fangfenghua> Da
22:01:28 <sdake> o/
22:01:30 <fangfenghua> Dance ghoul
22:01:42 <vahidh> Vahid Hashemian
22:01:42 <joffter> Jennifer Carlucci
22:01:45 <apmelton> Andrew Melton
22:01:46 <suro-patz> suro-patz
22:01:51 <thomasem> Thomas Maddox
22:02:00 <sdake> wow we got an army of people present :)
22:02:07 <adrian_otto> hello jjlehr, rpothier, Tango, madhuri____, fangfenghua, sdake, vahidh, apmelton, suro-patz, and thomasem
22:02:08 <fangfenghua> Fangfenghua
22:02:12 <adrian_otto> that's a quorum.
22:02:31 <adrian_otto> juggler: you had a topic to add?
22:02:48 <juggler> o/
22:02:59 <juggler> topic sent
22:03:02 <juggler> :)
22:03:03 <adrian_otto> oh, I see that in PRIVMSG
22:03:13 <adrian_otto> you will have first crack in Open Discussion for that
22:03:27 <adrian_otto> we should have time
22:04:02 <adrian_otto> ok, advancing topics
22:04:07 <adrian_otto> #topic Announcements
22:04:20 <adrian_otto> 1) Our IRC Meeting will be skipped on 2015-05-19 because we will be at the Vancouver Design Summit
22:04:30 <adrian_otto> my apologies to those of you not attending the summit
22:04:58 <adrian_otto> so be sure to raise your topics the previous week, or use the ML during that time.
22:05:05 <adrian_otto> 2) I am planning to tag a release of Magnum and python-magnumclient on Saturday 2015-04-25.
22:05:14 <adrian_otto> is there any reason to change this plan?
22:05:21 <sdake> magnumclient
22:05:28 <sdake> tags x.y.z
22:05:33 <sdake> not 2015.1.0
22:05:38 <adrian_otto> that's right
22:05:46 <adrian_otto> it's 0.1.0 now
22:05:51 <sdake> right
22:05:56 <adrian_otto> to be compatible with pypi
22:05:57 <sdake> ok, just informing
22:06:03 <sdake> wasn't sure if you knew or not
22:06:09 <adrian_otto> yes, thanks!
22:06:22 <sdake> In the future I expect we won't need to lockstep release our client
22:06:43 <adrian_otto> we do have new features landing that have support in both
22:07:02 <adrian_otto> but if we don't have new client/server paired features then we might only release one or the other, correct.
22:07:13 <adrian_otto> any other comments on release plans?
22:07:28 <adrian_otto> 3) I am working with the author of https://pypi.python.org/pypi/magnum to see if we can arrange to use that namespace.
22:07:58 <adrian_otto> otherwise we will use something like https://pypi.python.org/pypi/magnum
22:08:08 <juggler> sounds good
22:08:19 <sdake> did you run that by openstack-infra?
22:08:27 <sdake> the second option
22:08:31 <adrian_otto> if needed, I will
22:08:37 <adrian_otto> first things first. :-)
22:08:40 <sdake> I would like if you would please :)
22:08:41 <apmelton> adrian_otto: were those supposed to be two different links?
22:08:44 <sdake> yes, just thinking ahead
22:08:50 <adrian_otto> my paste goofed
22:09:02 <adrian_otto> openstack-magnum would be the alternative fallback approach
22:09:07 <apmelton> gotcha
22:09:25 <sdake> i don't know if openstack-infra would like that or not adrian_otto
22:09:31 <sdake> please confirm - had irc discussion last week on this topic
22:09:37 <sdake> and brought that up, seemed to be contentious
22:10:02 <adrian_otto> ok, I want to cross that bridge when we come to it, especially if it will be controversial
22:10:27 <sdake> cool, anything permanent wrt namespaces - is, well, permanent :)
22:10:34 <sdake> so thats the controversy I think
22:10:37 <sdake> anyway we can move on :)
22:10:43 <adrian_otto> 4) PTL Elections closed, I will serve as your PTL for the Liberty release.
22:11:04 <juggler> +1
22:11:08 <apmelton> congrats!
22:11:11 <mfalatic> \o/
22:11:19 <madhuri____> congrats
22:11:24 <adrian_otto> Thanks everyone
22:11:25 <suro-patz> +1
22:11:26 <Tango> +1
22:11:28 <juggler> wtg adrian
22:11:32 <vahidh> +1
22:11:41 <sdake> grats bro :)
22:11:44 <adrian_otto> but I did not win an election because I was unopposed
22:11:45 <joffter> congratulations!
22:11:55 <sdake> irrelevant
22:12:05 <adrian_otto> ok, I appreciate your congratulations!
22:12:15 <juggler> agreed, sdake
22:12:21 <adrian_otto> any other announcements from team members?
22:12:39 <sdake> adrian_otto but I used that line on kolla too :)
22:12:59 <adrian_otto> <3
22:13:14 <adrian_otto> ok, let's advance to action items
22:13:20 <adrian_otto> #topic Review Action Items
22:13:29 <adrian_otto> 1) adrian_otto to poll participants for a discussion with k8s devs about external-lb feature
22:13:42 <adrian_otto> completed. I can pause a moment and see what responses came back
22:14:42 <adrian_otto> there were no responses. I'll need to follow up with them
22:14:51 <Tango> It appears that the current code uses V1 of LBaaS
22:15:02 <Tango> We can ask when they plan to move to V2
22:15:03 <adrian_otto> right, we had some good news on that BP
22:15:28 <sdake> which version is in kilo?
22:15:37 <adrian_otto> #action adrian otto to regroup with k8s devs to select a time to discuss external-lb
22:15:40 <Tango> I think V2
22:15:40 <adrian_otto> v1
22:15:47 <adrian_otto> uuuh
22:15:51 <Tango> oh ok
22:16:03 <adrian_otto> actually that may be moving more quickly than I have been tracking it
22:16:10 <adrian_otto> so ignore my comment
22:16:13 <sdake> can we get an action to confirm
22:16:27 <adrian_otto> who would like to take that one?
22:16:33 <Tango> I can do that
22:16:39 <sdake> third party confirmation
22:17:01 <adrian_otto> #action Tango to confirm what version of LBaaS API is in Kilo, and what versions k8s supports
22:17:01 <sdake> actually i guess it doesn't matter
22:17:11 <adrian_otto> that the right action?
22:17:17 <sdake> wfm
22:17:20 <adrian_otto> kk
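[Editor's note: a hedged sketch of how the LBaaS version question above could be checked against a running kilo devstack; the extension and command names below are assumptions based on the kilo-era neutron CLI, not something confirmed in the meeting.]

    # list loaded neutron extensions; "lbaas" indicates v1, "lbaasv2" indicates v2
    neutron ext-list | grep -i lbaas
    # v1-style command (fails if only v2 is deployed)
    neutron lb-pool-list
    # v2-style command (requires the lbaasv2 extension)
    neutron lbaas-loadbalancer-list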
22:17:32 <adrian_otto> #topic Blueprint/Task Review
22:17:46 <adrian_otto> so this one is a cluster of a few links
22:17:47 <adrian_otto> 1) Following two patches should be in a Liberty branch, or after a Kilo branch is made.
22:17:53 <adrian_otto> #link https://review.openstack.org/174209 Update rc support a manifest change
22:18:00 <adrian_otto> #link https://review.openstack.org/174208 Update service support a manifest change
22:18:14 <adrian_otto> so the question is where and when to merge those
22:18:29 <adrian_otto> sdake, you have remarks on this?
22:18:34 <sdake> yes
22:18:44 <sdake> branch kilo on 4/25
22:18:51 <sdake> from master
22:19:06 <sdake> ping ttx for details on how to do this (I honestly don't know how it's done)
22:19:13 <sdake> then all new changes go into master
22:19:16 <adrian_otto> ok, I think I know where those notes are
22:19:18 <sdake> master becomes new liberty
22:19:33 <sdake> we backport to kilo
22:19:39 <sdake> or submit changes to kilo
22:19:41 <sdake> for the rc series
22:19:57 <sdake> preferrably backport
22:20:10 <sdake> tag 2015.1.0.rc1
22:20:13 <sdake> or whatever it's called
22:20:20 <sdake> and each rc gets a new tag on the branch
22:20:35 <sdake> how to actually do all the tagging with gerrit - not sure
22:20:38 <sdake> i can find out if you like
22:20:38 <adrian_otto> yep, that makes sense
22:20:56 <adrian_otto> I got the tagging stuff down
22:20:57 <sdake> but i am super overloaded atm
22:21:02 <sdake> cool
22:21:13 <adrian_otto> I think we might be missing the tarball job though
22:21:22 <sdake> it's definitely in there
22:21:27 <sdake> whether it works or not is a different story
22:21:30 <adrian_otto> ok, good
22:21:36 <sdake> we can spin rcs as needed for the tarball job
22:21:57 <sdake> ttx can help here, he knows what to do
22:22:01 <sdake> just have to ask nicely :)
22:22:12 <sdake> (re tarball job)
22:22:23 <adrian_otto> ok, got it.
22:22:30 <sdake> i can't actually commit for him tho
22:22:37 <sdake> so there ya go :)
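[Editor's note: for readers following along, the branch-and-tag flow sdake describes looks roughly like the git sketch below. In practice the stable branch and tags for an OpenStack project are pushed through Gerrit by the release team (hence "ping ttx"), so these raw commands are illustrative only.]

    # cut stable/kilo from master on 4/25, then tag release candidates on it
    git checkout master
    git checkout -b stable/kilo
    git tag -s 2015.1.0rc1 -m "kilo release candidate 1"   # each rc gets a new tag
    # pushing the branch/tag is normally done via release tooling, not directly;
    # shown here only to make the flow concrete
    git push gerrit stable/kilo 2015.1.0rc1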
22:22:38 <adrian_otto> is Lan Qi Song present with us today?
22:22:56 <adrian_otto> was not in the roll call
22:23:13 <adrian_otto> so sdake, on the subject of the -2
22:23:24 <adrian_otto> will you be around on 4/25 to lift that?
22:23:28 <sdake> ack
22:23:36 <sdake> ping me on irc
22:23:42 <adrian_otto> ok, tx.
22:23:43 <sdake> and i'll remove immediately
22:23:46 <adrian_otto> cool.
22:23:47 <sdake> or i can remove now and you can apply -2
22:23:55 <adrian_otto> that's what I was going to suggest.
22:24:02 <adrian_otto> let's do that now while we are thinking of it
22:24:04 <sdake> after the meeting i'll remove after you add
22:24:09 <sdake> or we can do now
22:24:14 <sdake> have links?
22:24:18 <adrian_otto> will do now
22:24:27 <sdake> i need to remove my -2 i think
22:24:41 <sdake> i don't think you can
22:25:03 <adrian_otto> done
22:25:15 <adrian_otto> mine are on, so remove yours at your leisure.
22:25:27 <adrian_otto> next work item discussion
22:25:29 <adrian_otto> 2) Coordinate synchronized documentation change with this patch:
22:25:34 <sdake> got it
22:25:41 <sdake> its done
22:25:49 <adrian_otto> #link https://review.openstack.org/173763 Update Kubernetes version for supporting v1beta3.
22:26:07 <adrian_otto> so this sounds like the patch may be overtaken by events
22:26:08 <madhuri____> No response from them yet
22:26:10 <adrian_otto> and may not be needed
22:26:25 <adrian_otto> but I was reluctant to vote on this without further clarity
22:26:27 <sdake> events?
22:26:28 <madhuri____> Yes may be not now
22:26:46 <adrian_otto> sdake: the need may be obviated by a new default
22:27:03 <sdake> we can sync up after meeting
22:27:08 <sdake> i'm out of the loop on that discussion
22:27:13 <adrian_otto> so my question is what harm would come from merging this?
22:27:22 <adrian_otto> even if the default has changed?
22:27:39 <madhuri____> The Kubernetes cluster has v0.11.0
22:27:46 <adrian_otto> as long as that URI remains working in k8s, then we can use that, right?
22:27:51 <sdake> version incompatible
22:28:00 <sdake> but we got rid of the kubernetes kubectl calls right
22:28:01 <madhuri____> But after this merge, kubectl will be the 0.15.0 release
22:28:05 <sdake> via cli?
22:28:15 <adrian_otto> this is about transitioning from cli to API calls
22:28:20 <sdake> right
22:28:21 <madhuri____> Yes after we merge our API code
22:28:34 <sdake> madhuri can you confirm with the api code merge, cli will no longer be used?
22:28:40 <madhuri____> In that case we don't need kubectl
22:28:59 <madhuri____> We can remove it from dev guide
22:29:03 <adrian_otto> madhuri____: in that case I'd like to see a quickstart update in the patch too
22:29:11 <madhuri____> Ok sure
22:29:12 <sdake> can you confirm this will land by 4/25 madhuri?
22:29:12 <adrian_otto> exactly, so that happens all at once
22:29:42 <madhuri____> I will mail on the ml the issues currently with the patch.
22:29:56 <adrian_otto> ok, cool. Thanks madhuri____
22:29:57 <sdake> madhuri perfect
22:30:01 <sdake> can we get an action
22:30:05 <madhuri____> I will be submitting the patch today
22:30:19 <sdake> just for followup
22:30:54 <adrian_otto> #madhuri____ to begin a ML thread to explain approach for migration off of kubectl onto the k8s API. See: https://review.openstack.org/173763
22:30:58 <adrian_otto> #undo
22:30:59 <openstack> Removing item from minutes: <ircmeeting.items.Link object at 0xad102d0>
22:31:13 <adrian_otto> #link https://review.openstack.org/173763 Update Kubernetes version for supporting v1beta3.
22:31:16 <madhuri____> To track issues merging v1beta3 in k8s 0.11.0
22:31:50 <adrian_otto> #action madhuri____ to begin a ML thread to track issues merging v1beta3 in k8s 0.11.0. See: https://review.openstack.org/173763
22:31:53 <madhuri____> This needs to be an action item
22:31:55 <sdake> does ## name act as #action?
22:31:56 <adrian_otto> is that action fair?
22:32:35 <madhuri____> The review link should change
22:32:55 <adrian_otto> that's fine. It's only informative.
22:32:57 <madhuri____> https://review.openstack.org/#/c/170414/ adrian_otto
22:33:01 <madhuri____> This one
22:33:01 <sdake> does #madhuri set an action?
22:33:13 <adrian_otto> sdake: no, that was an error
22:33:18 <juggler> he undid that
22:33:40 <adrian_otto> we did record the action, so I'm going to advance topics unless there is more to cover on this now.
22:33:42 <sdake> then redid it juggler
22:34:08 <juggler> hmm I only see one #madhuri*
22:34:20 <adrian_otto> we got it.
22:34:26 <sdake> if people are confused i guess they can read the logs :)
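[Editor's note: the kubectl-to-API migration discussed above amounts to replacing CLI invocations with REST calls against the k8s apiserver. A hedged sketch, assuming a v1beta3 apiserver reachable on the default port 8080; the variable name is illustrative.]

    # list pods, instead of "kubectl get pods"
    curl http://${K8S_MASTER_IP}:8080/api/v1beta3/namespaces/default/pods
    # create a pod from a JSON manifest, instead of "kubectl create -f pod.json"
    curl -X POST -H "Content-Type: application/json" --data @pod.json \
        http://${K8S_MASTER_IP}:8080/api/v1beta3/namespaces/default/pods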
22:34:41 <adrian_otto> 3) Discuss where the fix for this should live:
22:34:43 <adrian_otto> #link https://bugs.launchpad.net/magnum/+bug/1446372
22:34:43 <openstack> Launchpad bug 1446372 in Magnum "Bays spawned from devstack dont have external network access" [Undecided,New]
22:34:53 <adrian_otto> this one was a question raised by apmelton
22:35:05 <sdake> we decided on irc: dev docs
22:35:05 <apmelton> so sdake had an interesting take on this
22:35:32 <sdake> docker/docs:)
22:35:48 <apmelton> basically we'll add some documentation that demonstrates the usage of local.sh to run the command
22:36:17 <adrian_otto> what about making the masquerade a devstack module
22:36:35 <adrian_otto> so you decide if you want it using a configuration directive in localrc or whatever
22:37:08 <sdake> localrc can have additional configuration so I don't think that will work
22:37:34 <apmelton> adrian_otto: I think local.sh is going to be fairly straightforward
22:37:36 <adrian_otto> I suppose magnum users are already following a set of directions
22:37:47 <adrian_otto> so we could just have this as one of the steps
22:38:03 <sdake> yup that is the proposal
22:38:20 <adrian_otto> I can live with that
22:38:28 <juggler> step in the quickstar or the contributing page steps?
22:38:33 <adrian_otto> apmelton: your perspective?
22:38:35 <juggler> quickstart, correction
22:39:11 <adrian_otto> ok, any opposing views to consider?
22:39:12 <apmelton> adrian_otto: I think using local.sh is no more complex than what we already instruct people to do with local.conf
22:39:26 <adrian_otto> ok, I'm convinced
22:39:47 <sdake> 3 +2s, looks like we have a winner :)
22:39:49 <apmelton> juggler: I think I'll be in the quickstart
22:40:03 <juggler> cool
22:40:39 <apmelton> rather, I think it makes the most sense there
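[Editor's note: the local.sh approach agreed above would look roughly like the sketch below; devstack runs local.sh once after stack.sh completes. The subnet and interface names are assumptions and would need to match the local devstack network.]

    #!/bin/bash
    # local.sh: give bays external network access by NATing their private
    # subnet out the host's public interface (values here are examples)
    sudo iptables -t nat -A POSTROUTING -s 192.168.0.0/24 -o eth0 -j MASQUERADE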
22:40:44 <adrian_otto> ok, next work item
22:40:50 <adrian_otto> 4) Discuss what to do about replacePod API
22:40:57 <adrian_otto> #link https://review.openstack.org/175784 Remove duplicate replacePod API
22:41:10 <adrian_otto> this patch looked to me like it was turning some functions into a docstring
22:41:37 <adrian_otto> so I was going to leave remarks on that basis, but then I started thinking about the root cause for this
22:41:49 <adrian_otto> and that needs to be clearly expressed in a bug
22:41:54 <madhuri____> I had discussed this with a google guy
22:42:08 <adrian_otto> we have this:
22:42:12 <adrian_otto> #link https://bugs.launchpad.net/magnum/+bug/1446529
22:42:12 <madhuri____> and he said this is an issue in their swagger-spec
22:42:12 <openstack> Launchpad bug 1446529 in Magnum "pod-update fail with 404 status for an existing pod" [Undecided,In progress] - Assigned to Madhuri Kumari (madhuri-rai07)
22:42:13 <sdake> madhuri: which google guy?
22:42:29 <madhuri____> Nikhil Jindal
22:42:34 <adrian_otto> so this code came from swagger?
22:42:36 <madhuri____> He is working on swagger-spec
22:42:58 <madhuri____> Yes swagger-spec from k8s
22:43:31 <adrian_otto> so to understand this, k8s is busted, right?
22:43:45 <adrian_otto> and we want a magnum workaround for that?
22:43:49 <madhuri____> Once it gets fixed upstream we can remove this fix
22:44:00 <sdake> wfm
22:44:00 <madhuri____> Yes adrian_otto
22:44:09 <adrian_otto> ok, but let's not do this as a docstring
22:44:22 <madhuri____> Ok I will remove it then
22:44:22 <adrian_otto> is there a cleaner way?
22:44:26 <sdake> this is what stable/kilo exists for
22:44:45 <adrian_otto> I'm fine just commenting that section of code if we expect to be putting it back rather soon
22:44:50 <madhuri____> To remove the method is a cleaner way
22:44:59 <adrian_otto> if not, record the removed code in a tech-debt bug ticket
22:45:10 <adrian_otto> and let's just remove that code for now
22:45:19 <adrian_otto> we can easily put it back once the upstream fix lands
22:45:22 <madhuri____> I will create a bug ticket for it
22:45:34 <adrian_otto> thanks.
22:45:47 <sdake> madhuri++
22:45:58 <madhuri____> Thanks
22:45:59 <adrian_otto> so the https://review.openstack.org/175784 patch can be revised to just be a full removal
22:46:16 <madhuri____> Yes
22:46:22 <adrian_otto> and let's update the commit message to explain where we are archiving this as tech debt to repay when upstream is working again.
22:46:28 <adrian_otto> everyone okay with this?
22:46:36 <madhuri____> Ok sure
22:46:47 <juggler> ok here
22:46:51 <sdake> +1
22:46:54 <fangfenghua> +1
22:46:58 <madhuri____> I am ok with this :)
22:47:04 <adrian_otto> ok, I am going to open us for Open Discussion, and let juggler have the first go.
22:47:07 <adrian_otto> #topic Open Discussion
22:47:27 <juggler> offhand, are ppl here primarily testing magnum on baremetal systems?
22:47:35 <juggler> i'm wondering if we should note on one of the contributing pages that using a vm (e.g. VirtualBox on Windows) is not recommended
22:47:53 <juggler> thoughts/input?
22:47:54 <adrian_otto> for trying out magnum it really should not matter
22:48:24 <adrian_otto> just wherever you can successfully run devstack, right?
22:48:35 <apmelton> so I guess my question is, has anyone actually successfully run magnum in a vm
22:48:39 <sdake> disagree with vm approach - will give people a negative view of performance
22:48:53 <apmelton> adrian_otto: what is your definition of 'succesfully run devstack'?
22:49:21 <adrian_otto> apmelton: to have a working openstack cloud, started by the stack.sh script
22:49:22 <juggler> yeah, i'm trying to get devstack to run in a vm and still having issues. wondering if it's even possible or has been attempted
22:49:25 <sdake> dev-quickstart should be runnable in 20 minutes
22:49:30 <adrian_otto> in whatever environment you choose to run it in
22:49:36 <juggler> at least with the current devstack+magnum
22:49:40 <adrian_otto> performance expectations should be shaped with docs
22:49:57 <mfalatic> minimum required hardware should be as well.
22:50:00 <adrian_otto> but devstack exists for a reason, and it's most commonly deployed in VM's for convenience.
22:50:23 <mfalatic> (If you're going to try this in Devstack, that is)
22:50:33 <fangfenghua> I deployed successfully in a vm
22:50:39 <adrian_otto> is the fact that devstack takes a long time to run in a VM the source of your concern, sdake ?
22:50:39 <suro-patz> I learned from a couple of iterations that the VM should have at least 16G mem to host devstack for magnum
22:50:42 <sdake> in vms will take hours to deploy iirc
22:50:44 <apmelton> adrian_otto: but how many people doing that are doing anything more than running cirros
22:50:52 <sdake> adrian_otto yes
22:51:00 <apmelton> cirros instances*
22:51:36 <sdake> cirros is *not* one of our os choices
22:52:04 <thomasem> sdake, are you typing with chopsticks?
22:52:13 <sdake> been up for two days
22:52:16 <sdake> need to hit the rack
22:52:16 <thomasem> :( aww
22:52:17 <adrian_otto> we should recommend the ideal hardware match, and disclaim YMMV if you choose an alternate approach that is a lower performance option.
22:52:18 <apmelton> sdake: what I'm asking is, how many people are actually consuming the products of the services in devstack rather than just the services
22:52:40 <adrian_otto> if people ignore that and deploy on VMs, and don't like the performance, that's their choice, right?
22:52:52 <juggler> and in the long term, whether we will be supporting folks on VMs is also a potential consideration
22:53:01 <juggler> or something like that
22:53:05 <sdake> i think running magnum quickstart in vms will take hours
22:53:21 <adrian_otto> on crummy laptops maybe
22:53:24 <apmelton> adrian_otto: I think if we tell people they can run magnum+devstack in vms we're going to constantly be dealing with people who can't get it to work
22:53:39 <juggler> I resemble that remark adrian_otto! :)
22:53:43 <suro-patz> apmelton: +1
22:53:45 <apmelton> adrian_otto: I've tried magnum+devstack on our perf1-16G flavor
22:53:49 <apmelton> even that didn't work
22:54:00 <sdake> apmelton +1
22:54:02 <mfalatic> How much disk/memory/cpu are people using where they actually get it to work in Devstack?
22:54:14 <juggler> mfalatic in a vm or baremetal or both?
22:54:24 <mfalatic> Hmm, either.
22:54:37 <adrian_otto> another approach is to have a recommended setup for cloud operators (how to wire this to your cloud)
22:54:46 <adrian_otto> and a how to develop on magnum
22:54:47 <apmelton> mfalatic: https://gist.github.com/ramielrowe/8a5392d707c5fe217e49
22:54:52 <apmelton> that's with a 3 node cluster
22:54:57 <apmelton> 3 node bay*
22:54:57 <adrian_otto> we have two documents for this now
22:54:57 <suro-patz> mfalatic: I found out that on VM it needs at least 16G to bring up all the instances - but I ran into the networking issue
22:55:12 <mfalatic> oic ok.
22:55:17 <thomasem> wow
22:55:37 <mfalatic> Will just upgrade the memory in my MacBoo---- oh.
22:55:40 <sdake> ??
22:55:53 <thomasem> Just the amount of mem used
22:55:53 <adrian_otto> wait, devstack is taking 16GB now?
22:56:02 <juggler> ouch
22:56:22 <thomasem> Is that with a bunch of other peripheral services?
22:56:28 <thomasem> that don't need to be on?
22:56:42 <suro-patz> adrian_otto: when I ran devstack on a 8G flavor, it was not able to bring up all the three instances together due to memory issue
22:57:01 <adrian_otto> three instances of what?
22:57:19 <suro-patz> three instances of nested VMs to form the cluster
22:57:26 <fangfenghua> Bay node
22:57:30 <fangfenghua> I guess
22:57:30 <adrian_otto> so you are talking about starting the bay
22:57:56 <adrian_otto> ok, so maybe we need a smaller flavor option for running the k8s cluster?
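[Editor's note: a smaller bay-node flavor could be registered along the lines below; the name and sizes are illustrative, not an agreed default.]

    # nova flavor-create <name> <id> <ram-MB> <disk-GB> <vcpus>
    nova flavor-create m1.k8s-small auto 1024 10 1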
22:57:56 <juggler> we may be getting close to moving the discussion, btw
22:58:12 <apmelton> 2 minutes left
22:58:27 <sdake> bug triage needs discussion
22:58:27 <juggler> correct
22:58:32 <adrian_otto> ok, let's wrap up here. juggler, please take this to the ML
22:58:50 <adrian_otto> let's get a well reasoned solution
22:59:12 <apmelton> adrian_otto: we may also want to see how the heat developers handle this
22:59:16 <sdake> developers (our main constituency atm) are focused around devstack on bare metal
22:59:26 <adrian_otto> our next meeting is Tuesday 2015-04-28 at 1600 UTC.
22:59:27 <apmelton> I'm sure they have to have functioning instances in their development environments
22:59:45 <sdake> let's go to #openstack-containers for overflow please
22:59:49 <adrian_otto> thanks everyone for attending. See you all next week.
22:59:51 <apmelton> o/
22:59:55 <adrian_otto> #endmeeting