22:00:42 <adrian_otto> #startmeeting containers
22:00:42 <openstack> Meeting started Tue Jun  2 22:00:42 2015 UTC and is due to finish in 60 minutes.  The chair is adrian_otto. Information about MeetBot at http://wiki.debian.org/MeetBot.
22:00:43 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
22:00:45 <openstack> The meeting name has been set to 'containers'
22:00:58 <adrian_otto> #link https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2015-06-02_2200_UTC Our Agenda
22:01:01 <adrian_otto> #topic Roll Call
22:01:05 <adrian_otto> Adrian Otto
22:01:05 <apmelton> Andrew Melton
22:01:12 <rpothier> Rob Pothier
22:01:13 <juggler> Perry Rivera
22:01:14 <yuanying-alt> OTSUKA, Motohiro
22:01:14 <Tango|2> Ton Ngo
22:01:15 <fawadkhaliq> Fawad Khaliq
22:01:16 <bich_le_> Bich Le
22:01:17 <hongbin> o/
22:01:20 <eghobo> Egor Guz
22:01:24 <tcammann_> Tom Cammann
22:01:31 <mfalatic> o/
22:01:36 <dane_leblanc> Dane LeBlanc
22:01:37 <thomasem> Thomas Maddox
22:01:47 <madhuri_> Madhuri Kumari
22:01:53 <apuimedo> Antoni Segura
22:01:58 <mfalatic> Martin Falatic
22:02:19 <adrian_otto> hello apmelton rpothier juggler yuanying-alt Tango|2 fawadkhaliq bich_le_ hongbin eghobo tcammann mfalatic dane_leblanc thomasem madhuri_ apuimedo mfalatic
22:02:30 <adrian_otto> I think that sets the record for the longest hello reply
22:02:43 <fawadkhaliq> adrian_otto: :)
22:02:46 <juggler> !
22:02:50 <jay-lau-513> o/
22:02:54 <bich_le_> Hello
22:03:08 <adrian_otto> hello jay-lau-513
22:03:11 <brendenblanco> Brenden Blanco
22:03:26 <jay-lau-513> adrian_otto hi
22:03:36 <tobe> Hi all
22:03:36 <adrian_otto> hi brendenblanco
22:04:30 <adrian_otto> hi tobe
22:04:37 <adrian_otto> #topic Announcements
22:04:56 <adrian_otto> 1) I will be on a vacation day next week (camping, limited email probably)
22:05:16 <adrian_otto> so I will be seeking an alternate chair for next week at 1600 UTC
22:05:41 <adrian_otto> please PRIVMSG me for that, and we'll mark the agenda with the name of the pro-tem chair
22:06:05 <adrian_otto> 2) We now have a low-hanging-fruit tag referenced on our contributing page
22:06:11 <adrian_otto> #link https://wiki.openstack.org/wiki/Magnum/Contributing Contributing to Magnum
22:06:25 <apuimedo> cool
22:06:30 <adrian_otto> #link https://bugs.launchpad.net/magnum/+bugs?field.tag=low-hanging-fruit Low Hanging Fruit
22:06:36 <adrian_otto> so if you are a new contributor, check that
22:06:41 <tobe> Have a great time adrian_otto :)
22:06:44 <adrian_otto> I'll do my best to try to keep that fed
22:06:50 <adrian_otto> thanks tobe
22:07:09 <adrian_otto> also, I linked the "7 Habits" talk from the summit on the Contributing page
22:07:21 <adrian_otto> so if you missed that, I suggest you check that out.
22:07:34 <juggler> ^^very informative
22:07:47 <rbradfor> o/
22:07:58 <suro-patz> o/
22:08:04 <adrian_otto> that concludes our prepared announcements. Other announcements from team members today?
22:08:32 <adrian_otto> remember that milestone 11 is set for June 25
22:08:36 <juggler> survey announcement returns were low...about 3 respondents. would you like to hear feedback so far?
22:08:55 <adrian_otto> juggler: let's revisit that in Open Discussion
22:08:59 <adrian_otto> you will be first up
22:09:14 <tobe> Thanks adrian_otto
22:09:25 <juggler> np!
22:09:28 <adrian_otto> #topic Review Action Items
22:09:32 <adrian_otto> (none)
22:09:36 <hongbin> I have one
22:09:40 <hongbin> #link https://blueprints.launchpad.net/magnum/+spec/mesos-bay-type
22:09:42 <adrian_otto> #topic Blueprint/Task Review
22:09:52 <adrian_otto> hongbin, proceed.
22:10:15 <hongbin> There are two questions
22:10:27 <hongbin> 1. Which OS for hosting mesos
22:10:36 <hongbin> 2. Which Marathon object to manage
22:10:53 <adrian_otto> what is normally used, or does it express any preference at all?
22:11:09 <hongbin> For #1, there are several choices: Ubuntu, CentOS, CoreOS
22:11:19 <eghobo> hongbin: we use Ubuntu and CentOS
22:11:34 <eghobo> I don't think a build for CoreOS exists
22:12:01 <adrian_otto> I don't think CoreOS makes sense, unless it's actually a container image that gets run
22:12:02 <jay-lau-513> hongbin Ubuntu and CentOS might be better
22:12:04 <tobe> We prefer CentOS for production, and Ubuntu for development
22:12:12 <eghobo> it was discussed on the mailing list but I don't remember any actions from it
22:12:15 <jay-lau-513> Mesosphere also has some examples about that
22:12:26 <hongbin> CoreOS works by using container image for mesos
22:12:46 <adrian_otto> that approach should work for all three then
22:13:09 <eghobo> really, we would like to run Mesos in a container?
22:13:10 <adrian_otto> and would be consistent with how we deploy Swarm bays
22:13:40 <adrian_otto> the Mesos container would have access to the Docker daemon on the host
22:13:50 <adrian_otto> so containers would not necessarily need to be nested
22:14:12 <hongbin> That is true
22:14:14 <eghobo> I think mesos needs to be installed the same way as kub
22:14:24 <jay-lau-513> adrian_otto The problem is that we need to hardcode the docker server host into the heat template
22:14:46 <adrian_otto> may I please see by raise of hands (o/) who is running Mesos today?
22:14:52 <apuimedo> so we'd pass the docker socket to the mesos container?
22:14:58 <jay-lau-513> +1
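[Editor's note: a minimal sketch of what passing the Docker socket into a Mesos container could look like, written with the Docker SDK for Python. The image name and the host-networking choice are illustrative assumptions, not something decided in this meeting.]

    import docker  # Docker SDK for Python

    client = docker.from_env()
    # Bind-mount the host's Docker socket so the Mesos agent inside the
    # container talks to the host daemon directly; containers it launches
    # become siblings on the host rather than nested containers.
    client.containers.run(
        "mesosphere/mesos-slave",  # illustrative image name
        detach=True,
        network_mode="host",
        volumes={"/var/run/docker.sock": {"bind": "/var/run/docker.sock",
                                          "mode": "rw"}},
    )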
22:15:05 <adrian_otto> let's at least get all the input from those users and make something initially that works for them
22:15:06 <jay-lau-513> o/
22:15:10 <adrian_otto> and we can iterate on that
22:15:12 <eghobo> o/
22:15:15 <hongbin> o/
22:15:48 <amit213> o/ (not in production. only in lab)
22:16:17 <adrian_otto> ok, so jay-lau-513, eghobo, hongbin, and amit213: can the four of you discuss this in #openstack-containers when we adjourn and decide what our bay templates should do to begin with?
22:16:27 <adrian_otto> we can have alternate templates in our contrib directory
22:16:46 <adrian_otto> so maybe you pick a CentOS or Ubuntu one first, and then add others in contrib
22:16:59 <adrian_otto> and allow operators to give us feedback on which should be the default
22:17:06 <adrian_otto> fair?
22:17:16 <madhuri_> +1
22:17:21 <jay-lau-513> +1
22:17:25 <hongbin> sure
22:17:35 <eghobo> yep, we don't use atomic for example because of lack of tools
22:17:36 <amit213> +1
22:17:58 <adrian_otto> ok, cool, so hongbin, on to question #2
22:18:10 <tobe> +1
22:18:12 <hongbin> There are two options for #2
22:18:18 <adrian_otto> "Which marathon object to manage"
22:18:28 <hongbin> 1. Import Marathon App to Magnum
22:18:37 <jay-lau-513> hongbin I see your option in ML, sorry that I forget to reply
22:18:46 <hongbin> 2. Implement Magnum container by using Marathon app
22:19:01 <hongbin> jay-lau-513: NP
22:19:08 <eghobo> hongbin: could you elaborate on what you mean by a Marathon object?
22:19:16 <hongbin> Marathon app is like a container
22:19:25 <hongbin> but with more features
22:19:42 <hongbin> such as scheduling constraints and the number of replicas
22:19:55 <eghobo> not sure I understand; Marathon is a framework with a web ui
22:20:00 <adrian_otto> eghobo: kubernetes has pods. Mesos has other objects that we may or may not want to model in Magnum
22:20:09 <apmelton> hongbin: could it be implemented as a container like the swarm conductor uses
22:20:20 <apmelton> just with extra, mesos-only, params?
22:20:20 <hongbin> eghobo: yes, it has Rest API and UI
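[Editor's note: a sketch of the kind of Marathon app being discussed, posted to Marathon's v2 REST API with the requests library. The host name and field values are illustrative; the Marathon docs are authoritative for the schema.]

    import requests

    # A Marathon "app" resembles a container, plus scheduling extras:
    # a replica count ("instances") and placement constraints.
    app = {
        "id": "/web",
        "cpus": 0.5,
        "mem": 256,
        "instances": 3,
        "constraints": [["hostname", "UNIQUE"]],  # one replica per host
        "container": {
            "type": "DOCKER",
            "docker": {"image": "nginx", "network": "BRIDGE"},
        },
    }
    resp = requests.post("http://marathon.example:8080/v2/apps", json=app)
    resp.raise_for_status()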
22:20:44 <eghobo> adrian_otto: mesos has only applications
22:20:57 <jay-lau-513> hongbin the pre-condition is that you may want to install Marathon first if you want to use its api
22:21:12 <adrian_otto> eghobo: ok, got it. (I do not pretend to be a Mesos expert)
22:21:23 <hongbin> apmelton: That would be possible.
22:21:40 <apmelton> I know we're trying to expose scheduling for swarm as well
22:21:52 <apmelton> perhaps you can work with diga to abstract it enough that it works for both
22:22:06 <eghobo> I think we can start apps through Marathon the same way as Kub
22:22:26 <hongbin> jay-lau-513: Yes, we install Marathon first
22:23:24 <adrian_otto> maybe the right approach here is to have a bay template for Mesos+Marathon, and alternate templates in contrib, and don't add any new resource types in Magnum at all for Mesos
22:23:29 <tobe> I'm sorry. Have we decided to just support Marathon?
22:23:41 <adrian_otto> tobe: no, there are just multiple choices
22:23:55 <adrian_otto> we will have docs that show how to specify an alternate template
22:23:58 <eghobo> adrian_otto: +1 for Marathon
22:24:00 <tobe> Yes, that's what I mean
22:24:06 <hongbin> tobe: We decided to let magnum talk to marathon
22:24:08 <adrian_otto> so you can get different arrangements of Mesos
22:24:41 <hongbin> adrian_otto: OK
22:24:43 <jay-lau-513> adrian_otto Coming back to my original question: how do we enable end users to use this bay via the magnum API?
22:25:17 <apmelton> jay-lau-513: I think we'd prefer users to use native clients
22:25:24 <adrian_otto> jay-lau-513: I'm suggesting that to begin with, as a first iteration we only support magnum baymodel-create and magnum bay-create for mesos bay types
22:25:33 <adrian_otto> and then use native clients past that point
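[Editor's note: the first iteration described above would mirror the existing bay workflow, roughly as below. The flags follow the Magnum quickstart of the period and are indicative only; the image and keypair names are placeholders.]

    magnum baymodel-create --name mesosmodel \
                           --image-id ubuntu-mesos \
                           --keypair-id testkey \
                           --external-network-id public \
                           --coe mesos
    magnum bay-create --name mesosbay --baymodel mesosmodel --node-count 2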
22:25:41 <jay-lau-513> adrian_otto apmelton fair enough
22:25:52 <apuimedo> sounds good
22:25:54 <tobe> Marathon is the framework most used to schedule containers on mesos. It's reasonable to support it.
22:25:55 <jay-lau-513> ok, i see. thanks
22:26:00 <adrian_otto> and as the use cases become clear to us, then follow up on that to add additional Mesos resources as needed (application, whatever)
22:26:14 <hongbin> ok
22:26:23 <tobe> But there are other frameworks that can schedule containers on mesos. We should be extensible for them
22:26:37 <adrian_otto> and base that on input we get from those using the Mesos bays
22:26:49 <adrian_otto> tobe, yes that is our intent
22:27:24 <adrian_otto> hongbin, was that the input you were hoping for?
22:27:32 <hongbin> yes
22:27:34 <yuanying-alt> Currently we don't have a use case for the mesos bay type, right?
22:27:47 <adrian_otto> we have not documented that in the Magnum spec yet
22:28:08 <adrian_otto> yuanying-alt: we should discuss adding that in a way we can all agree on
22:28:12 <apmelton> we have people running mesos and they want to have magnum manage the clusters, that sounds like a use case
22:28:15 <apmelton> to me at least
22:28:42 <adrian_otto> yes, magnum would be handy for scaling out the bays that run the mesos clusters
22:29:14 <eghobo> apmelton: we have similar use case - playground for dev teams
22:29:38 <apmelton> this might be getting a bit off topic, but have we discussed mechanisms to actually gather metrics on the hosts to use for scaling?
22:30:59 <adrian_otto> apmelton: no, that's not something we have spent much time exploring yet
22:31:10 <apmelton> alright, I was just wondering
22:31:17 <adrian_otto> we did talk about using a scaling group, so a bay's scale up and scale down would each have a webhook
22:31:37 <adrian_otto> so we could have any external control system call those webhooks
22:31:54 <adrian_otto> but the metrics will be unique to each type of bay
22:32:09 <apmelton> would they though?
22:32:19 <adrian_otto> the total sum of -m (memory limit) values is one obvious one
22:32:24 <apmelton> at this point, the underlying thing spawning the containers is docker
22:32:37 <adrian_otto> well, with k8s and swarm, yes
22:33:24 <adrian_otto> It might make sense to contribute a spec for that
22:33:24 <eghobo> with mesos you can go with just cgroups
22:33:38 <adrian_otto> so we could collaborate on the plan a bit
22:33:40 <apmelton> adrian_otto: agreed there, there's going to be lots of options
22:34:18 <adrian_otto> ideally to break down the approach into achievable steps toward the autoscaling bay we want
22:34:51 <adrian_otto> it would be awesome if I could demo an autoscaling bay (by sum of -m values) in Tokyo
22:35:33 <adrian_otto> so that once I fall below a threshold of available memory in the bay, that the scale up webhook is called
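[Editor's note: a hypothetical sketch of the scale-up check just described; the function and parameter names are invented for illustration. Heat's pre-signed scaling webhooks fire on a bare POST.]

    import requests

    def maybe_scale_up(memory_limits_mb, bay_capacity_mb,
                       threshold_mb, scale_up_webhook):
        # Free memory = bay capacity minus the sum of the containers'
        # -m (memory limit) values; below the threshold, scale up.
        reserved = sum(memory_limits_mb)
        if bay_capacity_mb - reserved < threshold_mb:
            requests.post(scale_up_webhook)  # pre-signed Heat webhook URL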
22:35:40 <juggler> how far off is Tokyo?
22:35:51 <apmelton> late october
22:35:52 <adrian_otto> 6 months. Last week of October.
22:36:34 <tobe> K8s doesn't support autoscaling, right?
22:36:58 <juggler> ah
22:36:59 <apmelton> tobe: I'm not sure anything supports auto scaling of the underlying infrastructure at this point
22:37:12 <apmelton> that's kinda one of the big adds for magnum
22:37:33 <eghobo> you can add/remove resources to a kub cluster and it should handle it
22:37:35 <adrian_otto> right, apmelton
22:37:49 <hongbin> k8s plans to support autoscaling pods
22:37:52 <hongbin> #link https://github.com/GoogleCloudPlatform/kubernetes/blob/master/docs/proposals/autoscaling.md
22:38:11 <tobe> If the underlying has implemented, we can re-use its api
22:38:24 <tobe> Thanks hongbin
22:38:49 <adrian_otto> let's take that for homework, and get our autoscaling bay blueprint updated
22:38:59 <adrian_otto> I wanted to touch on one of our ML threads:
22:39:02 <adrian_otto> http://lists.openstack.org/pipermail/openstack-dev/2015-June/065261.html
22:39:18 <tobe> And magnum has the unified interfaces for all these frameworks
22:39:19 <adrian_otto> #link http://lists.openstack.org/pipermail/openstack-dev/2015-June/065261.html Does Bay/Baymodel name should be a required option when creating a Bay/Baymodel
22:40:00 <madhuri_> I feel name should not be a required parameter
22:40:05 <adrian_otto> perhaps we can explore that question a bit here
22:40:11 <jay-lau-513> Thanks adrian_otto
22:40:14 <eghobo> +1 for required
22:40:21 <adrian_otto> both madhuri_ and I were opposed to requiring names
22:40:38 <adrian_otto> the argument for names is so human usability goes up
22:40:39 <apmelton> adrian_otto: what's the alternative?
22:40:48 <jay-lau-513> name is more readable than UUID
22:40:50 <madhuri_> It makes bay-create harder
22:41:23 <adrian_otto> implementation of this would essentially require magnum to raise an exception if you attempt a magnum baymodel-create or magnum bay-create with no name assigned
22:41:26 <madhuri_> So it depends on the user: if he doesn't pass a name
22:41:33 <madhuri_> he will have to use the uuid
22:41:56 <adrian_otto> a required name with a uniqueness requirement is just a user-supplied uuid
22:42:02 <jay-lau-513> but then when he wants to operate on a bay, he needs to retrieve the uuid first
22:42:06 <eghobo> bay-create needs to be hard ;) because you are building a cluster which will potentially stay around for a long time
22:42:32 <jay-lau-513> this will bring trouble and be time consuming
22:42:46 <apmelton> I think we should go with the nova model, where names aren't unique, and if there are more than one, we require the use of a uuid
22:43:05 <jay-lau-513> take nova as an example: I prefer to use the name to delete a VM, not the uuid, as I do not want to retrieve the UUID every time
22:43:05 <adrian_otto> jay-lau-513: you get the uuid in the response to the bay-create
22:43:12 <juggler> would having the uuid as a default be troublesome? that is, it's used implicitly unless a name is specified?
22:43:14 <jay-lau-513> and I cannot remember it
22:43:38 <madhuri_> apmelton: In case of multiple bays with the same name, the user will have to use the uuid
22:43:40 <adrian_otto> if you create a docker container and do not specify a name, one is generated for you
22:43:44 <eghobo> apmelton: I think we should consider unique names per tenant
22:43:46 <adrian_otto> but names are not required
22:43:57 <jay-lau-513> adrian_otto yes, but I may not be able to remember it ;-(
22:43:58 <adrian_otto> however names are enforced to be unique by docker
22:44:04 <eghobo> but docker will generate them ;)
22:44:11 <madhuri_> That will be similar to the case of not passing a name while creating a bay
22:44:14 <jay-lau-513> madhuri_ right, need uuid for such case
22:44:19 <adrian_otto> so maybe we should have Magnum follow that same pattern
22:44:30 <adrian_otto> auto-generated names if the client does not supply one
22:44:36 <Tango|2> I think fewer mandatory things is more user friendly.
22:44:43 <adrian_otto> for BayModel and Bay resources
22:44:51 <Tango|2> User can always provide name
22:44:55 <madhuri_> + Tango|2
22:44:57 <adrian_otto> but error on Name duplicates
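[Editor's note: a sketch of the docker-style pattern proposed here: auto-generate a human-readable name when the client supplies none, and enforce uniqueness. The word lists are abbreviated samples; Docker's generator pairs an adjective with a scientist's surname.]

    import random

    ADJECTIVES = ["admiring", "brave", "clever"]  # sample word list
    SURNAMES = ["curie", "hopper", "turing"]      # sample word list

    def generate_unique_name(existing_names):
        # Retry until the generated name does not collide with an
        # existing bay/baymodel name in the tenant.
        while True:
            name = "%s_%s" % (random.choice(ADJECTIVES),
                              random.choice(SURNAMES))
            if name not in existing_names:
                return name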
22:45:06 <tobe> +1 for Tango|2
22:45:11 <jay-lau-513> adrian_otto the auto-generated names are similar to UUIDs, difficult to remember
22:45:11 <juggler> or maybe a prefixed name, e.g. one that starts with something the user can then pattern-search for?
22:45:29 <eghobo> adrian_otto: do you mean docker or nova pattern?
22:45:40 <juggler> e.g. foo_38bc
22:46:27 <madhuri_> adrian_otto: I am not sure whether this would be a good idea or not. Can we allow users to update name in magnum?
22:46:39 <suro-patz> things that are often created/destroyed should not mandate a name, e.g. docker - but for things which are supposed to live longer, we had better make the name mandatory - viz. bay/baymodel
22:46:53 <madhuri_> So that user can update name at later time also
22:47:02 <adrian_otto> eghobo: docker pattern, but similar to nova
22:47:11 <jay-lau-513> madhuri_ imho, the name should not be updated
22:47:21 <adrian_otto> madhuri_: sure, they can do a bay update replace name=whatever
22:47:25 <adrian_otto> what?
22:47:29 <adrian_otto> jay-lau-513: why?
22:47:36 <madhuri_> So we have the option then
22:47:48 <madhuri_> If the user doesn't pass a name, later he can update it
22:47:59 <madhuri_> if he doesn't want to use uuid
22:48:13 <jay-lau-513> adrian_otto Are there any use cases where the name needs to be updated?
22:48:27 <madhuri_> I found one now jay-lau-513
22:48:35 <adrian_otto> what if I name it something stupid by mistake
22:48:45 <adrian_otto> and I have production stuff in that bay that I don't want to kill
22:48:50 <madhuri_> Add a name later when not given at bay-create
22:48:52 <jay-lau-513> adrian_otto I see
22:48:54 <adrian_otto> I might use bay update to fix that
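[Editor's note: the rename mentioned above would use Magnum's attribute-update syntax, roughly as follows; the bay and new name are placeholders.]

    magnum bay-update SUPER_BAY replace name=prod-bay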
22:49:05 <tobe> Now I'm confused: are we using uuid and name, or just a unique name?
22:49:28 <suro-patz> this sounds more like a use-case for "description" than name
22:49:32 <Tango|2> We should go back and understand the intended use of the name.  Is it for display purpose, or id purpose?
22:49:40 <madhuri_> tobe: Name need not be unique
22:49:42 <adrian_otto> tobe: I'm suggesting that if the user does not supply a name, we generate a human readable name, and that we enforce names to be unique
22:50:15 <hongbin> +1
22:50:18 <adrian_otto> the rationale there is if you have the same name for multiple bays, you still have to fall back to using uuids to address them anyway
22:50:22 <apmelton> adrian_otto: I don't think that's a good idea
22:50:29 <apmelton> the uniqueness on name
22:50:30 <adrian_otto> apmelton: ok, why?
22:50:45 <adrian_otto> if you assign a name to a bay SUPER_BAY
22:50:47 <tobe> adrian_otto: Just like how docker works, right? +1 for this
22:51:01 <brendenblanco> IMHO, if a name is a unique property, then it should not also be mutable...use an optional description field instead
22:51:02 <adrian_otto> and then you go to act on SUPER_BAY and you get an exception because there are two named that
22:51:15 <adrian_otto> that's worse than just using UUID to act on them to begin with
22:51:33 <eghobo> adrian_otto: +1
22:51:33 <adrian_otto> tobe: yes, that is how docker works.
22:51:38 <apmelton> lets say at some point we implement sharing of a bay between tenants
22:51:51 <madhuri_> I don't agree on this
22:51:54 <apmelton> so you go to share your TEST_BAY with a tenant, and they already have one named that
22:52:15 <adrian_otto> apmelton: I need to think on that
22:52:18 <madhuri_> When we already have uuid as unique property, then why do we want to make name also a unique property?
22:52:32 <adrian_otto> we are approaching our end time, and I want to get to Open Discussion
22:52:41 <juggler> adrian_otto: +1 uniqueness. I think people who use docker will find it intuitive to use
22:52:42 <adrian_otto> let's keep the discussion open on the ML
22:52:42 <apmelton> so before we're done with blueprints
22:52:52 <apmelton> I need to push this out to l-2 https://blueprints.launchpad.net/magnum/+spec/async-container-operations
22:52:53 <jay-lau-513> apmelton +1, different tenant can have same name bays
22:53:03 <adrian_otto> and seek consensus there after we have thought through these other use cases
22:53:38 <apmelton> I'm hoping that I can finish up what's blocking me from magnum stuff by the end of next week so I can pick this up ASAP https://blueprints.launchpad.net/magnum/+spec/secure-docker
22:53:38 <adrian_otto> we will defer other work item discussion to our next meeting, or in #openstack-containers
22:54:03 <adrian_otto> apmelton: anything magnum team members can do to help you?
22:54:22 <adrian_otto> #topic Open Discussion
22:54:29 <adrian_otto> juggler: tell us about the survey
22:54:40 <apmelton> adrian_otto: when I get some time, I'll break async-container-ops into some pieces people can pick up
22:54:43 <juggler> initial survey results here: http://paste.openstack.org/show/257490/
22:54:54 <adrian_otto> apmelton: sounds great, tx!
22:55:08 <juggler> rolled-up
22:55:25 <adrian_otto> juggler: maybe we should feature the survey on our project wiki page?
22:56:07 <adrian_otto> too few respondents to put weight on the results.
22:56:09 <juggler> +1
22:56:26 <juggler> I'll try to figure placement on the page
22:57:47 <adrian_otto> ok, good discussion. Let's keep advancing on each of these tough issues.
22:57:52 <adrian_otto> we have 3 minutes remaining.
22:57:59 <adrian_otto> any other topics that might fit?
22:58:30 <adrian_otto> Our next team meeting will be on 2015-06-09 at 1600 UTC, chair TBD.
22:58:38 <adrian_otto> thanks everyone for attending today!!
22:58:42 <thomasem> Cheers!
22:58:45 <jay-lau-513> thx
22:58:47 <apmelton> have a good evening/morning/day everyone!
22:58:55 <juggler> bug 1451678..running into a snag during git review. let me know if you have a few mins to help. thanks!
22:58:55 <openstack> bug 1451678 in Magnum trunk "Add link to dev-manual-devstack.rst into document dev-quickstart.rst" [High,Triaged] https://launchpad.net/bugs/1451678 - Assigned to P Rivera (juggler)
22:59:09 <juggler> thanks all
22:59:13 <bich_le_> Thanks, see you next time.
22:59:15 <madhuri_> Thanks all
22:59:25 <adrian_otto> #endmeeting