16:02:35 <adrian_otto> #startmeeting containers
16:02:36 <openstack> Meeting started Tue Dec  8 16:02:35 2015 UTC and is due to finish in 60 minutes.  The chair is adrian_otto. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:02:38 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:02:41 <openstack> The meeting name has been set to 'containers'
16:02:49 <adrian_otto> #link https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2015-12-08_1600_UTC Our Agenda
16:02:53 <adrian_otto> #topic Roll Call
16:02:56 <adrian_otto> Adrian Otto
16:03:07 <dane> Dane LeBlanc
16:03:08 <tbh> tbh
16:03:10 <wanghua> o/
16:03:12 <houming> HouMing Wang
16:03:12 <Tango> Ton Ngo
16:03:15 <hongbin> o/
16:03:17 <rpothier> Rob Pothier
16:03:17 <yanghy> Hongyang Yang
16:03:19 <bradjones> o/
16:03:25 <eghobo> o/
16:03:41 <daneyon_> o/
16:03:53 <adrian_otto> hello dane tbh wanghua houming Tango hongbin rpothier yanghy bradjones eghobo daneyon_
16:03:56 <juggler> Perry Rivera o/
16:04:11 <adrian_otto> hello juggler
16:04:22 <juggler> hello aotto!
16:04:22 <adrian_otto> let's begin
16:04:28 <adrian_otto> #topic Announcements
16:04:35 <adrian_otto> any announcements from team members?
16:04:57 <Kennan> o/
16:05:16 <adrian_otto> #topic Review Action Items
16:05:29 <adrian_otto> 1) adrian_otto to raise "#link https://review.openstack.org/250999 Add docker-bootstrap (wanghua)" for discussion in next week's agenda
16:05:44 <adrian_otto> this will be complete by the conclusion of today's meeting
16:06:13 <adrian_otto> I see the addition of a second item that was not part of last week's agenda, but for the sake of inclusion: Yang Hongyang to raise discussion of "#link https://review.openstack.org/#/c/252779 Remove unnecessary setting of default node_count"
16:06:38 <adrian_otto> we can address that in the subsequent agenda items, so stand by for that
16:06:52 <adrian_otto> #topic Container Networking Subteam Update (daneyon)
16:06:55 <yanghy> sorry, seems I added that to the wrong place
16:07:08 <daneyon_> i was on vacation last Thurs
16:07:17 <daneyon_> we did not have a subteam meeting
16:07:24 <adrian_otto> yanghy: no worries. we will cover it.
16:07:26 <daneyon_> we will resume this Thurs
16:07:38 <adrian_otto> ok, thanks daneyon_
16:07:42 <daneyon_> yw
16:07:45 <adrian_otto> #topic Magnum UI Subteam Update (bradjones)
16:07:49 <bradjones> hey
16:08:06 <bradjones> another one of the detail pages has merged this week
16:08:26 <bradjones> so only one left but there are some merge conflicts on it so needs a little work
16:08:48 <bradjones> there is a problem when running magnum-ui against master horizon
16:09:01 <bradjones> I'm trying to track down the issue at the moment
16:09:19 <bradjones> works fine against horizon stable so I imagine it is just a requirement change
16:09:30 <bradjones> will post a bug shortly and get a fix out asap
16:09:45 <bradjones> that's all from me
16:09:45 <adrian_otto> ok, thanks
16:10:03 <adrian_otto> #topic Essential Blueprint Updates
16:10:25 <adrian_otto> this is the part of the agenda where owners of "Essential" blueprints update us on progress
16:10:58 <adrian_otto> #link https://blueprints.launchpad.net/magnum/mitaka Mitaka Blueprints
16:11:12 <adrian_otto> we currently have 8 of them, which is too many to cover in the allocated time
16:11:41 <adrian_otto> so what I will do is just poll the team here for a moment to see if there are any of these that deserve an update or team discussion this week
16:11:58 <adrian_otto> dimtruck: maybe you?
16:12:11 <adrian_otto> for https://blueprints.launchpad.net/magnum/+spec/magnum-tempest
16:12:26 <dimtruck> hi adrian_otto
16:12:40 <adrian_otto> I marked that one as started
16:12:48 <dimtruck> update for that: still working on the plugin.  I'll have the full update next week.  It is definitely started.
16:12:51 <adrian_otto> should it be considered further along?
16:13:01 <dimtruck> is the next step "in progress"?
16:13:07 <dimtruck> (i don't really know)
16:13:22 <adrian_otto> are there other BP's in that list of 8 Essential ones that should be marked as Started as well?
16:13:38 <dimtruck> adrian_otto: good progress
16:13:43 <adrian_otto> "Good Progress" is a way to indicate it's on track for completion
16:13:55 <adrian_otto> ok, I will mark it accordingly
16:13:56 <tbh> adrian_otto, I want to discuss on https://blueprints.launchpad.net/magnum/+spec/mesos-conductor
16:13:57 <dimtruck> yes sir!  that's the one :).
16:14:11 <tbh> adrian_otto, I need some direction to get it completed
16:14:36 <adrian_otto> ok, we can do that in just a sec
16:14:40 <adrian_otto> ^^ tbh
16:14:52 <adrian_otto> any other remarks on Essential BP's before we advance?
16:15:02 <tbh> adrian_otto, sure
16:15:23 <Tango> I will mark mine as Started.  Madhuri has volunteered to help.
16:15:49 <adrian_otto> tbh, we have one to cover first, and then we will come back to yours.
16:16:07 <tbh> adrian_otto, okay
16:16:09 <adrian_otto> #link https://review.openstack.org/#/c/252779 Remove unnecessary setting of default node_count (yanghy)
16:16:32 <adrian_otto> yanghy: you're up
16:16:42 <yanghy> Hi, there's a problem:
16:17:15 <adrian_otto> #link https://bugs.launchpad.net/magnum/+bug/1465097 Related bug
16:17:15 <openstack> Launchpad bug 1465097 in python-magnumclient "Wrong node-count in bay-list" [Undecided,In progress] - Assigned to Yang Hongyang (hongyang-yang)
16:17:15 <yanghy> from the original bug report, it seems the magnum client passes None to the api when node-count is not specified
16:17:50 <yanghy> Those 3 lines were then added to address that problem
16:18:08 <adrian_otto> ok, so how can we help today?
16:18:18 <yanghy> since now we have fixed the client, the client won't pass None to api
16:18:34 <yanghy> I thought those lines can be removed
16:18:58 <Kennan> REST API can pass None
16:19:18 <Tango> I think one of the comments is that the client is different from the API
16:19:26 <yanghy> Yes, so the question here is we need to do parameter check on API side
16:19:34 <hongbin> Kennan: I think REST API cannot pass "None" (string)
16:19:36 <adrian_otto> yes, we should
16:19:41 <vilobhmm11> yanghy : why not check at the client level itself and throw an exception if None ?
16:20:07 <yanghy> vilobhmm11, client already fixed
16:20:11 <Kennan> yanghy: did you check your code works with "None"?
16:20:19 <yanghy> there's no way now that client will pass None
16:20:21 <yanghy> to api
16:20:34 <adrian_otto> we could have a func test for it
16:20:39 <Kennan> yanghy: mean REST API
16:21:16 <Tango> Can a different client pass None to the API?
16:21:17 <yanghy> REST API could pass None, just revert my client patch and omit --node-count
16:21:21 <tcammann> I swear someone tries to "fix" this every couple months
16:21:28 <yanghy> client will pass None to magnum API
16:21:42 <wanghua> I think wsme should deal with it
16:22:09 <adrian_otto> tcammann: why does it keep coming up?
16:22:16 <yanghy> Yes, so we need more check on API side
16:23:04 <tcammann> adrian_otto: It's inconsistent use of the API
16:23:38 <tcammann> But if we are adding this value to the db, this needs to be consistent
16:24:51 <yanghy> if we keep these 3 lines, they are also not enough
16:25:00 <yanghy> because users can pass 0 to API
16:25:07 <yanghy> through REST
16:25:12 <tcammann> I think we can continue this discussion on the patch itself
16:25:17 <yanghy> we also need to convert 0 to 1 ?
16:26:25 <Kennan> yanghy: if I remember correctly, your fix should also address https://bugs.launchpad.net/magnum/+bug/1522668
16:26:25 <openstack> Launchpad bug 1522668 in Magnum "magnumclient not have good validation for node_count" [Undecided,New] - Assigned to Yang Hongyang (hongyang-yang)
16:26:41 <adrian_otto> ok, let's table this. To best facilitate this, I think we should be looking at a concrete proposal.
16:26:42 <Kennan> in future fix(I mean)
16:26:43 <yanghy> Kennan, yes
16:27:18 <wanghua> We should add type validation in magnumclient
16:28:01 <Tango> I think part of the problem is that there are different opinions about how to treat default behavior when the count is None or 0
16:28:52 <adrian_otto> yanghy: let's take this one to the mailing list so we can get a solid sense of where the inconsistencies are, and what options we have for addressing them.
16:29:11 <yanghy> adrian_otto, sure, thanks
16:29:21 <adrian_otto> that will give the team a solid basis to help drive to a decision point
16:29:40 <adrian_otto> I'm happy to bring it up again next time
16:29:44 <adrian_otto> if needed
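The two validation layers discussed above (the magnumclient rejecting bad input before it reaches the API, and the API applying the default only when the field is omitted) can be sketched roughly as follows. The function names (`positive_non_zero_int`, `validate_node_count`) and the exact logic are illustrative assumptions, not Magnum's actual code:

```python
import argparse


def positive_non_zero_int(text):
    """Client-side guard: an argparse 'type' callable for --node-count
    that rejects non-integers and values below 1, so the client never
    sends an invalid count to the API."""
    try:
        value = int(text)
    except (TypeError, ValueError):
        raise argparse.ArgumentTypeError("%r is not an integer" % text)
    if value < 1:
        raise argparse.ArgumentTypeError("%r must be >= 1" % text)
    return value


def validate_node_count(node_count, default=1):
    """API-side guard: apply the default only when the field was omitted
    (None), and reject explicit invalid values such as 0 instead of
    silently rewriting them to 1."""
    if node_count is None:  # field omitted -> use the default
        return default
    if not isinstance(node_count, int) or node_count < 1:
        raise ValueError("node_count must be a positive integer")
    return node_count
```

With both layers in place, the three server-side defaulting lines debated in the patch become a deliberate contract (omitted means default) rather than a workaround for a buggy client.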
16:29:51 <adrian_otto> #link https://blueprints.launchpad.net/magnum/+spec/mesos-conductor (tbh)
16:29:56 <adrian_otto> tbh: you're up
16:30:29 <tbh> adrian_otto, as discussed on the ML, we decided to have container-create support a json file
16:30:58 <tbh> and let the COE conductor decide on the input
16:31:09 <adrian_otto> the blob needs to be the same text that the native client would expect
16:31:19 <adrian_otto> in most cases that's actually YAML, not JSON
16:31:50 <adrian_otto> we must avoid any temptation to invent a new DSL for a COE
16:32:07 <tbh> but I think python docker client does not accept JSON
16:32:32 <hongbin> Yes, re-inventing a new DSL for container-create is my concern
16:32:34 <adrian_otto> it expects YAML, right?
16:32:37 <vilobhmm11> tbh : agree
16:32:43 <tbh> adrian_otto, no, it depends on the COE's native spec
16:33:09 <adrian_otto> so we should simply be passing this text blob straight through to the COE, not interpreting or parsing it in any way.
16:33:46 <hongbin> For swarm, the docker CLI doesn't take a blob, but docker-compose does
16:33:51 <vilobhmm11> jfyi, even in case of k8s we use YAML
16:33:57 <Kennan> seems docker and mesos are not quite the same for container-create
16:34:03 <adrian_otto> the rationale for this feature is so users can access advanced features of the COE without needing to implement a bunch of COE specific resources in the Magnum API
16:35:11 <hongbin> Here is my proposal. Introduce a new action to pass a blob directly to the native API. Like magnum create -f FILE
16:35:25 <adrian_otto> My understanding is that most cloud operators expect to create a bay, and hand the user credentials to interact with the native API for that COE from that point forward
16:35:59 <tbh> but users will be confused by the difference between magnum create, magnum pod-create, and the other magnum *-create commands
16:36:00 <adrian_otto> the only ones who care about container-* methods are those who want quotas on container creation, and related usage events
16:36:03 <vilobhmm11> hongbin : then how will a request going through magnum be different from one through the native client?
16:36:38 <adrian_otto> vilobhmm11: it could be subject to a rate limit or a nested quota
16:36:54 <adrian_otto> but from a functional perspective they could be equivalent
16:36:58 <hongbin> vilobhmm11: Not much different. The point is to re-use the native DSL
16:36:59 <Kennan> right now, I think inventing a mesos conductor doesn't seem to bring much benefit (only that we can unify the COE object interfaces for users)
16:38:36 <tbh> Kennan, but we don't have any interface for the user to create apps for mesos bay
16:38:45 <adrian_otto> tbh: that's okay
16:39:00 <adrian_otto> until we have a cloud operator who wants to give access to those capabilities through magnum
16:39:19 <adrian_otto> rather than through Marathon directly
16:39:31 <adrian_otto> would anyone wanting to do this please speak up now?
16:39:47 <adrian_otto> I'd really like this pursuit to be based in actual use cases.
16:39:59 <eghobo> just an idea ;), DCOS has an interesting approach for native clients, e.g. dcos kubectl ...
16:41:09 <vilobhmm11> adrian_otto: If i understand correctly you don't want magnum api being a proxy for native client apis ?
16:41:15 <adrian_otto> eghobo: yes, but I am looking for the alternate point of view
16:41:25 <adrian_otto> vilobhmm11: I think we offer limited value there
16:41:38 <vilobhmm11> adrian_otto : Thats my point of view too
16:41:40 <adrian_otto> so why worry about this unless we are convinced someone has a good reason to use it
16:42:44 <Tango> +1, we should go by use cases
16:43:03 <hongbin> In summary, I am -1 on passing a blob through container-create, because swarm doesn't have a DSL for creating a single container
16:43:06 <adrian_otto> I'm all for consistency in the API, and I won't turn away good contributions, but I want us to focus on making Magnum the best combination of infrastructure and container tech, not to re-implement the wheel
16:43:26 <yanghy> adrian_otto, there's another point of view is that unified COEs just like what libvirt does
16:43:58 <yanghy> that could be another project though
16:44:07 <yanghy> not magnum's duty
16:44:20 <adrian_otto> yanghy: Yes, I'd like to find someone who wants that for a production use case. I want to understand that point of view.
16:44:36 <adrian_otto> and I'm happy to get that from another project
16:45:05 <adrian_otto> my understanding is that libcontainer =~ libvirt, where container :: vm
16:45:09 <adrian_otto> if that makes sense
16:45:37 <adrian_otto> which is really out of scope for Magnum
16:46:01 <tbh> adrian_otto, Okay. Can we put Workflow -1 on the patches I pushed for the BP?
16:46:20 <adrian_otto> the justification I expect to hear back is that someone wants to limit the rate at which containers are created, or the total number that can be created, or account for them with usage events.
16:46:40 <adrian_otto> if those are not the motivating reasons then we probably should reconsider the pursuit
16:47:28 <adrian_otto> tbh: well workflow-1 means that reviewers can be aware of this work but not consider it for merge yet
16:47:31 <adrian_otto> is that your intent?
16:47:40 <tbh> adrian_otto, yes
16:47:54 <adrian_otto> if you want to withdraw the work, then you can use "abandoned" for that
16:48:00 <adrian_otto> totally up to you
16:48:27 <tbh> adrian_otto, but I really do want to have an interface for the mesos COE through magnum
16:49:02 <adrian_otto> ok, thanks tbh. Let's revisit this again later. I will change my tune when I see real use cases.
16:49:27 <tbh> adrian_otto, till we get a suitable use case, we can work on other BPs I guess
16:49:38 <tbh> adrian_otto, sure thanks!
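The pass-through approach argued for above (forward the COE's native definition text verbatim rather than inventing a new DSL) can be sketched roughly like this. The endpoint map and function name are illustrative assumptions for discussion, not Magnum's real API:

```python
# COE type -> (native service that consumes the blob, expected blob format)
# YAML for Kubernetes manifests and docker-compose files, JSON for Marathon apps.
NATIVE_ENDPOINTS = {
    'kubernetes': ('kubectl / k8s API', 'yaml'),
    'swarm': ('docker-compose', 'yaml'),
    'mesos': ('marathon', 'json'),
}


def forward_blob(coe, blob):
    """Return the target native service and the blob, byte-for-byte
    unchanged. Magnum's value-add would sit here (rate limits, nested
    quotas, usage events), not in parsing or translating the blob."""
    if coe not in NATIVE_ENDPOINTS:
        raise ValueError("unsupported COE: %s" % coe)
    service, _fmt = NATIVE_ENDPOINTS[coe]
    # Deliberately no parsing: the COE's own tooling validates the blob,
    # so the text must be exactly what the native client would accept.
    return service, blob
```

The key property is that `blob` comes back untouched, which is what keeps this from becoming a new DSL.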
16:49:49 <adrian_otto> there were a few open discussion topics, so I will open it for that now, and call attention to each
16:49:58 <adrian_otto> #topic Open Discussion
16:50:01 <adrian_otto> Eli Qiao raised gate error http://logs.openstack.org/87/253987/1/check/gate-functional-dsvm-magnum-k8s/9eb5206/logs/screen-n-cpu.txt libvirtError: internal error: process exited while connecting to monitor: Cannot set up guest memory 'pc.ram': Cannot allocate memory
16:50:24 <adrian_otto> eliqiao: yt?
16:50:59 <adrian_otto> Related:
16:51:06 <adrian_otto> #link https://bugs.launchpad.net/magnum/+bug/1521237
16:51:07 <openstack> Launchpad bug 1521237 in Magnum "Gate occasionally failed with nova libvirt error" [Undecided,New]
16:51:23 <adrian_otto> #link https://review.openstack.org/#/c/254370/ Log the failure on elastic-recheck
16:51:55 <adrian_otto> Any thoughts on this? If not, we have one more to hit today.
16:52:10 <adrian_otto> ok, here is the other one:
16:52:12 <adrian_otto> wanghua raised "How to improve docker registry in magnum?"
16:52:22 <Kennan> is it nova bug? or libvirt ?
16:52:24 <vilobhmm11> shouldn't this be opened against nova ?
16:52:31 <vilobhmm11> Kennan : +1
16:52:51 <vilobhmm11> if its that frequent
16:53:04 <wanghua> It seems it is caused by lack of memory.
16:53:04 <vilobhmm11> let me file a bug for nova…
16:53:06 <adrian_otto> keep in mind that a bug can reference multiple projects if it's not clear where it's happening
16:53:11 <hongbin> Kennan: vilobhmm11 It doesn't seem so. Basically, the problem is that our gate is running out of memory
16:53:14 <adrian_otto> so just add nova to the existing bug
16:53:25 <adrian_otto> it can be removed later if this turns out to be a Magnum issue
16:53:48 <vilobhmm11> hongbin : ok..let me check the logs in detail…adrian_otto: sure
16:54:28 <adrian_otto> sounds good
16:54:30 <yanghy> Kennan, vilobhmm11 , hongbin , JFYI libvirt code in nova is under heavy refactor currently
16:54:40 <Kennan> hongbin: I think the nova guys can give some valuable hints about what that issue is
16:54:59 <Kennan> if they think it's not related, it may be a magnum issue
16:55:13 <houming> adrian_otto Do we have enough time to discuss this one? #link https://review.openstack.org/#/c/252222/  (Add fixed_subnet to Baymodel Object)   Bug here: #link https://bugs.launchpad.net/magnum/+bug/1519748
16:55:13 <openstack> Launchpad bug 1519748 in Magnum "bay-creation return 400 Bad Request with a valid fixed-network in baymodel" [Undecided,In progress] - Assigned to Hou Ming Wang (houming-wang)
16:56:24 <adrian_otto> houming: sure, for a couple of min
16:57:25 <houming> The 'fixed_network' actually means 'fixed_network_cidr', or more precisely 'fixed_subnet'. Which way is better to fix this? 1. Rename 'fixed_network' to 'fixed_subnet' in the Baymodel DB, Baymodel Object, and MagnumClient.  2. Leave 'fixed_network' alone, add a new field 'fixed_subnet' to Baymodel, and use 'fixed_subnet' in place of the current 'fixed_network'.
16:58:09 <Tango> I vote for #1, simpler to implement and to the point
16:58:21 <adrian_otto> this is a contract breaking change
16:58:35 <adrian_otto> so it should really be in a new API version
16:59:07 <adrian_otto> I don't like the idea of having multiple attributes for doing essentially the same thing
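For reference, one deprecation-alias pattern sometimes used for such renames is to accept both names for a transition window and drop the old one in a new API version. This is only a sketch of option 2's flavor (which adrian_otto voices reservations about above); the function name and dict-based request shape are hypothetical, not Magnum's actual code:

```python
def resolve_fixed_subnet(attrs):
    """Resolve the subnet value from a request dict that may use the
    old name ('fixed_network') or the new one ('fixed_subnet').

    Accepts either name during the deprecation window, rejects
    conflicting values, and returns None if neither was supplied."""
    old = attrs.get('fixed_network')
    new = attrs.get('fixed_subnet')
    if old is not None and new is not None and old != new:
        raise ValueError(
            "fixed_network and fixed_subnet conflict; pass only one")
    # Prefer the new name; fall back to the deprecated alias.
    return new if new is not None else old
```

Option 1 (a clean rename) avoids duplicate attributes entirely but is the contract-breaking change that would need the new API version.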
16:59:20 <adrian_otto> we are approaching the end of our scheduled time
16:59:55 <adrian_otto> our next team meeting will be on Tuesday 2015-12-15 at 1600 UTC. See you then. It may be the last meeting of the year, resuming again in January, depending on interest.
16:59:58 <Kennan> we can discuss in that patch
17:00:15 <juggler> thank you for coordinating adrian
17:00:19 <adrian_otto> Kennan: I'll comment there as well, thanks.
17:00:25 <adrian_otto> you're welcome juggler
17:00:33 <adrian_otto> thanks everyone for attending
17:00:36 <juggler> and thanks all for your participation
17:00:40 <adrian_otto> #endmeeting