16:02:50 #startmeeting containers
16:02:51 Meeting started Tue Dec 1 16:02:50 2015 UTC and is due to finish in 60 minutes. The chair is adrian_otto. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:02:52 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:02:55 The meeting name has been set to 'containers'
16:03:16 #link https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2015-12-01_1600_UTC Our Agenda
16:03:47 #topic Roll Call
16:03:51 Adrian Otto
16:03:56 o/
16:03:57 o/
16:03:58 o/
16:03:58 sorry for the delay.
16:03:58 o/
16:03:59 o/
16:03:59 o/
16:04:00 o/
16:04:01 o/
16:04:02 o/
16:04:06 o/
16:04:07 o/ Perry Rivera
16:04:08 o/
16:04:17 o/
16:04:18 o/
16:04:21 o/
16:05:05 hello wanghua, Drago, Kennan, dimtruck, dane, hongbin, houming, rpothier, daneyon, rods, juggler, Tango, eghobo, muralia, and suro-patz
16:05:26 Hi everyone
16:05:46 hello all
16:05:48 #topic Announcements
16:05:52 1) Blueprints have been targeted for Mitaka.
16:06:05 If you disagree with any of the selections, please see adrian_otto.
16:06:07 adrian_otto I have to leave in a few minutes to attend a customer meeting.
16:06:18 daneyon: ok
16:06:21 If you are unable to commit to implementing the blueprints, or finding contributors who can, let's un-target them from Mitaka, or get them re-assigned.
16:06:33 Several blueprints need further detail or team discussion to be considered. Priority values for these have been temporarily set to "Not" pending clarity.
16:06:54 we will have a chance to discuss a few of those today
16:07:04 for the ones we do not get to, let's plan to use our ML
16:07:12 any other announcements from team members?
16:07:32 #topic Review Action Items
16:07:35 1) adrian_otto to review https://etherpad.openstack.org/p/mitaka-magnum-planning and use it to target Mitaka blueprints
16:07:39 Status: COMPLETE
16:07:45 that was the only action item.
16:07:51 Container Networking Subteam Update (daneyon)
16:08:10 daneyon: maybe you can make a quick update before you take off?
16:08:15 sure
16:08:23 #link http://eavesdrop.openstack.org/meetings/container_networking/2015/container_networking.2015-11-19-18.01.html
16:08:29 ^ minutes of last week's meeting
16:08:44 I created a support matrix for the CNM
16:08:49 one sec and I'll get it
16:09:18 #link https://wiki.openstack.org/wiki/Magnum/NetworkDriverMatrix
16:09:22 ^ network drivers
16:09:32 #link https://wiki.openstack.org/wiki/Magnum/LabelMatrix
16:09:34 ^ labels
16:10:01 I separated the 2 b/c I think labels can be applicable beyond the CNM.
16:10:22 Tango is working on option 2 of this patch
16:10:24 #link https://review.openstack.org/#/c/241866/
16:10:34 to provide a non-overlay option for the flannel driver.
16:10:57 thanks daneyon
16:11:08 wanghua is working on containerizing k8s services, including etcd and flannel
16:11:09 #link https://blueprints.launchpad.net/magnum/+spec/run-kube-as-container
16:11:16 that's about it
16:11:20 any questions?
16:11:26 cool
16:11:45 egor doesn't support running flannel in a container
16:11:55 ok, we appreciate the update daneyon
16:12:00 :-)
16:12:23 wanghua: if we only use the chroot (mount) namespace, that would still be better than nothing.
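For context on the non-overlay option mentioned above: flannel reads its network configuration from a well-known key in etcd, and the backend type in that config determines whether pod traffic is encapsulated (udp/vxlan overlays) or routed natively (host-gw). Below is a minimal sketch of writing such a config with python-etcd; the CIDR, endpoint, and function name are illustrative, and this is not the code under review in 241866.

```python
# A sketch, not the patch under review: flannel reads its network config
# from this well-known etcd key; a "host-gw" backend routes pod traffic
# natively instead of encapsulating it in a udp/vxlan overlay.
import json

import etcd  # python-etcd client

FLANNEL_CONFIG_KEY = '/coreos.com/network/config'


def configure_flannel(etcd_host, backend_type='host-gw'):
    """Write a flannel network config selecting the given backend."""
    config = {
        'Network': '10.100.0.0/16',          # illustrative pod CIDR
        'Backend': {'Type': backend_type},   # 'udp'/'vxlan' = overlay
    }
    client = etcd.Client(host=etcd_host, port=2379)
    client.write(FLANNEL_CONFIG_KEY, json.dumps(config))


if __name__ == '__main__':
    configure_flannel('127.0.0.1')  # hypothetical etcd endpoint
```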
16:12:23 wanghua let's continue that discussion on the ML
16:12:39 I think there are pros and cons to both approaches
16:12:57 OK
16:12:59 I am confident that we can make it work, even if it requires some compromises
16:13:11 #topic Magnum UI Subteam Update (bradjones)
16:13:19 brad sent his regrets, and will attend next week
16:13:23 nothing to share this time
16:13:33 #topic Blueprint/Bug Review
16:14:16 next week I will begin the "Essential Blueprint Updates" agenda item, but I will skip it this week, since the owners have not been given sufficient time to prepare an update.
16:14:43 if you own a blueprint marked Essential, please arrange to have a brief update to share with the team each week here in our team meetings about that work
16:14:52 Blueprint Discussion
16:15:01 #link https://blueprints.launchpad.net/magnum/+spec/magnum-baremetal-full-support Support baremetal container clusters (adrian_otto, kennan)
16:15:15 ok, so this one I really struggled with, Kennan
16:15:28 I do see baremetal support for Magnum as a top priority, but...
16:15:43 adrian_otto: it was planned before the summit. Right now I don't intend to make it for the M release
16:15:54 as ironic-neutron integration has gaps
16:15:56 pursuant to our discussions with Ironic devs in Tokyo, I'm reluctant to put baremetal workarounds directly into Magnum when they belong in Ironic.
16:16:12 ok, so are you okay with the way it is currently marked?
16:16:19 should I mark it Obsolete?
16:16:34 yes, I think it can be in the next release.
16:16:39 our current plan of record is to await the expected solution of this in Ironic
16:17:01 ok, then I will leave it the way it is, and we can re-prioritize it next time
16:17:13 sure
16:17:27 that was the only blueprint in the request list for Mitaka that was not included. All the others were.
16:17:47 I expect that we will need to prune the Mitaka target list more as we go
16:18:13 just because I targeted a blueprint for the release does not mean you are committed to complete it… unless it's an Essential or High priority.
16:18:20 if that's the case, let's talk about them
16:18:31 here is one:
16:18:32 #link https://blueprints.launchpad.net/magnum/+spec/versioning-rpc-server Versioning rpc server and client (eliqiao)
16:18:44 eliqiao: you requested discussion on this
16:19:23 this is one that I set to "Not" priority pending team discussion
16:19:59 keep in mind that if I mark a blueprint "Not", it does not mean I am rejecting it. For that action I use the "Obsolete" design action.
16:20:32 "Not" priority means I have reviewed it, and need more input before giving it a different priority, or taking another administrative action. Basically I am treating this as "Pending"
16:21:01 eliqiao: yt?
16:21:23 we can come back to this one later.
16:21:28 #link https://blueprints.launchpad.net/magnum/+spec/magnum-tempest Magnum APIs Test coverage in Tempest Lib (dimtruck)
16:21:41 Implementation: #link https://review.openstack.org/247083 Tempest plugin work (dimtruck)
16:21:53 dimtruck: you requested discussion on this one
16:22:15 sure - it was more of an approach question - I think we covered it last week?
16:22:24 ok, moving to the next
16:22:28 but I'm planning on rebasing it and pushing new changes today
16:22:29 thanks
16:22:46 #link https://blueprints.launchpad.net/magnum/+spec/support-for-different-docker-storage-driver Docker storage drivers (wanghua)
16:23:21 wanghua: you requested discussion on this one?
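For reference on the tempest plugin approach behind review 247083 above: tempest discovers out-of-tree test suites through plugin classes registered as 'tempest.test_plugins' entry points. A minimal skeleton of that standard interface follows; the module paths and class name are hypothetical, not necessarily what the patch uses.

```python
# A minimal skeleton of the tempest external-plugin interface; paths and
# names are hypothetical, illustrating the approach rather than the
# actual contents of review 247083.
import os

from tempest.test_discover import plugins


class MagnumTempestPlugin(plugins.TempestPlugin):
    def load_tests(self):
        # Tell tempest where this plugin's tests live on disk.
        base_path = os.path.split(os.path.dirname(
            os.path.abspath(__file__)))[0]
        test_dir = "magnum_tempest_plugin/tests"  # hypothetical path
        full_test_dir = os.path.join(base_path, test_dir)
        return full_test_dir, base_path

    def register_opts(self, conf):
        pass  # register magnum-specific config options here

    def get_opt_lists(self):
        return []
```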
16:23:27 yes
16:23:59 I registered this bp an hour ago and I think it is necessary
16:24:16 today alternate storage drivers can be selected by building an alternate glance image for your bay node with the modified config
16:24:45 is there a reason this needs to be part of Magnum?
16:25:09 might it be acceptable to document the process of how to modify the image?
16:25:41 I think it's not only modifying the image
16:25:53 adrian_otto: it's a config parameter for the daemon
16:26:32 eghobo: yes, that's why I'm struggling to understand why Magnum has a role here
16:27:11 wanghua: honestly there are not many choices yet ):, e.g. DeviceMapper is the only option for RHEL-based
16:27:35 wanghua: could you elaborate, what is your end goal?
16:27:45 Maybe there is a case where building the custom image is not sufficient?
16:27:47 my current position is that our various COEs and constituent components (such as Docker) have a wide range of configuration options, and it would be impractical to implement setting all of them from Magnum.
16:27:48 seems aufs is used a lot on the docker side
16:28:15 so we should carefully consider which ones we add support for, and why.
16:28:25 it seems we can support the most popularly used driver, not add so many drivers now
16:28:41 I think magnum needs stability and scale, not so many drivers
16:28:44 Kennan: that sounds sensible to me.
16:29:27 wanghua: can you explain a bit more about why you feel this is an important addition?
16:29:50 Kennan: the most suitable driver is usually part of the OS package ;)
16:29:55 I want to be sure we fully consider that point of view before making any decision
16:30:15 storage and network are important to docker
16:30:51 we have different network drivers, why don't we have storage drivers?
16:30:52 yes, as we have one image now, like fedora, eghobo: or if we have ubuntu, we'd better have a standard implementation
16:31:27 and further needed drivers usually don't need to be implemented now. Like neutron and ironic, they have separate drivers to maintain for later needs
16:31:36 Kennan: sorry, we cannot; aufs doesn't work on RHEL ):
16:32:04 yes, I just used aufs as an example, not restricted to RHEL
16:32:31 my point is we have only one image now, why do we need so many drivers now?
16:32:55 Kennan: overlayfs may be the common ground, but it has a long way to go
16:33:13 I lean toward having one storage driver now, unless someone who is using magnum asks for another
16:33:27 +1 hongbin
16:33:34 +1
16:33:41 +1
16:33:55 ok, so I think this idea has some merit, and we should revisit it at a later time
16:33:59 wanghua: additional drivers increase the maintenance cost
16:34:03 we can add the daemon option, it doesn't sound like a big task
16:34:15 if users start asking for this, that will elevate our relative priority for this
16:34:41 ok, we can revisit it at a later time
16:34:58 wanghua: I suggest adding an outline to explain the planned implementation, and a level-of-effort estimate, and place that in the description.
16:35:21 ok
16:35:30 and we can come back to this when we have both 1) A user seeking this, and 2) A developer willing to add it
16:35:34 fair enough?
16:35:35 +1 hongbin
16:35:53 sounds ok adrian_otto
16:36:14 adrian_otto, do you know of any company using magnum now?
16:36:51 there are several who are in pre-release
16:37:09 but I have some information that's restricted by NDA
16:37:23 so I prefer that magnum users identify themselves rather than me
16:37:34 adrian_otto: and what is the primary COE for them?
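As background on the point above that the storage driver is a daemon config parameter: the docker daemon selects it with its --storage-driver flag (baked into the image's daemon config today), and the active value can be read back through the API. A small sketch using docker-py, with an illustrative socket path:

```python
# A sketch of reading the active storage driver from a bay node's docker
# daemon with docker-py. The driver itself is chosen by the daemon's
# --storage-driver flag; this only inspects the value the daemon is
# currently running with.
import docker


def get_storage_driver(base_url='unix://var/run/docker.sock'):
    client = docker.DockerClient(base_url=base_url)
    info = client.info()       # daemon-wide info dict
    return info.get('Driver')  # e.g. 'devicemapper', 'aufs', 'overlay'


if __name__ == '__main__':
    print(get_storage_driver())
```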
16:37:56 I can tell you that Rackspace Carina has chosen Docker Swarm as its first COE
16:38:13 Carina by Rackspace is the correct product name, sorry.
16:38:25 oh, cool.
16:38:37 Kubernetes seems to be the more popular one under review.
16:38:55 with Mesos as a close third to Docker Swarm
16:39:06 From google: eBay plans to make use of the Magnum plug-in for OpenStack :)
16:39:24 the ones who like Mesos tend to be the ones with large scale data analysis workloads
16:39:34 houming: sounds ok. any link?
16:40:14 houming: I don't think this information is correct
16:40:21 Subtopic: Review Discussion
16:40:27 #link https://review.openstack.org/250999 Add docker-bootstrap (wanghua)
16:40:51 hongbin flagged this one as workflow-1
16:41:07 hi adrian_otto, sorry to interrupt, are we done with Versioning rpc server?
16:41:32 Yes, I want to make sure this is the right direction before working on the implementation
16:41:44 oh, did I skip that?
16:41:54 ok, let's jump back to that next.
16:42:06 eliqiao: I will revisit that
16:42:23 adrian_otto: okay, thx.
16:42:38 hongbin: anything we should discuss now regarding review 250999?
16:42:53 I agree with eghobo: better not to containerize things and make it complicated
16:43:03 wanghua: did you request discussion on this one?
16:43:07 Yes, it is related to containerizing flannel, and daneyon is not here
16:43:21 wanghua: eghobo could we discuss it on the ML?
16:43:24 although I haven't dug in much, I think some things don't need to be containerized
16:43:24 ok, we can include it for next week
16:43:29 I will take an action for that
16:43:42 sure
16:44:07 #action adrian_otto to raise "#link https://review.openstack.org/250999 Add docker-bootstrap (wanghua)" for discussion in next week's agenda
16:44:23 or you can discuss in #openstack-containers or on the ML in the meantime to work it out
16:44:43 I will start an ML thread later
16:44:43 #link https://blueprints.launchpad.net/magnum/+spec/versioning-rpc-server Versioning rpc server and client (eliqiao)
16:45:05 thanks wanghua
16:45:12 I am not sure if it is too early to push versioning to the rpc server.
16:45:17 wanghua: sure, I already put all my thoughts in the email; you can agree with it or not ;)
16:46:04 eliqiao: I see you submitted a review on this:
16:46:14 #link https://review.openstack.org/247308 Add conductor manager to handle all RPC handler
16:46:26 I still believe it's better to put effort into Ubuntu and CoreOS ;)
16:46:55 ok, the blueprint is currently in "Not" status
16:47:12 I'm willing to leave it that way if we agree there are more important things to attend to first
16:47:15 adrian_otto: can I know something about why?
16:47:35 adrian_otto: okay, seems we should focus on enabling features for containers.
16:47:49 eliqiao: because the review did not have any +1 votes, and you requested discussion on it. I did not want to prioritize it without your input first.
16:48:20 adrian_otto: hmm. actually I'd like to make some rpc calls async, so the behavior may change for some of the rpc api calls.
16:48:46 I think async conductor work for scalability should happen too, but I'm not sure which of these should actually come first.
16:49:05 as having the versioning in there would allow us to migrate to the next implementation, right?
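For reference, the oslo.messaging pattern the versioning-rpc-server blueprint points at: the client pins the interface version it speaks, so clients and servers at different versions can be told apart during a rolling upgrade. A sketch, with a hypothetical method name rather than Magnum's actual conductor API, assuming the usual oslo.config transport setup:

```python
# A sketch of oslo.messaging RPC versioning (hypothetical method name,
# not Magnum's actual conductor API): an old conductor (1.0) and a new
# one (1.1, say with async behavior) can coexist during an upgrade.
import oslo_messaging as messaging
from oslo_config import cfg

TOPIC = 'magnum-conductor'


class ConductorAPI(object):
    """Client-side RPC proxy pinned to an interface version."""

    def __init__(self, version_cap='1.1'):
        transport = messaging.get_transport(cfg.CONF)
        target = messaging.Target(topic=TOPIC, version='1.0')
        # version_cap would come from config during a rolling upgrade,
        # holding clients at the version all servers understand.
        self._client = messaging.RPCClient(transport, target,
                                           version_cap=version_cap)

    def bay_create(self, context, bay):
        # Pin to 1.1, where the (hypothetical) behavior change landed;
        # cast() returns immediately, call() would block for a reply.
        cctxt = self._client.prepare(version='1.1')
        return cctxt.cast(context, 'bay_create', bay=bay)
```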
16:49:54 I think it is better to review the async proposal first
16:50:04 so I could set https://blueprints.launchpad.net/magnum/+spec/async-rpc-api to depend on https://blueprints.launchpad.net/magnum/+spec/versioning-rpc-server
16:50:18 adrian_otto: if we don't have versioning, and we change the rpc behavior, then how do we identify which method is called?
16:50:23 and match the priority
16:50:38 eliqiao: yes, that's a legitimate concern
16:50:46 Do we plan to keep both async and non-async RPC?
16:51:05 Tango: no, we want to replace sync with async
16:51:24 if async is better, why do we need both? I have the same question as Tango
16:51:29 because you can simulate sync behavior with an async client that polls and blocks
16:51:36 but you can't do the reverse
16:52:03 So for now, is it OK to just move to async, despite the change in behavior?
16:52:23 a little disruptive, but we don't want non-async anyway
16:52:33 my gut says yes, but I'm open to alternate viewpoints
16:53:02 the only thing that uses our RPC API is Magnum itself
16:53:13 so I don't think we risk breaking third party software
16:53:20 is that true?
16:53:28 if it doesn't break functionality too often. Is that the concern, eliqiao? You want two versions to make sure we can easily migrate to async?
16:53:31 adrian_otto: yes, I think so
16:53:39 ok, I'm going to leave this alone
16:53:55 we have a couple of quick things to touch on before open discussion
16:53:58 what if you have 2 conductors at the same time? I am considering the upgrade process.
16:54:21 eliqiao: good point
16:54:31 depends on LOE
16:54:35 #link https://review.openstack.org/225400 Add registry_address to bay db and api (wanghua)
16:54:41 let's wrap up this one
16:55:24 wanghua: any remarks on this one?
16:55:27 Do we need to tell the user the URL of the docker registry,
16:55:44 or hardcode it to localhost:5000?
16:55:53 wanghua: is that for the private registry case?
16:56:05 Kennan: yes
16:56:14 wanghua: if you have a registry inside the k8s cluster, then where does the image come from?
16:56:47 eliqiao: now you should make docker images in the bay and upload them to the docker registry.
16:57:24 sorry to squeeze this one in on time
16:57:26 wanghua: hmm... if users want to add new images to the registry, how will they do that if you don't expose the URL of the registry?
16:57:33 #topic Open Discussion
16:57:45 seems we can discuss this on the ML or in IRC in #openstack-containers
16:57:53 you're welcome to continue the 225400 discussion if desired
16:58:09 we just have a couple of minutes remaining for miscellaneous items
16:58:23 wanghua: any concern about not exposing the registry URL?
16:58:41 I think it is necessary
16:59:33 our next meeting will be in #openstack-meeting-alt on Tuesday 2015-12-08 at 1600 UTC. See you all then!
16:59:41 thanks for attending today!
16:59:49 bye
16:59:49 #endmeeting
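A footnote on adrian_otto's point at 16:51:29 that sync can be simulated over async but not the reverse: an async API returns a handle immediately, and a blocking wrapper simply polls it until the operation completes. A minimal sketch, with illustrative names and status values that are not Magnum code:

```python
# Illustrative names and statuses, not Magnum code: the async API
# returns immediately, and a blocking wrapper polls until the operation
# leaves its transient state. The reverse (making a blocking call
# non-blocking) cannot be layered on top without extra threads.
import time


def wait_for(get_status, poll_interval=1.0, timeout=300):
    """Block until the async operation leaves an IN_PROGRESS state."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        status = get_status()
        if not status.endswith('_IN_PROGRESS'):
            return status
        time.sleep(poll_interval)
    raise TimeoutError('operation still in progress after %ss' % timeout)

# usage sketch: an async create casts over RPC and returns at once; a
# caller needing synchronous semantics wraps it:
#   bay = conductor_api.bay_create(context, bay)
#   wait_for(lambda: magnum_client.bays.get(bay.uuid).status)
```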