16:03:05 #startmeeting containers
16:03:06 Meeting started Tue Dec 15 16:03:05 2015 UTC and is due to finish in 60 minutes. The chair is adrian_otto. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:03:07 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:03:10 The meeting name has been set to 'containers'
16:03:10 #link https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2015-12-15_1600_UTC Our Agenda
16:03:18 #topic Roll Call
16:03:21 Adrian Otto
16:03:24 o/
16:03:30 o/
16:03:31 o/
16:03:31 o/
16:03:36 o/
16:03:37 o/
16:03:44 o/
16:03:52 o/
16:03:54 Perry Rivera
16:03:56 o/
16:04:01 o/
16:04:14 good evening
16:04:21 hello madhuri, hongbin, houming, Kennan, dane, rpothier, juggler, bradjones, vilobhmm11, eghobo, tcammann
16:04:22 o/
16:04:33 I am speaking SMTP protocol apparently
16:04:38 hello!!
16:04:40 good localtime adrian!
16:04:53 lol
16:05:08 ok, let's begin
16:05:14 #topic Announcements
16:05:21 1) Changing our release type
16:06:07 we currently use the release:independent release model
16:06:24 I plan to transition us to the release:cycle-with-intermediary model instead
16:06:42 the key difference is that rather than just cutting releases whenever...
16:07:06 we still do that, but the OpenStack release team also picks one of those releases to include with the named releases
16:07:28 so the one we release closest to the milestone events gets included with OpenStack
16:07:59 I plan to submit a change proposal to the projects.yaml file in the openstack/governance repository to indicate this change
16:08:13 any thoughts on this, or input I should consider prior to moving forward with this?
16:08:48 lgtm
16:09:03 ok, if you think of anything of concern, let me know before Jan 21
16:09:22 adrian_otto: what's the impact to developers?
16:09:26 that's the drop dead date for reverting this for Mitaka if I move forward and we change our minds
16:09:48 Kennan: I am not aware of any impact.
This is an administrative change.
16:10:01 ok
16:10:36 2) December meetings
16:10:51 this is the last team meeting I had scheduled for December
16:11:10 I expect attendance to be rather sparse for the next two weeks
16:11:11 yay to the holiday break
16:11:11 adrian_otto: what advantage does this bring over the current model we follow? Or is it a mandatory change for every project?
16:11:46 vilobhmm11: this adjustment is optional, but it's a way to include Magnum in the OpenStack milestone releases more officially
16:11:58 and to maintain the stable branches in lock step with other projects
16:11:58 adrian_otto: ok
16:12:42 so if we did hold more team meetings in December they would fall on Dec 22 and Dec 29
16:12:56 I suggest we resume on Jan 5
16:13:04 what do you all think?
16:13:07 +1 on 1/5
16:13:16 +1
16:13:20 wish you guys a good holiday
16:13:23 +1
16:13:23 +1
16:13:23 +1
16:13:27 +1
16:13:37 +1
16:13:41 #agreed our next team meeting will be 2016-01-05 at 1600 UTC
16:13:56 any more announcements from team members?
16:14:56 #topic Review Action Items
16:15:00 (none)
16:15:17 #topic Container Networking Subteam Update (daneyon)
16:15:27 here is the log from last week's meeting
16:15:30 #link http://eavesdrop.openstack.org/meetings/container_networking/2015/container_networking.2015-12-10-18.01.html
16:15:55 quite a bit of discussion related to the image building spec
16:15:57 #link https://blueprints.launchpad.net/magnum/+spec/fedora-atomic-image-build
16:16:15 if yolanda is available, i would like to talk with her after the meeting
16:16:56 we also had a good discussion related to the containerize kube services spec
16:17:27 As an action, I talked with the kube community to get a better understanding about reasons to containerize flannel and etcd
16:17:54 My position is that we do not containerize etcd/flannel b/c of the 2 docker daemon requirement
16:18:20 I believe the complexity/operational overhead outweighs the benefits of containerizing these 2 services
16:18:23 you don't actually need 2 daemons
16:18:31 ?
16:18:37 you can expose the docker sock file
16:19:08 has anyone tested that yet?
16:19:10 so you allow the utility containers access to the docker daemon on the host that way
16:19:18 yes, we do that. It works.
16:19:28 Interesting
16:19:55 adrian_otto: no, you need 2 daemons, somebody needs to run the flannel container
16:19:57 you just need to set your expectations that when you do that, your container is not a security isolation facility at all
16:19:59 adrian_otto: do you mean share the host mnt namespace? or something else?
16:20:24 but you are using the container image as a mechanism for more convenient maintenance and customization of that utility environment
16:20:46 use -v to share the socket file.
16:21:01 it's actually deceptively simple.
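The "-v to share the socket file" approach discussed above can be sketched as follows. This is only an illustration of the technique, not Magnum's actual configuration: the image name (fedora:21), container name, and binary path are placeholder assumptions.

```shell
# Sketch only: run a "utility" container (etcd here) against the HOST's
# docker daemon by bind-mounting the daemon's socket into the container.
# No second docker daemon is needed; the container's docker client (or
# any process holding the socket) talks directly to the host daemon.
SOCK=/var/run/docker.sock

# fedora:21, the container name, and /usr/bin/etcd are hypothetical.
RUN_CMD="docker run -d --name etcd-util -v ${SOCK}:${SOCK} fedora:21 /usr/bin/etcd"

# A container given the host socket effectively has root on the host,
# so (as noted in the meeting) this is convenience packaging for the
# utility environment, not a security isolation facility.
echo "${RUN_CMD}"
```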
16:21:12 seems kolla has some experience like libvirt related, sharing the host (if I remember clearly)
16:21:25 maybe we can find some helpful info from that
16:21:43 adrian_otto I am open to containerizing etcd/flannel if it does not overcomplicate things
16:21:53 Seems like your idea is worth exploring
16:22:00 sdake offered to assist with expertise from his team to help with this, if desired
16:22:30 I don't want a setup that's overly complex either
16:23:00 adrian_otto could you add feedback to my message on the ML or on the review?
16:23:06 but it's worth some exploration to see if we can strike a balance there, as most who are adopting Magnum are struggling with bay node image customization
16:24:20 adrian_otto: could you elaborate on 'the struggle with bay node image customization'?
16:24:32 they don't know how to build the images
16:24:35 Until we can get these services containerized and the DIB images implemented, we need to update the Atomic image
16:24:45 A. To address a vxlan bug in the current code
16:24:45 it's not clear who they belong to
16:25:00 B. If we go to F21, the image is ~25% smaller.
16:25:07 those cloud operators are not familiar with the tools we use
16:25:43 so if we could use docker images to allow changing those components, it may reduce the friction there.
16:26:44 adrian_otto: docker images seem to still need some customization (heat needs to do that), right?
16:27:01 there have also been repeated requests for bay nodes based on an ubuntu or debian derivative
16:27:19 Kennan: yes
16:27:23 does anyone have a better understanding why a cloud operator wants to create an image instead of using our atomic-5 image?
16:28:07 daneyon: one reason is they want to use Magnum on bare metal nodes
16:28:13 seems Tango is working on building a new image, he is working on docs if the process is OK
16:28:33 which requires fooling with the images depending on how they implement bare metal servers (with ironic or otherwise)
16:29:04 Yes, also they might want to ship a bay with customization
16:29:19 adrian_otto I thought we were waiting for the Ironic team to add neutron ports so we can start supporting bare metal nodes.
16:29:29 others have an apparently religious opposition to any component that's a RedHat distribution of Linux (rational or not, these preferences are strong)
16:29:58 I am concerned about adding Ubuntu, Debian to the mix
16:30:12 daneyon: we are, but in the mean time, downstream consumers want to make that work
16:30:31 adrian_otto: building images may be OK if our guide or tools are applicable and easy, I think; in some clouds, administrators build images themselves
16:31:02 sometimes cloud operators want to bake in things like pre-configured telemetry and fault monitoring
16:31:21 ok, that makes sense.
16:32:24 alright let's wrap the network subteam update
16:32:45 any more to add, daneyon?
16:33:13 #topic Magnum UI Subteam Update (bradjones)
16:33:23 hey
16:33:40 quite a few reviews out that could do with a look over
16:33:49 nope
16:33:50 I still have a couple of older ones to look at this evening
16:33:58 there is one discussion
16:34:16 #link https://review.openstack.org/#/c/256358/
16:34:42 about moving the dashboard to a panel group under the project dashboard in horizon
16:35:04 it makes no functional difference, it will just be a different place for users to go to
16:35:24 so if anyone has any strong objection to that then leave a comment on that review
16:35:34 thanks Brad
16:35:47 any more remarks on Magnum UI?
16:35:55 not from me
16:36:09 Essential Blueprint Updates
16:36:14 ok, we have 4 to cover
16:36:29 I am drawing these from here:
16:36:38 #link tps://blueprints.launchpad.net/magnum/mitaka Mitaka Magnum Blueprints
16:36:45 corrections
16:36:58 #undo
16:36:59 Removing item from minutes:
16:37:09 #link https://blueprints.launchpad.net/magnum/mitaka Mitaka Magnum Blueprints
16:37:16 sigh
16:37:26 #link https://blueprints.launchpad.net/magnum/mitaka Mitaka Magnum Blueprints
16:37:30 ok, here we go.
16:37:42 thanks adrian!
16:37:45 #Link https://blueprints.launchpad.net/magnum/+spec/magnum-tempest (dimtruck)
16:37:56 #link https://blueprints.launchpad.net/magnum/+spec/magnum-tempest (dimtruck)
16:38:20 dimtruck: you here?
16:38:23 hi :). i am
16:38:30 any update on this?
16:38:36 so the update is that the plugin patch is passing
16:39:03 great!
16:39:08 and thanks to hongbin i have a much better implementation. I'm changing the description of the patch and it'll be ready to review/merge
16:39:13 (taking it out of WIP)
16:39:26 thanks for your continued work on this.
16:39:34 there are also 25 tests in patch review for magnum bay CRUDs
16:39:50 and i also have CA CRUDs ready to go once those are merged :)
16:40:08 once jenkins is happy :)
16:40:11 Thanks dimtruck. Next we have a celebration of an implemented feature:
16:40:20 #link https://blueprints.launchpad.net/magnum/+spec/swarm-functional-testing (eliqiao)
16:40:32 complete!
16:40:36 * adrian_otto applause
16:40:58 Next we have a started one:
16:41:06 #link https://blueprints.launchpad.net/magnum/+spec/resource-quota (vilobhmm11)
16:41:12 vilobhmm11: any update on this?
16:41:29 adrian_otto: proposed design on ML http://lists.openstack.org/pipermail/openstack-dev/2015-December/082266.html
16:41:52 would request everyone to have a look and let me know if that sounds fair
16:42:04 the initial plan is to impose quota on bay creation
16:42:18 by restricting the users of a project to creating a certain number of bays
16:42:25 and then proceed with other resources
16:42:43 thanks vilobhmm11
16:43:06 I am marking Design as "Discussion" status
16:43:07 adrian_otto: that's it from my side… will continue the discussion on ML
16:43:19 adrian_otto: sure thanks!
16:43:28 done
16:43:42 and last we have:
16:43:52 #link https://blueprints.launchpad.net/magnum/+spec/split-gate-functional-dsvm-magnum (eliqiao)
16:43:59 eliqiao: any update?
16:44:12 I know we have been struggling with OOM errors
16:44:39 I can give an update for the memory issue
16:44:50 hongbin, one sec
16:45:06 #topic Open Discussion
16:45:16 Hongbin raised gate error http://logs.openstack.org/87/253987/1/check/gate-functional-dsvm-magnum-k8s/9eb5206/logs/screen-n-cpu.txt libvirtError: internal error: process exited while connecting to monitor: Cannot set up guest memory 'pc.ram': Cannot allocate memory
16:45:23 ok, hongbin, proceed!
16:45:35 I have talked to the infra team
16:45:55 They will investigate why some nodes have less memory than others
16:46:06 I added an item to the agenda, probably need a refresh :)
16:46:32 tcammann: you are welcome to raise it after the next
16:46:37 At the same time, we could investigate how to reduce memory consumption for our gate job
16:46:40 hongbin: thanks for driving that one
16:46:44 #link https://bugs.launchpad.net/magnum/+bug/1521237 Gate occasionally failed with nova libvirt error
16:46:44 Launchpad bug 1521237 in Magnum "Gate occasionally failed with nova libvirt error" [Critical,New]
16:47:22 That is it from my side
16:47:28 hongbin reported this one, but the work is unassigned
16:47:42 I can assign it to myself
16:47:46 hongbin: did you want to cover this one?
16:48:02 this is still the same topic
16:48:08 ok, nm. Thanks!
16:48:31 wanghua: you here?
16:48:46 I have a note from you on the agenda:
16:48:48 wanghua raised "How to improve docker registry in magnum?"
16:49:11 if not, tcammann it's all yours then
16:49:19 I can raise this issue on his behalf
16:49:28 ok hongbin
16:49:43 Basically, right now the docker registry listens on localhost
16:49:45 Midcycle?!
16:50:04 It is easy to pull docker images from the private registry
16:50:10 yes, Jan-Feb is the timeframe for that
16:50:11 But it is hard to push
16:50:45 Because users need to ssh into the node and do the push, which is inconvenient
16:50:46 I'm seeking any sponsors who would like to host our midcycle
16:51:02 hongbin: acknowledged
16:51:30 I could probably get HP to host it in Sunnyvale, or Bristol (UK)
16:51:36 hongbin: I view this as a bug in Swarm
16:51:38 s/HP/HPE/
16:51:53 but the workaround is you do a push to the swarm endpoint, followed by a pull
16:52:00 and then all nodes get it from the registry
16:52:22 but that's icky. it should just work when you do a push
16:52:42 Yes it is
16:53:11 tcammann: thanks for that.
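The push-then-pull workaround described above might look roughly like this in practice. This is a hedged sketch, not Magnum's documented workflow: the registry bind address (localhost:5000), image name, and tag are all assumptions for illustration.

```shell
# Sketch: each bay node runs a swift-backed registry that only listens
# on localhost (port 5000 assumed), so an image is re-tagged for that
# registry, pushed once, then pulled so the other nodes fetch it from
# the shared swift backend.
REGISTRY=localhost:5000   # assumed bind address of the per-node registry
IMAGE=myapp               # hypothetical image name
TAG=1.0
LOCAL_REF="${REGISTRY}/${IMAGE}:${TAG}"

# Commands one would issue with the docker client pointed at the bay
# (e.g. the swarm endpoint), shown here rather than executed:
echo "docker tag ${IMAGE}:${TAG} ${LOCAL_REF}"
echo "docker push ${LOCAL_REF}"
echo "docker pull ${LOCAL_REF}"
```

The pull at the end is what makes the image visible everywhere: since every node's registry shares the same swift backend, a pull on any node resolves the image pushed from any other node.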
16:53:17 team, how many contributors do we have in Europe that would attend in Bristol?
16:53:32 or who would be able to approve travel to attend?
16:54:03 hongbin: we have the same issue in k8s and swarm, right?
16:54:36 tcammann: if you would check on the Sunnyvale option, we can expect strong attendance there.
16:54:40 adrian_otto: Yes, as long as the registry listens on localhost, we have the issue
16:54:41 hongbin: for the private registry, I don't quite get it; you use docker pull --private-url, and docker push does not work for that?
16:55:43 I have a comment on the heat template refactor
16:56:11 Kennan: because the --private-url is localhost
16:56:40 rpothier: we are in open discussion, so you may blurt out anything you'd like
16:56:42 why do we bind to localhost, is that the bp design?
16:56:57 seems we need to expose a url that can be accessible
16:57:08 there is a spec in heat to use conditionals, as an alternative to Jinja
16:57:11 Kennan: we took that as a best practice
16:57:20 Kennan: It listens on localhost to avoid TLS complexity
16:57:27 because the registry is backed by swift
16:57:33 so we might as well run it on all bay nodes
16:57:52 and then you don't need to worry about public exposure of the private registry host
16:57:53 I remember setting up a private registry before, it seemed OK (for TLS, I did not dig into it)
16:58:44 it is time to wrap up our meeting now, with just a minute remaining
16:58:51 any final remarks?
16:58:58 adrian_otto: I recall you wanted the mid-cycle right after the Ironic mid-cycle?
16:59:23 Hi all.. I am new to this and I would like to contribute to Magnum.. Any suggestions..
16:59:26 hongbin: I will cover midcycle planning on our ML
16:59:36 #action adrain_otto to plan midcycle by ML
16:59:41 #undo
16:59:41 Removing item from minutes:
16:59:45 cooldharma06 join openstack-containers IRC
17:00:01 #action adrian_otto to plan midcycle by ML
17:00:06 our next team meeting will be 2016-01-05 at 1600 UTC. See you all then. Happy holidays!
17:00:10 #endmeeting