16:00:57 #startmeeting containers
16:00:58 Meeting started Tue Jul 7 16:00:57 2015 UTC and is due to finish in 60 minutes. The chair is adrian_otto. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:59 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:01:01 The meeting name has been set to 'containers'
16:01:05 #link https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2015-07-07_1600_UTC Our Agenda
16:01:10 #topic Roll Call
16:01:14 Adrian Otto
16:01:29 Perry Rivera
16:01:30 Andrew Melton
16:01:31 Rob Pothier
16:01:33 Ton Ngo
16:01:33 Dane LeBlanc
16:01:34 yar \o
16:01:39 o/
16:01:44 o/
16:01:47 o/
16:01:52 Digambar Patil
16:01:57 Thomas Maddox
16:01:58 o/
16:02:03 o/
16:02:41 hello juggler, apmelton, rpothier, Tango, dane_leblanc, sdake_, jjlehr, bradjones, daneyon, diga, thomasem, hongbin, and mfalatic
16:03:25 I was hoping to see madhuri today as well
16:03:50 #topic Announcements
16:04:08 jjfreric joining
16:04:10 1) We will be releasing liberty-1 today
16:04:16 hello jjfreric
16:04:21 hello
16:04:28 hello
16:04:40 2) Our magnum-ui-core team has been established:
16:04:49 #link https://review.openstack.org/#/admin/groups/970,members Initial Members
16:04:56 o/
16:04:57 The group has the initial list of reviewers in it.
16:05:07 hello tcammann and eghobo
16:06:07 The magnum-core group is linked to the magnum-ui-core group, but there is no expectation that members of magnum-core will review all the code in that repo.
16:06:26 we are trusting the magnum-ui-core group to carry most of that responsibility.
16:06:32 Questions on this?
16:07:00 yay ui :)
16:07:07 3) I am back from vacation, returning to full duty again this week
16:07:38 doing some catch-up.
16:07:45 wb
16:07:47 any other announcements from team members?
16:07:53 tcammann: tx
16:07:57 midcycle
16:08:18 yes, we have a poll for midcycle participation
16:08:18 I don't know if this is an announcement or something to discuss during the meeting
16:08:19 anyone have that link handy?
16:08:21 * juggler is playing catchup as well
16:08:29 00~http://doodle.com/pinkuc5hw688zhxw01~
16:08:29 adrian_otto sec i'll find link
16:08:53 uh mangled clipboard sorry
16:09:05 #link http://doodle.com/pinkuc5hw688zhxw Midcycle participation poll
16:09:10 ya tcommann on ball :)
16:09:31 tango is hosting at ibm facilities
16:09:40 thanks Tango and IBM!
16:09:50 Glad to help
16:09:59 sorry I'm late
16:10:10 hello apuimedo
16:10:33 hi, all, Xu from Hyper
16:10:34 ok, so please respond to the poll so we can select a date and plan for that
16:10:43 hi xu_
16:11:07 daneyon: I meant to ask you for this in advance so you could prepare
16:11:14 sorry for springing it on you
16:11:19 #topic Container Networking Subteam Update
16:11:27 adrian_otto no worries
16:11:29 we held an initial meeting last Thu
16:11:41 nothing much new to report. I was on vacation last week and training this week.
16:11:50 #link http://eavesdrop.openstack.org/meetings/container_networking/2015/ Previous Meetings
16:12:09 Please review the previous meeting and ping me on IRC if you have any questions.
16:12:16 #link http://eavesdrop.openstack.org/meetings/container_networking/2015/
16:12:43 ok, so each week, I'd like to get a 1-2 minute update from the subteam here so the rest of the group can have an idea about what's happening there at a high level
16:12:44 It may be best not to rehash everything from last week's update
16:13:02 right, we have the transcript for those who want to dive in
16:13:15 #topic Review Action Items
16:13:19 adrian_otto that is what i'm planning for. I would normally have news, but with vacation and training not much.
16:13:29 1) madhuri to kick off thread about cert provided by stackforge project anchor
16:13:36 Status?
16:13:44 anyone participating in the subteam, pls continue to add content to the EP
16:13:48 adrian_otto should we respond to the survey if we can attend remotely?
16:13:50 #link https://etherpad.openstack.org/p/magnum-native-docker-network
16:14:12 juggler: yes, respond anyway, and add a comment indicating you plan to participate as a remote attendee
16:14:31 tx daneyon
16:14:43 yw adrian_otto
16:14:44 adrian_otto ok
16:14:48 ok, I will carry this action item forward
16:15:03 #action madhuri to kick off thread about cert provided by stackforge project anchor
16:15:14 2) madhuri to create blueprint on self-signed ca certs via magnum bay-creation
16:15:16 Status?
16:15:38 * adrian_otto scans for a blueprint
16:15:56 This one?
16:15:59 #link https://blueprints.launchpad.net/magnum/+spec/magnum-as-a-ca
16:16:33 Status: complete
16:16:44 should we plan to discuss that BP in the next section of the meeting?
16:17:17 and the last action item
16:17:18 3) all core reviewers to review https://review.openstack.org/#/c/194905/
16:17:44 not many reviewers have commented on that one yet
16:18:30 I will carry this item forward
16:18:55 #action All core reviewers to review https://review.openstack.org/#/c/194905/
16:19:04 ya if we are to do specs everyone needs to review them
16:19:18 I have seen instances where people say "hey I never watched that spec and now it's implemented I don't like it"
16:19:21 that happens all the time
16:19:21 #action adrian_otto to begin an ML thread for https://review.openstack.org/194905
16:19:28 the whole point of a spec is to avoid that ;-)
16:19:38 this is why I don't like specs ;)
16:19:39 sdake_: agreed, thanks
16:20:08 let's not underestimate the value of a well considered consensus
16:20:24 specs are how we get that; it should be worth some admin overhead to get there
16:20:25 agree it makes sense in certain cases, I use them on kolla occasionally
16:20:28 but its not the norm :)
16:20:57 for clarity, Magnum does not require specs yet, but they are encouraged for major features.
16:21:08 we are exploring the use of them now
16:21:25 ok, that brings us to our next topic
16:21:35 #topic Blueprint/Bug Review
16:21:42 New Blueprints for Discussion
16:21:50 #link https://blueprints.launchpad.net/magnum/+spec/hyperstack Power Magnum to run on metal with Hyper
16:22:18 Let me know if you have any questions or thoughts, thx
16:22:26 I would like to learn more about what Hyper is
16:22:44 And how well it fits into Magnum
16:22:46 so this is essentially another os_distro type correct?
16:22:56 Hyper is a hypervisor-agnostic Docker runtime
16:22:58 hongbin I think its a good fit for what we are after
16:22:59 sdake_: yes, that's how it's proposed
16:23:30 xu_ will it run k8s?
16:23:43 Hyper allows you to run Docker images with any hypervisor, but without the need of a guest OS like CoreOS or CentOS
16:23:48 and swarm?
16:23:58 I've suggested that it be implemented for all supported bay types
16:24:05 so the answer should be yes
16:24:11 xu_ our abstraction point is a layer higher than docker
16:24:12 yes, hyper is a very good fit
16:24:25 Hyper is not a cluster system, it is a single-host engine, like the Docker daemon
16:24:34 and looking at the code, it would be easy to extend the Go code to make it plug into the neutron port types
16:24:46 xu_ just to clarify, CoreOS is not a guest OS but a host OS
16:25:00 Hyper uses the native k8s podfile as its atomic scheduling unit
16:25:01 its tap management is clean and easy to grasp
16:25:25 I like the model of hyperv
16:25:46 but it would need to somehow launch either warm or k8s to be useful in magnum imo
16:25:54 but if it did, that would be fantastic ;)
16:26:01 was hyperv == Microsoft HyperV, or a typo, sdake_?
16:26:02 warm/swarm
16:26:04 this bp is to make magnum run on bare-metal, in parallel with nova, not a component on top of nova.
16:26:08 xu_ re: atomic scheduling so Hyper is more than just a single host daemon?
16:26:23 sorry just got up - feeling sick - whatever tech we are talking about :)
16:26:29 I think you meant Hyper, but just wanted to be sure
16:26:34 ya hyper
16:26:37 ok, tx
16:26:39 daneyon: yes, it's single host
16:26:46 xu_ do you understand what i'm saying?
16:26:51 xu_ with the abstraction point a layer above
16:27:00 magnum abstracts swarm and kubernetes
16:27:03 daneyon re: no, hyper advocates pods; everything is in units of pods
16:27:05 magnum doesn't really abstract docker
16:27:13 you'd need magnum to do the orchestrating between hosts
16:27:14 I think stating that this BP allows magnum to run on BM could be confusing or misleading
16:27:25 so this approach makes sense where the compute form factor (nova instance type) is a VM and the use case is running containers with the various Magnum Bay Types
16:27:33 I think hyper could technically fit in, it would just be another coe
16:27:36 Ironic should be the entrypoint for Magnum on BM
16:28:12 apmelton: no, it's not a coe
16:28:20 why not write some code and see how it fits into the system xu_ ?
16:28:22 it needs to be combined with a coe
16:28:40 sdake_: let's take one step back
16:28:41 adrian_otto right this is what I keep getting at, but I don't think xu_ gets it :)
16:28:44 adrian_otto: exactly
16:28:44 xu_ OK so pods are units of mgmt, but hyper does not include multi-host scheduling, correct?
16:29:00 what I'd like to do is check with the team to see if there are objections to approving the direction of this BP
16:29:14 sdake yes, i think the idea of abstraction makes sense
16:29:18 looking at this image: https://hyper.sh/img/hyper.png
16:29:20 and allowing the Hyper team to draft a proposal in the form of a spec or reviews
16:29:22 hyper looks a lot like a coe
16:29:23 xu_: hyper is a container engine like CoreOS Rocket, isn't it?
16:29:38 it looks like a CoE + an OS distro
16:29:38 hyper is more for cloud providers who want to build a secure, public, native CaaS, instead of CaaS on top of IaaS.
16:29:40 no objections from me - think tech looks solid - not sure how its actually going to fit into the system ;)
16:29:48 xu_: correct me if I'm wrong, since I looked at it like 3 weeks ago
16:29:55 eghobo: yes
16:29:58 and part of that should really be an FAQ that addresses the questions that folks like apmelton and eghobo are raising today.
16:30:05 but wouldn't it be possible to just use swarm to orchestrate hyper hosts?
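The "native k8s podfile" mentioned at 16:25:00 is the standard Kubernetes pod manifest. As a rough illustration only (not something shown in the meeting), a minimal pod of the kind a pod-native runtime like Hyper would treat as its atomic scheduling unit, sketched as a Python dict serialized to JSON against the Kubernetes v1 schema of mid-2015; the names and image are hypothetical:

```python
# Rough illustration: a minimal Kubernetes pod manifest built as a plain
# dict and serialized to JSON. Pod name and image are hypothetical.
import json

pod = {
    "apiVersion": "v1",
    "kind": "Pod",
    "metadata": {"name": "redis-demo"},
    "spec": {
        "containers": [{
            "name": "redis",
            "image": "redis:3.0",  # ordinary Docker image, no guest OS baked in
            "ports": [{"containerPort": 6379}],
        }]
    },
}

print(json.dumps(pod, indent=2))
```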
16:30:19 apuimedo: I think that's possible
16:30:20 (hosts running the hyper daemon in place of the docker one)
16:30:43 IIRC it should be. It seems the easiest way to integrate
16:30:44 xu_ so basically instead of running containers in standard VMs to address container security, run your Docker images in hyper VMs that are lightweight?
16:30:53 apuimedo: it is possible
16:31:02 exactly daneyon ;-)
16:31:09 ok, I'm going to move to an #agreed to approve the direction of https://blueprints.launchpad.net/magnum/+spec/hyperstack
16:31:16 anyone object to that?
16:31:33 or do you need more time to understand it first?
16:31:34 daneyon: yes, and that will eliminate the guest OS, so kernel+docker image, totally immutable infra
16:31:46 adrian_otto: we need to provide more details before approving
16:31:57 there are two stages of approval
16:32:06 xu_ I looked at hyper about 4 weeks ago and I like it. I have no practical experience yet, but I like the model
16:32:08 we have directional approval which asks for a proposal (spec)
16:32:17 same here
16:32:29 and we have design approval which accepts the proposal, and expects code to follow
16:32:41 A question
16:32:46 daneyon: thx, hyper is technically similar to Intel ClearContainers and MS HyperV containers.
16:32:51 Is Hyper going to be in-tree or a separate plugin?
16:32:51 I'm asking about directional approval, which I normally handle without asking first.
16:32:55 xu_ I agree with sdake_, write some code to demo basic functionality.
16:33:16 hongbin: no, this would be in-tree for Magnum
16:33:44 daneyon: sure
16:33:51 adrian_otto: I think I need more time to learn that
16:33:53 ok, so daneyon and sdake appear to be in support of directional approval
16:33:59 realistically the only way xu_ is going to get something that works reasonably well with kubernetes and swarm is to prototype it, not spec it
16:33:59 in-tree swarm flavor?
16:34:24 I am not sure why hyper should be a magnum plugin, sounds like it should be a kube plugin
16:34:30 but how we get from point a - z i'm not totally concerned about ;)
16:34:57 xu_: are you open to attempting a prototype first?
16:35:18 seems like hyper replaces the docker daemon, so not sure it fits with swarm
16:35:29 or any coe's
16:35:44 xu_ how does hyper do multi-host?
16:35:46 it needs to integrate with swarm
16:35:57 eghobo: hyper works better with neutron and cinder, which means that CaaS = magnum + neutron + cinder + hyper + BM
16:36:43 xu_ does it simply work better with neutron/cinder because containers are encapsulated in VMs?
16:36:57 daneyon: the same way as Docker does multi-host, and probably even better, due to the mature SDN solutions available in the hypervisor space
16:37:02 how about we start a mailing list thread about the architecture so we don't eat up our entire meeting agenda :)
16:37:03 so we definitely have interest. I suggest this. I'll put this back on the agenda for next week. In the mean time, let's open an ML thread to discuss the related questions by email with the team
16:37:19 good idea adrian_otto
16:37:23 let's get clarity on the value proposition
16:37:28 and then revisit this.
16:37:31 daneyon: yes, the nature of the hypervisor: strong isolation for a multi-tenant public cloud
16:37:35 wfm
16:37:44 you are welcome to write a prototype in the mean time and submit it as a review
16:37:48 xu_: if you could make a diagram of the proposed architecture showing how hyper fits with the other OpenStack components
16:37:53 it would be great
16:37:55 but don't expect us to merge it without consensus on the blueprint.
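On the idea floated above of Swarm orchestrating hosts that run the hyper daemon in place of the Docker daemon: a hedged sketch of what the client side could look like, assuming (the meeting did not confirm this) that hyperd exposes a Docker-compatible remote API behind a standalone Swarm manager. The endpoint is hypothetical, and docker-py's Client is the 2015-era class (renamed APIClient in later releases):

```python
# Hedged sketch of "swarm orchestrating hyper hosts" under the stated
# assumption that hyperd speaks a Docker-compatible remote API.
import docker

# Hypothetical endpoint; in a Magnum bay this would come from the bay's
# api_address attribute.
client = docker.Client(base_url="tcp://swarm-manager.example.com:2376")

# Swarm picks a node; on a hyper host the container would land inside a
# minimal VM rather than sharing the host kernel.
container = client.create_container(image="nginx:1.9", name="demo")
client.start(container["Id"])

print(client.info())  # shows the nodes the manager is scheduling across
```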
16:37:58 make sense?
16:38:09 +1 on writing a minimal prototype
16:38:18 +1
16:38:20 blueprint or prototype - whatever works
16:38:25 I think xu_ would learn a lot from a prototype
16:38:30 or one from hyper
16:38:32 whoever does the work
16:38:41 adrian_otto: it is quite a lot of changes to magnum, even for a prototype.
16:39:19 it's up to you if you work on that or not, but it may speed up a consensus
16:39:28 ok, will do
16:39:31 I'm happy to make the final call on this, but I want team input
16:39:39 so the ML thread is the first way we will get that
16:40:03 xu_: can you start that thread, or would you like my assistance with that?
16:40:46 adrian_otto: i'm new to openstack, would appreciate it if you could shed some light!
16:41:05 #action adrian_otto to assist xu_ with starting an ML thread about the Hyper blueprint for Magnum
16:41:12 ok, next subtopic for Blueprints
16:41:13 https://trello-attachments.s3.amazonaws.com/55545e127c7cbe0ec5b82f2b/1095x362/558bcbf7a1ab7aa4b4753b1232d3886f/IaaS_vs_CaaS.png overview arch
16:41:27 Essential Blueprint Updates
16:41:29 #link https://blueprints.launchpad.net/magnum/+spec/objects-from-bay Obtain the objects from the bay endpoint (sdake)
16:41:54 yar
16:41:58 sdake: any updates on this?
16:42:02 this is two milestones' worth of work
16:42:10 i was going to try to recruit tom to help ;)
16:42:16 but haven't got there yet
16:42:19 tom, interested in helping? :)
16:42:37 tcommann^^
16:42:51 tcammann: tcammann_ ?
16:43:00 sorry about the misspelling
16:43:01 sure
16:43:07 cool lets catch up offline
16:43:15 Cool, I'll revisit this one next week
16:43:18 next...
16:43:19 #link https://blueprints.launchpad.net/magnum/+spec/secure-kubernetes Secure the client/server communication between ReST client and ReST server (madhuri)
16:43:22 i think with both me and tom, it will take both l2 and l3 adrian_otto
16:43:39 madhuri is not with us today, is there anyone working on this who can make an update?
16:43:49 i have looked at the reviews
16:43:50 perhaps apmelton?
16:43:54 they look solid
16:43:59 not sure if they actually work or not
16:44:10 last info I got from her seemed to indicate it *does* work properly
16:44:14 adrian_otto: nope, I'll go through the reviews this week and check them out though
16:44:35 ok, I'll revisit this one next week.
16:44:38 Next one...
16:44:47 #link https://blueprints.launchpad.net/magnum/+spec/external-lb Support the kubernetes service external-load-balancer feature (sdake)
16:44:57 Tango: were you working on this one?
16:45:16 how did I get assigned to that?
16:45:21 Yes, I am
16:45:31 maybe I read it wrong? I can reassign it
16:45:39 update on this, Tango?
16:46:05 I am a bit stuck on getting the latest Kubernetes version to run on Fedora Atomic
16:46:08 oh, I misread the Assignee
16:46:13 sorry sdake!
16:46:26 Asked for help on the Google ML, got a few pointers
16:46:28 you were the registrant, not the assignee
16:46:46 Other people are also complaining about a similar problem
16:46:54 although on Red Hat
16:47:02 Tango: do we have what we need to proceed, or are we blocked on upstream work?
16:47:05 Tango: are you doing this on Atomic 22 or 21?
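For context on the magnum-as-a-ca and secure-kubernetes blueprints discussed above: generating the self-signed CA that would anchor a bay's TLS could look roughly like the following. This is a sketch assuming the Python `cryptography` package, which the blueprints do not prescribe, and it needs a reasonably recent release of that library:

```python
# Sketch only: a self-signed CA of the sort magnum-as-a-ca implies,
# using the `cryptography` package (an assumption, not the blueprint's
# mandated implementation).
import datetime

from cryptography import x509
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, u"bay-ca")])
now = datetime.datetime.utcnow()

ca_cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)                     # self-signed: issuer == subject
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(now)
    .not_valid_after(now + datetime.timedelta(days=365))
    .add_extension(x509.BasicConstraints(ca=True, path_length=None),
                   critical=True)          # marks this cert as a CA
    .sign(key, hashes.SHA256())
)
```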
16:47:16 Atomic 21
16:47:25 I am going to try Ubuntu
16:47:30 apmelton he is building with the latest version of kube tho
16:47:39 I'm not sure there
16:47:44 This is their version 0.19
16:47:54 was just going to say I know of multiple issues with Docker 1.6.0 in 22
16:48:22 that he may have been hitting via k8s
16:48:26 its likely a configuration change
16:48:27 Tango: maybe try Fedora, most kube examples target the Fedora distro
16:48:30 like some config options changed
16:48:34 between 0.15 and 0.19
16:48:40 and we need to set those new config options in the template
16:48:46 remember that mess apmelton
16:48:51 or hongbin
16:48:53 * apmelton shudders
16:48:55 There are a few options that got deprecated
16:48:57 this is what tango is up against here
16:49:21 but I may have missed other updates that may be required
16:49:45 ok, so considering the importance of this BP, do we have the right number of Stackers working on this, or do we need more help on this?
16:49:50 ^^ Tango
16:49:52 ya porting to the latest version of kube usually takes 2-3 engineers about 3-4 days of full-time work in the community
16:49:53 Tango: a bit later in the week I might be able to help ya poke around
16:50:24 Sure, it would be great to have more eyes on just getting k8s to work
16:50:28 Tango: feel free to ping me, if you need any help on this
16:50:31 They are changing so fast
16:50:39 ok, so team, please help out on this one if you can.
16:50:46 this is why I dont upgrade the images often ;-)
16:50:58 Growing pains :)
16:50:59 Tango: Just let me know if you need some help
16:51:01 because its a huge time sink keeping up with uberenetes
16:51:24 Sounds great, I will give more details on IRC
16:51:28 s/ubernetes/kubernetes/
16:51:37 heh
16:51:40 ubernetes lol :)
16:51:44 Ubernetes is actually a thing (federation for Kubernetes)
16:52:05 ok, I have one more BP to get an update on
16:52:07 interesting
16:52:14 #link https://blueprints.launchpad.net/magnum/+spec/secure-docker Secure client/server communication using TLS (apmelton)
16:52:32 I'll know more after poking around the reviews and the spec
16:52:57 ok, we can revisit that one next week
16:53:01 sounds good
16:53:06 #topic Open Discussion
16:53:54 Any idea when you will know about midcycle dates? I need to get approval to fly 5000 miles
16:54:18 sdake: did we already put the poll on the ML?
16:54:31 yes he did
16:54:48 ok, so let's follow up on that thread with a deadline for selecting the date
16:54:59 date for a date
16:55:21 i set a deadline
16:55:36 great, when is that?
16:55:46 from what I can tell ppl don't seem too interested in attending
16:55:55 I dont recall but its in the email post
16:56:00 sdake: that always happens
16:56:10 i think deadline today
16:56:13 ok, I'll be responding today
16:56:26 adrian_otto I'll hand off process to you ok?
16:56:29 and my availability is rather limited, so that may help us narrow the date range a bit
16:56:36 adrian_otto I have a midcycle I am already executing planning for
16:56:42 ok, I'll take that one
16:56:50 i just wanted to get it kicked off
16:57:02 #action adrian_otto to select a date for the Midcycle and announce to the team members
16:57:21 ok, we have just a couple of minutes left
16:57:46 tcammann: any agreement about heat templates?
16:57:53 sdake: on the subject of the liberty-1 release, should I tag what's in master right now?
16:57:56 oh yes! thanks eghobo
16:58:01 or is there a specific commit I should tag?
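The secure-docker blueprint linked above covers TLS for the Docker/Swarm API. As an illustration of the end state, a TLS-verifying client connection with docker-py of the period looks roughly like this; the file paths and hostname are hypothetical stand-ins for material a secured bay would hand out:

```python
# Illustration of the end state secure-docker targets: a TLS-verifying
# Docker API connection via docker-py. Paths and host are hypothetical.
import docker
import docker.tls

tls_config = docker.tls.TLSConfig(
    client_cert=("/etc/docker/cert.pem", "/etc/docker/key.pem"),
    ca_cert="/etc/docker/ca.pem",  # the CA that signed the server cert
    verify=True,                   # reject servers this CA did not sign
)

client = docker.Client(base_url="https://bay-master.example.com:2376",
                       tls=tls_config)
print(client.version())  # fails unless both sides present trusted certs
```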
16:58:19 adrian_otto I'll get to you with the hash at noonish
16:58:20 Calabasas training opp announcement: http://www.eventbrite.com/e/openstack-quick-start-all-day-community-training-tickets-17622811303
16:58:24 I have a dentist appointment soon
16:58:30 ok, tx.
16:58:39 hope your tooth is doing much better by now
16:58:44 me too
16:58:46 it is not in pain
16:58:48 thanks :)
16:58:54 it just needs to be sealed
16:59:02 anyway yikes lets not talk about it plz :)
16:59:06 heh
16:59:10 Can we put that as an action to discuss for next time, the future of heat-coe-templates
16:59:22 sdake heh ok
16:59:59 adrian_otto: ^
17:00:12 ok, thanks everyone. Our next meeting is Tuesday 2200 UTC on 2015-07-14. And our network subteam meeting is on Thursday 2015-07-16 at 1800 UTC.
17:00:23 thx ao
17:00:24 tcammann: got it, thanks!
17:00:26 ty
17:00:29 #endmeeting