16:00:19 <adrian_otto> #startmeeting containers
16:00:20 <openstack> Meeting started Tue Oct 11 16:00:19 2016 UTC and is due to finish in 60 minutes. The chair is adrian_otto. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:21 <adrian_otto> #link https://wiki.openstack.org/wiki/Meetings/Containers#Agenda_for_2016-10-11_1600_UTC Our Agenda
16:00:22 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:25 <openstack> The meeting name has been set to 'containers'
16:00:28 <adrian_otto> #topic Roll Call
16:00:29 <strigazi> o/ Spyros
16:00:30 <adrian_otto> Adrian Otto
16:00:37 <hongbin> o/
16:00:39 <tonanhngo> Ton Ngo
16:00:51 <eghobo> o/
16:01:20 <Drago> o/
16:01:37 <rpothier> Rob Pothier
16:02:07 <adrian_otto> hello strigazi hongbin tonanhngo eghobo Drago rpothier
16:02:16 <dane_leblanc> o/
16:02:42 <adrian_otto> hello dane_leblanc
16:03:14 <adrian_otto> ok, let's continue. Feel free to chime in at any time to be recorded in attendance.
16:03:15 <adrian_otto> #topic Announcements
16:03:22 <adrian_otto> 1) Reminder: There will be no team meeting on 2016-10-25 because that is the week of the OpenStack Summit in Barcelona.
16:03:31 <adrian_otto> 2) Please review our NodeGroup spec draft to prepare for our Summit discussion on this topic:
16:03:38 <adrian_otto> #link https://review.openstack.org/352734 [WIP] Add NodeGroup specification
16:04:09 <adrian_otto> We'd like to get as much (+core) team feedback prior to the summit as possible, so we can be very clear with our community about what to expect
16:04:14 <Drago> Will we be having a discussion on the NodeGroup spec in our meeting next week?
16:04:33 <adrian_otto> yes, and we can touch on it today as well, time permitting
16:04:34 <Drago> So we can do the fishbowl before the summit design session
16:04:47 <adrian_otto> is anyone prepared to have that discussion today, or do you need time to review it first?
16:05:04 <Drago> I have pushed another update to the spec, but it's mostly to move things around
16:05:32 <strigazi> I want to do one more detailed pass
16:05:37 <tonanhngo> I would need time
16:05:43 <adrian_otto> ok, we'll come prepared for a discussion next meeting.
16:06:06 <adrian_otto> any announcements from team members?
16:06:25 <adrian_otto> ok, advancing topics...
16:06:32 <adrian_otto> #topic Review Action Items
16:06:43 <adrian_otto> #action adrian_otto follow up with Kuryr PTL to arrange a joint session
16:07:00 <adrian_otto> Status: Pending. I expect to complete this today.
16:07:09 <adrian_otto> #topic Essential Blueprint Review
16:07:18 <adrian_otto> Support baremetal container clusters (strigazi)
16:07:24 <adrian_otto> #link https://blueprints.launchpad.net/magnum/+spec/magnum-baremetal-full-support
16:07:29 <adrian_otto> any update?
16:08:29 <adrian_otto> strigazi: you still there?
16:08:32 <strigazi> We are still testing the fixed_network work; after that we can have the same driver for vm and bm
16:09:07 <adrian_otto> so additional work is ongoing, correct?
16:09:19 <strigazi> yes
16:09:22 <adrian_otto> do you need any help from the team to move this forward?
16:10:03 <strigazi> In testing, it would help if you are willing to deploy devstack with ironic
16:10:12 <strigazi> and have a big enough host
16:10:27 <strigazi> In a VM it is pretty slow
16:10:31 <adrian_otto> what size should we recommend?
16:10:57 <strigazi> >=14 GB of RAM and >=8 cores
16:11:25 <adrian_otto> let's plan to put that in the operator guide
16:11:43 <strigazi> Ok, I'll do it this week
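For reference, a devstack deployment of the kind strigazi describes would typically be driven by a local.conf that enables the ironic devstack plugin alongside magnum. The sketch below is illustrative only: the plugin URLs, emulated node count, and per-node sizes are assumptions; only the overall >=14 GB / >=8 core host recommendation comes from the discussion above.

    [[local|localrc]]
    # Emulate baremetal nodes with local VMs via the ironic devstack plugin
    enable_plugin ironic https://git.openstack.org/openstack/ironic
    IRONIC_VM_COUNT=2
    IRONIC_VM_SPECS_RAM=4096
    IRONIC_VM_SPECS_CPU=2

    # Magnum itself
    enable_plugin magnum https://git.openstack.org/openstack/magnum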
16:11:46 <adrian_otto> any remarks on this work from the team?
16:11:57 <strigazi> I have a concern
16:11:59 <adrian_otto> ok
16:12:36 <strigazi> about having this as essential, is it really? For us it is a 6-month plan
16:12:45 <strigazi> we don't have ironic yet
16:13:47 <strigazi> Do you mind moving it to high?
16:13:52 <adrian_otto> Is there any objection to downgrading it? I think hongbin set it with this priority because we felt it was important to deliver BM support in N.
16:14:13 <hongbin> i have no problem with changing the priority
16:14:25 <adrian_otto> considering Newton is out now, it makes sense to relax this.
16:14:26 <strigazi> I expect to finish during November
16:14:42 <adrian_otto> we did make progress on this for Newton, enough that it's still noteworthy.
16:14:53 <adrian_otto> let's adjust it right now, 1 sec.
16:14:57 <strigazi> ok, thanks
16:15:20 <adrian_otto> Done. I will strike it from the next agenda.
16:15:41 <adrian_otto> #action adrian_otto to remove BM Blueprint from Essential BP Review on team agenda
16:15:43 <strigazi> thanks
16:15:50 <adrian_otto> next one...
16:15:58 <adrian_otto> #link https://blueprints.launchpad.net/magnum/+spec/user-guide Magnum User Guide for Cloud Operator (tango)
16:16:14 <tonanhngo> I don't have any update.
16:16:19 <adrian_otto> tonanhngo: should this also be reset to High?
16:16:27 <tonanhngo> That would be good also
16:16:28 <adrian_otto> the same logic applies here
16:16:33 <adrian_otto> ok, doing it...
16:16:39 <tonanhngo> Thanks
16:16:52 <adrian_otto> oh, it already is.
16:17:04 <adrian_otto> #action adrian_otto to remove Docs Blueprint from Essential BP Review on team agenda
16:17:22 <adrian_otto> ... drumroll please ...
16:17:34 <adrian_otto> #link https://blueprints.launchpad.net/magnum/+spec/bay-drivers COE Bay Drivers (muralia)
16:17:37 <adrian_otto> Status: COMPLETE
16:17:39 <strigazi> We should define the essential bps at the summit
16:17:44 <adrian_otto> the code merged yesterday!
16:17:49 <Drago> I have muralia's update
16:17:58 <adrian_otto> thanks Drago
16:18:02 <Drago> "The only update I have is that my patch got merged"
16:18:06 <Drago> (there's one more update)
16:18:17 <Drago> "I need one more patch to address 2 things: release notes and docs, and to move the 'cluster delete' method to the drivers."
16:18:19 <adrian_otto> hah!
16:18:21 <Drago> That's it
16:18:32 <adrian_otto> excellent.
16:18:55 <adrian_otto> thanks to all the contributors who helped with making and reviewing that contribution.
16:19:05 <adrian_otto> it's nice to see it completed.
16:19:10 <adrian_otto> (almost)
16:19:21 <adrian_otto> any other remarks on this BP?
16:19:41 <adrian_otto> #topic Other Blueprints/Bugs/Reviews/Ideas
16:19:47 <adrian_otto> Kuryr Integration Update (tango)
16:19:53 <adrian_otto> tonanhngo: anything to share?
16:20:03 <tonanhngo> I attended the meeting yesterday
16:20:14 <tonanhngo> Mostly the team continues with the implementation
16:20:33 <adrian_otto> what was the outcome on the consideration of our security concern?
16:20:47 <adrian_otto> I got the sense that it triggered a reset of sorts
16:20:56 <tonanhngo> There was some discussion on the nested VM implementation, but no new development on the driver regarding the security concern
16:21:24 <tonanhngo> Yes, they were still storing a credential for the agent to talk to the driver
16:21:25 <adrian_otto> ok
16:21:36 <hongbin> for this, i can briefly talk about that
16:21:52 <hongbin> i submitted a bp for the kuryr team to split the agent into two
16:21:52 <adrian_otto> thanks hongbin
16:22:13 <hongbin> the first agent is for interfacing with neutron, the second agent is for doing the port binding
16:22:19 <adrian_otto> can we share the link to that BP?
16:22:27 <hongbin> one second
16:22:50 <hongbin> #link https://blueprints.launchpad.net/kuryr/+spec/split-libnetwork-agent
16:23:02 <adrian_otto> tx!
16:23:09 <hongbin> the goal is to have one agent deployed on the master node, another deployed on the worker node
16:23:27 <hongbin> the credential will be stored on the master node, and used by the master agent to talk to neutron
16:23:29 <adrian_otto> good idea, hongbin
16:23:40 <hongbin> this will relieve the credential concern a little
16:23:54 <hongbin> that is all from me
16:23:59 <tonanhngo> so there is no credential on the worker node?
16:24:00 <adrian_otto> did we put any thought into what would control access to the remote portion of that agent?
16:24:07 <hongbin> tonanhngo: no
16:24:22 <hongbin> tonanhngo: the worker node will do the port binding purely
16:24:30 <tonanhngo> The master node is still a user VM though?
16:24:45 <hongbin> adrian_otto: not so far
16:24:52 <hongbin> adrian_otto: but will get into that
16:25:10 <hongbin> tonanhngo: yes
16:25:10 <adrian_otto> ok, that's promising progress. Thanks so much for the update and your contribution.
16:25:27 <adrian_otto> any other remarks on the topic of Kuryr?
16:25:48 <adrian_otto> ok, advancing...
16:25:52 <adrian_otto> magnum-specs repository (strigazi)
16:26:18 <strigazi> I'd like to add a magnum-specs repo to publish our specs here: http://specs.openstack.org/
16:26:19 <adrian_otto> anything to cover on this topic this week, or is this an artifact from last time?
16:26:58 <adrian_otto> great, I love that idea.
16:27:04 <strigazi> It's from last time, do you have an objection?
16:27:31 <adrian_otto> does the team have any concerns about proceeding with this?
16:27:37 <Drago> +1
16:27:39 <hongbin> i think this is a good idea as well
16:27:43 <tonanhngo> +1
16:27:47 <vijendar> +1
16:28:00 <strigazi> ok
16:28:08 <adrian_otto> sounds like we are all-in!
16:28:13 <strigazi> #action strigazi to create a magnum-specs repo
16:28:26 <strigazi> or must you do it?
16:28:40 <strigazi> I mean the action
16:28:41 <adrian_otto> Give it a try
16:28:50 <Drago> he did
16:28:52 <adrian_otto> LMK if you need anything from me to proceed
16:28:57 <Drago> oh
16:29:01 <adrian_otto> #action strigazi to create a magnum-specs repo
16:29:16 <adrian_otto> It would be nice if meetbot would ACK actions.
16:29:16 <strigazi> anyway, I'll do it
16:29:20 <adrian_otto> anyway, it's recorded.
16:29:42 <adrian_otto> any other discussion before we advance the agenda?
16:29:50 <strigazi> Since we have some time, I started to draft this bp https://blueprints.launchpad.net/magnum/+spec/cluster-driver-upgrades
16:30:20 <strigazi> It is incomplete but I want to add as much content as possible before the summit
16:30:36 <adrian_otto> good
16:30:52 <adrian_otto> directionally approved
16:30:53 <hongbin> strigazi: so the bp proposed to use software config to do the upgrade?
16:31:33 <strigazi> Yes, but if we decide to have a magnum agent we can change that
16:31:35 <Drago> strigazi: Have you started a spec yet?
16:31:54 <strigazi> Yes, but not pushed, I'll push tomorrow
16:31:57 <hongbin> strigazi: then the bp needs to be revised to avoid the restriction of using software config
16:32:32 <strigazi> That's why it is a draft. This bp will introduce two things:
16:32:39 <Drago> I think that which agent is used is a more minor detail in how upgrades are achieved
16:32:41 <strigazi> How to version drivers
16:32:51 <strigazi> and how to actually do the upgrades
16:33:01 <hongbin> Drago: agree
16:33:11 <adrian_otto> +1
16:33:25 <hongbin> strigazi: got that. my concern is the implementation details are over-specified in the bp
16:33:54 <strigazi> We'll iterate on that
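As a rough illustration of the software-config approach hongbin asks about, an upgrade step could be modelled with Heat's SoftwareConfig/SoftwareDeployment resources along the lines sketched below. The parameter names and the script body are hypothetical placeholders, not the blueprint's actual interface; if a magnum agent is introduced later, only the delivery mechanism would change, which matches Drago's point that the agent choice is a minor detail.

    heat_template_version: 2014-10-16

    parameters:
      server_id:
        type: string
      kube_version:
        type: string

    resources:
      upgrade_config:
        type: OS::Heat::SoftwareConfig
        properties:
          group: script
          inputs:
            - name: kube_version
          config: |
            #!/bin/bash
            # Placeholder: fetch the requested COE component version and restart services
            echo "upgrading node to ${kube_version}"

      upgrade_deployment:
        type: OS::Heat::SoftwareDeployment
        properties:
          config: {get_resource: upgrade_config}
          server: {get_param: server_id}
          input_values:
            kube_version: {get_param: kube_version}
          actions: [CREATE, UPDATE]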
16:33:56 <adrian_otto> strigazi: ok, we should make sure that's in the Summit Session Planning etherpad, which we will get to in a moment.
16:34:08 <adrian_otto> but before we get to that...
16:34:19 <adrian_otto> #topic Auth Plugin Discussion
16:34:34 <adrian_otto> vijendar has been working on this. The question is: Should the binary for the Swarm auth plugin be baked into our cluster node image, or downloaded separately?
16:34:41 <hongbin> for this topic, i have a big concern about the direction
16:35:03 <vijendar> https://blueprints.launchpad.net/magnum/+spec/docker-authz-plugin
16:35:09 <adrian_otto> #link https://review.openstack.org/383061 (WIP) Docker auth plugin to prevent deletion of infra containers
16:35:14 <hongbin> i have gone through the bp, and i'm not sure if it is part of the magnum mission to maintain such a plugin
16:35:40 <hongbin> 1. the plugin is not written in python
16:35:45 <adrian_otto> well, let's make it clear what this is for.
16:36:05 <adrian_otto> the purpose of this is to prevent users from accidentally deleting portions of their COE
16:36:19 <adrian_otto> which is a feature for cloud operators who run Magnum
16:36:33 <hongbin> is that specific to magnum?
16:36:40 <adrian_otto> the plugin could certainly be written in python if there was a good reason for that.
16:36:44 <hongbin> it sounds like a general purpose plugin to me
16:37:07 <adrian_otto> yes, that's why I asked for it to be part of the contrib tree, and not in our main code base.
16:37:38 <hongbin> ok, if it will go to the contrib tree, then it is fine
16:37:55 <strigazi> it definitely must be in our repo
16:37:55 <adrian_otto> but I do understand your concern, hongbin. If there is a better approach, what do you suggest? We don't really care where it lives as long as we integrate the result.
16:38:31 <adrian_otto> but whether or not the plugin is in our repo, the hook to add it to Magnum clusters should be
16:38:49 <Drago> Unless it's some sort of OpenStack-wide mandate, I do not see how the language used matters.
16:38:52 <hongbin> adrian_otto: i would argue that it should go to some docker upstream repo or other teams. if it needs to go into magnum, it should be in the contrib tree
16:38:53 <adrian_otto> vijendar: I think this is contemplated as an optional feature, right?
16:39:14 <vijendar> adrian_otto: the current change does not make it optional
16:39:28 <vijendar> adrian_otto: but we can make it optional
16:39:31 <hongbin> Drago: OpenStack said to limit the usage of programming languages such as golang
16:40:13 <hongbin> well, it looks like this discussion should go into the summit
16:40:29 <adrian_otto> hongbin: that decision was about OpenStack API services. What we are after here is a completely different scope.
16:40:52 <adrian_otto> that would be like saying storage device drivers must be written in python to be available in Cinder. It's apples/oranges.
16:40:53 <hongbin> adrian_otto: could you elaborate?
16:41:23 <adrian_otto> there is an abstraction interface that allows docker plugins to be done in any language.
16:41:43 <adrian_otto> and if you look at the plugin, what it does is completely trivial
16:41:53 <hongbin> adrian_otto: yes, got that. but if it is not written in python, it will cause problems to deliver and package that plugin i guess
16:41:55 <adrian_otto> there is no possible way it would ever use any component of oslo, for example
16:42:19 <adrian_otto> and that's really what the OpenStack TC is concerned about
16:42:41 <strigazi> Can we verify from the TC that it is ok to have it in go?
16:42:59 <adrian_otto> if we feel that's necessary
16:43:00 <strigazi> for sure it is completely different from oslo
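To give a sense of how small such a plugin is, here is a minimal Go sketch of a Docker authorization plugin that denies deletion of containers carrying a hypothetical "infra-" name prefix. The socket path and the naming convention are assumptions for illustration, not what the patch under review actually does; the endpoint names and JSON fields follow the Docker authorization plugin protocol.

    package main

    import (
        "encoding/json"
        "net"
        "net/http"
        "strings"
    )

    // Fields follow the Docker authorization plugin request/response protocol.
    type authZRequest struct {
        User          string `json:"User"`
        RequestMethod string `json:"RequestMethod"`
        RequestURI    string `json:"RequestUri"`
    }

    type authZResponse struct {
        Allow bool   `json:"Allow"`
        Msg   string `json:"Msg,omitempty"`
    }

    // authZReq decides whether an incoming Docker API call is allowed.
    func authZReq(w http.ResponseWriter, r *http.Request) {
        var req authZRequest
        if err := json.NewDecoder(r.Body).Decode(&req); err != nil {
            json.NewEncoder(w).Encode(authZResponse{Allow: false, Msg: "bad request"})
            return
        }
        res := authZResponse{Allow: true}
        // Hypothetical policy: block DELETE calls against containers whose
        // name starts with the assumed "infra-" prefix.
        if req.RequestMethod == "DELETE" && strings.Contains(req.RequestURI, "/containers/infra-") {
            res = authZResponse{Allow: false, Msg: "deleting infra containers is not permitted"}
        }
        json.NewEncoder(w).Encode(res)
    }

    func main() {
        mux := http.NewServeMux()
        // Activation handshake: advertise the authz capability to the daemon.
        mux.HandleFunc("/Plugin.Activate", func(w http.ResponseWriter, r *http.Request) {
            json.NewEncoder(w).Encode(map[string][]string{"Implements": {"authz"}})
        })
        mux.HandleFunc("/AuthZPlugin.AuthZReq", authZReq)
        // Response phase: no filtering of daemon responses in this sketch.
        mux.HandleFunc("/AuthZPlugin.AuthZRes", func(w http.ResponseWriter, r *http.Request) {
            json.NewEncoder(w).Encode(authZResponse{Allow: true})
        })

        // Docker discovers plugins through sockets in this directory (assumed path).
        l, err := net.Listen("unix", "/run/docker/plugins/docker-authz-plugin.sock")
        if err != nil {
            panic(err)
        }
        http.Serve(l, mux)
    }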
16:43:31 <adrian_otto> let's just think about the lowest-friction approach to getting the desired outcome... we want Magnum clusters that users can't accidentally destroy through normal use.
16:43:55 <adrian_otto> the patch in question is one approach to solving that.
16:44:09 <hongbin> i think this is a good feature, but i would argue that this should be an optional feature
16:44:31 <adrian_otto> yes, we can certainly have a follow-up patch to add it as optional.
16:44:49 <vijendar> sure, we can make it an optional feature
16:44:52 <hongbin> by default, users should get a standard coe cluster without any plugins (other than the official ones) that might surprise them
16:44:57 <adrian_otto> (although I honestly can't imagine a cloud operator deciding to turn that off)
16:45:14 <adrian_otto> hongbin: good point.
16:45:40 <strigazi> One more thing to think about is how to deliver the code
16:45:49 <strigazi> I mean the binary
16:45:57 <vijendar> but at the same time, users could accidentally delete their infra containers if we don't enable this plugin
16:46:22 <hongbin> then, the operators will enable the plugin if they care about this
16:46:25 <vijendar> strigazi: currently I am thinking about the following options
16:46:27 <vijendar> Plugin installation options:
16:46:27 <vijendar> 1. Pre-install the docker authorization plugin on the image
16:46:27 <vijendar> 2. Download the authorization plugin on cluster creation and install
16:46:27 <vijendar> 3. Download the authorization plugin source code on cluster creation and then build and install
16:46:55 <strigazi> We can have a look at what we can do on infra
16:47:12 <strigazi> Can you bake a script to build the binary?
16:47:24 <vijendar> strigazi: I think we can
16:47:33 <strigazi> Even in a container
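A script that builds the binary in a container, as strigazi suggests, could be as simple as the sketch below; the image tag, paths, and output name are hypothetical placeholders.

    #!/bin/bash
    # Build a static plugin binary inside a golang container so the image
    # build host does not need a Go toolchain installed.
    docker run --rm \
        -e CGO_ENABLED=0 \
        -v "$(pwd)":/go/src/docker-authz-plugin \
        -w /go/src/docker-authz-plugin \
        golang:1.7 \
        go build -o docker-authz-plugin .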
16:47:40 <Drago> Since this is a driver-level feature, it could be optional in the sense that if an operator didn't want it, they could fork the driver and remove it
16:47:58 <strigazi> true
16:48:19 <Drago> sorry, @hongbin
16:48:19 <strigazi> but it depends on how we implement it, whether it is easy to remove
16:48:20 <adrian_otto> ok, so let's look at the three options vijendar mentioned above
16:48:36 <hongbin> Drago: well, that is your point. i respect that
16:48:38 <adrian_otto> today we use the #1 approach for basically everything
16:48:49 <adrian_otto> so we could do it that way to be consistent.
16:49:11 <hongbin> i would still argue that the operators should add the driver instead of removing it
16:49:23 <adrian_otto> alternatively, we could use one of the other approaches which could simplify image creation a bit.
16:50:36 <adrian_otto> one disadvantage of downloading a binary is that it may not be as safe as baking it into the image
16:50:49 <adrian_otto> it makes it harder to audit as well
16:51:18 <strigazi> If I have a straightforward way to build the binary I can put it in the infra build easily
16:51:38 <adrian_otto> strigazi: that's my preference, but I wanted to consider wider team input as well.
16:51:52 <adrian_otto> hongbin: I have noted your objection, and we will revisit this as a team.
16:51:55 <vijendar> strigazi: I can work on it
16:52:06 <strigazi> thanks
16:52:22 <hongbin> adrian_otto: yes, would like to discuss this further with the team as well
16:52:24 <vijendar> strigazi: I mean, I can work on the script to build the binary
16:52:32 <adrian_otto> ok, one more thing before open discussion, as we are running low on time...
16:52:46 <strigazi> vijendar, yeap, got it
16:52:56 <adrian_otto> #topic Summit Session Planning Check-up
16:53:01 <adrian_otto> #link https://etherpad.openstack.org/p/magnum-ocata-summit-topics Summit Planning
16:53:19 <adrian_otto> please make sure all topics you want to cover at the Summit are listed on this etherpad
16:53:25 <adrian_otto> I'm going to match them to slots this week.
16:53:37 <adrian_otto> #topic Open Discussion
16:54:29 <tonanhngo> Just want to mention that we have Magnum Newton running on a 100-node OpenStack
16:54:46 <tonanhngo> Big thanks to the Rackspace team for helping to build the environment
16:55:15 <adrian_otto> strigazi: please check that cluster driver upgrades are adequately represented in the above etherpad.
16:55:27 <strigazi> ok
16:55:33 <adrian_otto> tonanhngo: maybe explain a bit about why we did it
16:56:12 <tonanhngo> The CNCF Lab provides us with a large cluster to do a scalability study
16:56:27 <tonanhngo> This is similar to the OSIC lab
16:57:02 <tonanhngo> Their interest is mainly in Kubernetes, but they help us because of the common interest
16:57:34 <tonanhngo> So in the past 2 weeks, we have been building OpenStack Newton on this environment, from scratch
16:58:05 <tonanhngo> We looked at using Kolla, OpenStack-Ansible, and other tools
16:58:23 <tonanhngo> but in the end, we used OpenStack-Ansible because it was ready for Newton
16:58:49 <tonanhngo> The Rackspace team (Adrian, Chris, Drago) has expertise on this, so they stepped in and helped
16:59:14 <tonanhngo> Now we are ready to run a collection of benchmarks, including the Rally plugin for Magnum
16:59:37 <tonanhngo> We will share the results at the Summit, both in a talk and in a design session
16:59:51 <tonanhngo> That's the story :)
17:00:53 <adrian_otto> thanks everyone
17:01:01 <adrian_otto> see you next week at 1600 UTC!
17:01:04 <adrian_otto> #endmeeting