16:59:24 #startmeeting containers
16:59:25 Meeting started Thu Jun 21 16:59:24 2018 UTC and is due to finish in 60 minutes. The chair is strigazi. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:59:26 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:59:28 The meeting name has been set to 'containers'
16:59:31 #topic Roll Call
16:59:36 o/
16:59:38 o/
16:59:43 o/
17:00:22 o/
17:00:53 Thanks for joining the meeting imdigitaljim cbrumm flwang1
17:00:57 #topic Announcements
17:01:14 o/
17:01:26 o/
17:02:14 We can have this meeting at this time for 3-4 weeks to see if it works for people; then we can decide whether to have two meetings, alternate, or move to this hour on Thursdays or Tuesdays.
17:02:32 hello jslater and ARussell
17:02:49 #topic Blueprints/Bugs/Ideas
17:02:58 jslater: ARussell: are you also from blizzard?
17:03:16 Yes, they both are.
17:03:24 correct
17:03:33 wow, big team here
17:03:34 Jim and jslater are two of my engineers. ARussell is my PM.
17:03:46 and colin who just joined
17:03:49 Usually in this section of the meeting we report what we are working on and ask for help/feedback.
17:04:01 cbrumm: excellent :)
17:04:17 sorry for being late
17:04:28 colin-: no worries
17:04:49 we're working on changes to Magnum to enable the cloud-controller-manager to work with OpenStack.
17:04:55 good to see all you folks
17:05:14 imdigitaljim: cool, we have a story, just a moment
17:05:28 so far we've had to add kube-proxy back to the master; it should maybe be optional rather than simply removed.
17:05:53 #link https://storyboard.openstack.org/#!/story/1762743
17:06:45 imdigitaljim: yes, we went that direction because for almost two years we didn't have any use case for kube-proxy and kubelet on the master nodes.
17:06:57 We can put them back.
17:07:30 we've also explored the idea of adding another k8s feature gate (via labels) to have post-cluster-creation scripts/manifests for more customization.
17:08:23 Add extra software deployments?
17:08:46 imdigitaljim: gerrit gate?
17:09:08 we have a number of different customers with varying requirements, and adding it all to magnum would be inefficient/ineffective.
17:09:14 We investigated this option for some time but we only had software configs available.
17:09:39 imdigitaljim: so at CERN, we do this already, but we patch magnum.
17:10:01 is that patch in your public gitlab?
17:10:04 we add extra labels to enable/disable CERN-specific features.
17:10:11 strigazi: if we can host those scripts/yaml somewhere (on github), this could be doable?
17:10:12 but simply being able to do --labels extra_manifests="mycode.sh"
17:10:23 can you see this? https://gitlab.cern.ch/cloud-infrastructure/magnum
17:10:38 yeah we have access here
17:10:40 Yes, thank you
17:11:15 imdigitaljim: not like this, but we want what you just mentioned too.
17:11:57 yeah, we're not set on that design, just on the idea of having the feature.
17:12:04 for example we added cephfs this week https://gitlab.cern.ch/cloud-infrastructure/magnum/commit/42d35211c540c9b86eb91fd1c042505dbddbfcef
17:12:44 why not have a label that links to a single file with additional parameters to source?
17:12:47 imdigitaljim: flwang1: I think the extra script is better saved in the magnum DB or as a config option.
17:13:06 so you can add n parameters on a single label
17:13:56 strigazi: i also realized that using labels for new feature config is hard to manage sometimes.
17:14:10 imdigitaljim: we can have a generic field, yes
17:14:16 how has ceph been doing since you added it? curious about reliability of attaches/detaches/unexpected losses
17:14:18 one more json
17:14:41 colin-: scale tests are planned for july
17:15:10 we will basically try to crash our testing cephfs :)
17:16:10 We just finished the implementation of cephfs-csi, so we can't tell for cephfs yet. Ceph, however, is going very well.
17:16:34 does anyone here use octavia with magnum currently?
17:16:41 flwang1: ^^
17:16:56 good to hear, we have a storage team that might be looking at ceph in the future
17:16:59 we don't have octavia
17:17:22 we've got plans to hit it (hopefully by the end of july ;D)
17:18:10 since it has support in the cloud controller already, I'm not sure there will be much magnum work for that
17:18:22 so we have 10 PB out of 15 PB in use on two ceph clusters
17:18:29 might have to just add a switch for octavia vs neutron
17:19:01 strigazi: we're using octavia
17:19:10 we just released it yesterday in one region
17:19:10 curious, is CERN using another LoadBalancer provider integrated with the in-tree Kubernetes/OpenStack libraries? Perhaps a hardware device+driver?
17:19:11 flwang1: good to know :)
17:19:22 now we're going to upgrade heat, and then deploy magnum
17:19:24 flwang1: i might need to pick your brain later
17:19:41 colin-: we use traefik and an in-house DNS lbaas
17:19:57 strigazi: got it, thanks
17:20:02 colin-: so some nodes are labeled to run traefik
17:20:10 and these nodes use the same alias
17:20:19 what are the plans to upgrade the heat templates in magnum to something like queens and do some refactoring?
17:20:24 the alias is created manually, not very dynamic
17:20:34 sounds interesting :)
17:21:17 imdigitaljim: what do you mean by upgrade, what changes do you need?
17:21:41 we could use later heat template versions and the new features that come with them
17:21:47 to clean up some workflows
17:22:04 https://docs.openstack.org/heat/latest/template_guide/hot_spec.html
17:23:01 let's start the discussion to see which version works
17:23:47 in queens only filter is new?
17:23:55 mostly right now we use Juno
17:23:56 2014-10-16
17:24:05 so maybe we shouldn't be 7 versions behind
17:24:25 there's a lot more than filter ;)
17:24:40 The minimum we should go to is Newton, which is not EOL.
17:24:58 i think newton is EOL now
17:25:12 ok, I see the diff
17:25:53 can we have an etherpad to track these ideas?
17:25:55 pike has quite a lot of things
17:26:03 i was hoping pike
17:26:12 so that we can plan this work in Stein
17:26:27 (pike at minimum)
17:26:29 flwang1: we have the log of the meeting, we can add stories
17:26:33 for queens+
17:26:38 ok, fair enough
17:27:03 #action add story to upgrade the heat-template version to pike or queens
17:27:42 #action add story for post-create deployments
17:28:11 we have a story for the cloud-provider already
17:28:13 thanks strigazi
17:28:14 kube-proxy/kubelet optionally included for the master
17:28:15 ?
17:28:34 imdigitaljim: let's add them back
17:28:36 imdigitaljim: if you're using calico, we have already added kubelet on the master
17:28:39 flwang1: thoughts?
17:28:51 strigazi: i'm OK with that ;)
17:29:02 all CNIs should be separated from these
17:29:03 imdigitaljim: what's the network driver you're using btw?
17:29:17 we use calico so we didn't have the kubelet problem
17:29:25 Calico
17:29:31 #action add story to put kubelet/kube-proxy back on the master nodes
17:29:49 imdigitaljim: good to know you're using calico :D
17:29:56 https://review.openstack.org/#/c/576623/
17:30:12 strigazi: that was where the confusion came from here
17:30:26 we should generally decouple the master components from the CNI
17:31:28 flannel doesn't need the proxy on the master, but if the master node wants to ping a service it is required
17:32:06 strigazi: so let's get them back?
17:32:13 We can have a script per CNI, flwang1 started it
17:32:17 flwang1: yes
17:32:20 yeah that sounds great
17:32:27 I'll add the story
17:32:45 strigazi: then my next question is, should it be backported to queens?
17:33:17 given we removed them in queens
17:33:45 flwang1: we can
17:33:56 We can ask on the ML
17:34:06 but it also depends on what we need
17:34:27 cern, catalyst, blizzard, other users
17:34:54 I think it's good, the use cases for it have increased quite a lot
17:35:53 cool cool
17:35:53 sidenote: we also have someone here working on magnum support for gophercloud
17:36:00 cern doesn't have a big stake in it, we cherry-pick what we need anyway
17:36:16 imdigitaljim: \o/
17:36:21 imdigitaljim: i already have a patch for gophercloud to support magnum
17:36:22 imdigitaljim: to add the client?
17:36:26 just doing some testing now
17:36:32 strigazi: yes
17:36:42 so that users can use terraform to talk to the magnum API
17:36:47 all I hear is autoscaling
17:37:04 and i'm also working on supporting magnum in ansible/shade
17:37:07 imdigitaljim: ^
17:37:17 flwang1: I'll make sure our guy contacts you about gophercloud
17:37:42 cbrumm: no problem
17:37:43 please cc me on github
17:37:48 ansible/shade is good, we will likely need terraform too
17:38:05 strigazi: for what? the gophercloud work?
17:38:16 yes, do you have PRs?
17:38:23 not yet, only in my local copy
17:38:24 yeah let me know and i can connect you with our engineer on gophercloud
17:38:44 i will probably start submitting today or early next week
17:39:08 good to know, we don't want to duplicate effort
17:39:49 #link magnum support in gophercloud https://github.com/gophercloud/gophercloud/issues/1003
17:40:31 the user JackKuei on that link is working with us
17:40:33 Yeah, JackKuei is our guy at the bottom of that
17:40:53 flwang1: for your Calico deployment, are you using the iptables or ipvs method?
17:41:02 flwang1: you talked to him already
17:41:25 imdigitaljim: nice, i will sync up with him
17:41:35 oh, just a curious question: do you use systemd or cgroupfs, and why?
17:41:52 flwang1: ++ magnum support in ansible/shade
17:41:57 related here
17:41:58 https://review.openstack.org/#/c/571583/
17:42:06 mordred: hah, good to see you here
17:42:11 * mordred lurks everywhere
17:42:11 imdigitaljim: systemd, because it is the default for docker in fedora
17:42:25 mordred: i'm still waiting for your answer about the upgrade
17:42:45 colin-: can we talk offline? i need to check the config
17:42:48 imdigitaljim: we will move to cgroupfs with this option
17:42:50 for sure
17:43:23 i haven't heard from brtknr but i'd like to get this pushed through
17:43:24 flwang1: oh - I responded in channel this morning ... but maybe the irc gods were not friendly to us
17:43:57 flwang1: http://eavesdrop.openstack.org/irclogs/%23openstack-sdks/latest.log.html#t2018-06-21T11:42:24
17:43:59 mordred: oh, ok, i'm on my laptop now, i will check on my workstation later
17:44:03 thank you!
17:44:05 flwang1: sweet
17:44:11 strigazi: would i be able to complete this story on his behalf? https://review.openstack.org/#/c/571583/
17:44:27 mordred: i will start the coding work soon, pls review it
17:45:05 I don't think brtknr will mind if you push a patchset; I tested PS3
17:45:06 imdigitaljim: i would suggest holding off a bit because i think brtknr is still active
17:45:22 you can leave a comment in gerrit
17:45:58 the patch is in a good state
17:46:11 flwang1: sounds good
17:47:04 the patch would technically break ironic
17:47:26 i was just going to add a compatibility portion to it
17:47:43 Do you use it? the ironic driver?
17:48:01 no, i just didn't know if that was a concern
17:48:12 We use the fedora-atomic one with ironic
17:48:13 or would/could we consider reducing some of our support for other drivers?
17:48:18 colin-: re your calico question, we're using iptables
17:48:31 We want to remove the ironic driver
17:48:33 colin-: IIRC, ipvs is still in beta?
17:48:41 strigazi: +1
17:49:25 folks, let's start wrapping up and plan for next week
17:49:48 flwang1: good to know, thanks for checking!
17:49:56 Thank you all for having this meeting at this time
17:50:11 well, great meeting! thanks for accommodating us with this meeting, we really appreciate it
17:50:23 mordred: read your comments, sounds like a good idea, i will go in that direction, thanks a lot
17:50:26 cbrumm: thank you for joining, it was great to see you all
17:50:52 nice chatting with you folks, will idle here more often
17:50:55 nice to be back on irc
17:51:04 colin-: :)
17:51:06 good to see you guys
17:51:18 it's 5:51 AM here, very dark
17:51:30 waves from NZ
17:51:33 flwang1: woot!
17:51:34 o/
17:51:56 enjoy your friday flwang1
17:52:02 flwang1: it is also probably a better idea to focus on the copy of the shade code in the openstacksdk repo ...
17:52:03 flwang1 is a hero
17:52:30 strigazi: i need a medal
17:52:33 flwang1: the ansible modules use it now - and I'm hoping to work soon on making shade use the code in sdk - so less churn on your part
17:52:36 * mordred hands flwang1 a medal
17:52:45 ok, I think it is better to summarize the meeting in an email
17:52:58 there's quite some content
17:53:19 mordred: so do you mean implementing the code in openstacksdk instead of in shade? sorry if it's a newbie question
17:53:36 flwang1: definitely, I can bring you a CERN helmet in Berlin :)
17:54:27 strigazi: logged
17:54:31 and beer
17:55:04 blizzard folks, you should pass by here and pick up yours ;)
17:55:22 :D
17:55:37 flwang1: German beer
17:55:37 can I get a logo of blizzard?
17:55:46 last action:
17:56:05 #action strigazi to summarize the meeting in a ML mail
17:57:11 flwang1: you mean like this? https://bit.ly/2MdWCOn
17:57:33 thank you! haha
17:58:03 Let's end the meeting then :) Thank you all!
17:58:28 #endmeeting
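
For context on the gophercloud thread above (#link https://github.com/gophercloud/gophercloud/issues/1003): below is a minimal sketch of how a Go client (e.g. one backing terraform) could create a magnum cluster through gophercloud's containerinfra v1 clusters package. The package and field names reflect what later merged upstream; the cluster template UUID and keypair name are placeholders, and this is not the specific patch flwang1 or JackKuei had in flight at the time of the meeting.

```go
// Sketch: create a Magnum cluster via gophercloud's containerinfra v1 API
// and read back its status. Assumes the usual OS_* environment variables
// are set for authentication; template UUID and keypair are placeholders.
package main

import (
	"fmt"
	"log"

	"github.com/gophercloud/gophercloud"
	"github.com/gophercloud/gophercloud/openstack"
	"github.com/gophercloud/gophercloud/openstack/containerinfra/v1/clusters"
)

func main() {
	// Authenticate against keystone from environment variables.
	authOpts, err := openstack.AuthOptionsFromEnv()
	if err != nil {
		log.Fatal(err)
	}
	provider, err := openstack.AuthenticatedClient(authOpts)
	if err != nil {
		log.Fatal(err)
	}

	// Service client for the container-infra (magnum) endpoint.
	client, err := openstack.NewContainerInfraV1(provider, gophercloud.EndpointOpts{})
	if err != nil {
		log.Fatal(err)
	}

	// Create a cluster from an existing cluster template.
	masterCount, nodeCount := 1, 2
	createOpts := clusters.CreateOpts{
		Name:              "demo-cluster",
		ClusterTemplateID: "<cluster-template-uuid>", // placeholder
		Keypair:           "default",                 // placeholder
		MasterCount:       &masterCount,
		NodeCount:         &nodeCount,
	}
	uuid, err := clusters.Create(client, createOpts).Extract()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("created cluster:", uuid)

	// Fetch the cluster to check its status (CREATE_IN_PROGRESS -> CREATE_COMPLETE).
	cluster, err := clusters.Get(client, uuid).Extract()
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("status:", cluster.Status)
}
```

The terraform use case mentioned in the meeting builds on the same client code: the OpenStack terraform provider consumes gophercloud, so once the containerinfra bindings exist, a container-infra cluster resource can sit on top of them.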