16:59:24 <strigazi> #startmeeting containers
16:59:25 <openstack> Meeting started Thu Jun 21 16:59:24 2018 UTC and is due to finish in 60 minutes.  The chair is strigazi. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:59:26 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:59:28 <openstack> The meeting name has been set to 'containers'
16:59:31 <strigazi> #topic Roll Call
16:59:36 <strigazi> o/
16:59:38 <imdigitaljim> o/
16:59:43 <cbrumm> o/
17:00:22 <flwang1> o/
17:00:53 <strigazi> Thanks for joining the meeting imdigitaljim cbrumm flwang1
17:00:57 <strigazi> #topic Announcements
17:01:14 <jslater> o/
17:01:26 <ARussell> o/
17:02:14 <strigazi> We can have this meeting for 3-4 weeks to see if it works for people; then we'll decide either to have two meetings, to alternate, or to change to this hour on Thursdays or Tuesdays
17:02:32 <strigazi> hello jslater and ARussell
17:02:49 <strigazi> #topic Blueprints/Bugs/Ideas
17:02:58 <flwang1> jslater: ARussell: are you also from blizzard?
17:03:16 <cbrumm> Yes, they both are.
17:03:24 <jslater> correct
17:03:33 <flwang1> wow, big team here
17:03:34 <cbrumm> Jim, and Jslater are two of my engineers. Arussell is my PM
17:03:46 <cbrumm> and colin who just joined
17:03:49 <strigazi> Usually in this section of the meeting we report what we are working on and ask for help/feedback
17:04:01 <strigazi> cbrumm: excellent :)
17:04:17 <colin-> sorry for being late
17:04:28 <strigazi> colin-: no worries
17:04:49 <imdigitaljim> we're working on changes with Magnum to enable cloud-controller-manager to work with OpenStack.
17:04:55 <flwang1> good to see all you folks
17:05:14 <strigazi> imdigitaljim: cool, we have a story, just a moment
17:05:28 <imdigitaljim> so far we've had to add the kube-proxy back to the master, which should maybe be optional rather than simply removed
17:05:53 <strigazi> #link https://storyboard.openstack.org/#!/story/1762743
17:06:45 <strigazi> imdigitaljim: yes, we went in that direction because for almost two years we didn't have any use case for proxy and kubelet on the master nodes
17:06:57 <strigazi> We can put them back,
17:07:30 <imdigitaljim> we've also explored the idea of adding another k8s feature-gate (labels) to have post-cluster-creation scripts/manifests for more customization
17:08:23 <strigazi> Add extra software deployments?
17:08:46 <flwang1> imdigitaljim: gerrit gate?
17:09:08 <imdigitaljim> we have a number of different customers with varying requirements and adding it to magnum would be inefficient/ineffective
17:09:14 <strigazi> We investigated this option for some time but we had only software configs available
17:09:39 <strigazi> imdigitaljim: so at CERN, we do this already but we patch magnum
17:10:01 <cbrumm> is that patch in your public gitlab?
17:10:04 <strigazi> we add extra labels to enable/disable CERN-specific features
17:10:11 <flwang1> strigazi: if we can host those scripts/yaml somewhere (on github), would this be doable?
17:10:12 <imdigitaljim> but simply being able to do --labels extra_manifests="mycode.sh"
17:10:23 <strigazi> can you see this? https://gitlab.cern.ch/cloud-infrastructure/magnum
17:10:38 <imdigitaljim> yeah we have access here
17:10:40 <cbrumm> Yes, thank you
17:11:15 <strigazi> imdigitaljim: not like this, but we want what you just mentioned too
17:11:57 <imdigitaljim> yeah we're not set on that design just the idea of having the feature
17:12:04 <strigazi> for example we added cephfs this week https://gitlab.cern.ch/cloud-infrastructure/magnum/commit/42d35211c540c9b86eb91fd1c042505dbddbfcef
17:12:44 <imdigitaljim> why not have a label that links to a single file with additional parameters to source?
17:12:47 <strigazi> imdigitaljim: flwang1 I think the extra script is better saved in the magnum DB, or it could live as a config option
17:13:06 <imdigitaljim> so you can add n parameters on a single label
17:13:56 <flwang1> strigazi: i also realized that using labels for new feature config is hard to manage sometimes
17:14:10 <strigazi> imdigitaljim: we can have a generic field yes
17:14:16 <colin-> how has ceph been doing since you added it? curious about reliability of attaches/detaches/unexpected losses
17:14:18 <strigazi> one more json
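[Note: a minimal sketch of the label idea discussed above, as it might look from the CLI. --labels is an existing magnum/openstackclient option; the extra_manifests label itself is the hypothetical feature being proposed here, not something magnum supports.]

    # Hypothetical: point a label at a post-creation script/manifest.
    openstack coe cluster create my-cluster \
      --cluster-template k8s-atomic \
      --node-count 3 \
      --labels extra_manifests="https://example.com/mycode.sh"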
17:14:41 <strigazi> colin-: scale tests are planned for july
17:15:10 <strigazi> we will try to crash our testing cephfs basically :)
17:16:10 <strigazi> We just finished the implementation of cephfs-csi, so we can't tell for cephfs yet. Ceph, however, is going very well
17:16:34 <imdigitaljim> does anyone here use octavia with magnum currently?
17:16:41 <strigazi> flwang1: ^^
17:16:56 <cbrumm> good to hear, we have a storage team that might be looking at ceph in the future
17:16:59 <strigazi> we don't have octavia
17:17:22 <imdigitaljim> we've got plans to hit it (hopefully by the end of july ;D)
17:18:10 <cbrumm> since it has support in the cloud controller already I'm not sure there will be much magnum work for that
17:18:22 <strigazi> so we have 10PB out of 15PB in use on two ceph clusters
17:18:29 <cbrumm> might have to just add a switch for octavia vs neutron
17:19:01 <flwang1> strigazi: we're using octavia
17:19:10 <flwang1> we just released it yesterday in one region
17:19:10 <colin-> curious, is CERN using another LoadBalancer provider integrated with the in-tree Kubernetes/OpenStack libraries? Perhaps a hardware device+driver?
17:19:11 <imdigitaljim> flwang1: good to know :)
17:19:22 <flwang1> now we're going to upgrade heat, and then deploy magnum
17:19:24 <imdigitaljim> flwang1: i might need to pick your brain later
17:19:41 <strigazi> colin-: we use traefik and an in-house DNS lbaas
17:19:57 <colin-> strigazi: got it, thanks
17:20:02 <strigazi> colin-: so some nodes are labeled to run traefik
17:20:10 <strigazi> and these nodes use the same alias
17:20:19 <imdigitaljim> what are the plans to upgrade the heat templates in magnum to something like queens and do some refactoring?
17:20:24 <strigazi> the alias is created manually, not very dynamic
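[Note: a sketch of the traefik pattern strigazi describes, assuming a role=ingress node label; the actual label name CERN uses is not stated in this discussion.]

    # Mark the nodes that should run the traefik ingress (label name assumed):
    kubectl label node node-3 node-4 role=ingress
    # The traefik DaemonSet then pins itself to those nodes via a
    # nodeSelector of role: ingress, and the DNS alias points at them.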
17:20:34 <colin-> sounds interesting :)
17:21:17 <strigazi> imdigitaljim: what do you mean by upgrade, what changes do you need?
17:21:41 <imdigitaljim> we could use later heat template versions and new features with them
17:21:47 <imdigitaljim> to clean up some workflows
17:22:04 <imdigitaljim> https://docs.openstack.org/heat/latest/template_guide/hot_spec.html
17:23:01 <strigazi> let's start the discussion to see which version works
17:23:47 <strigazi> is filter the only new thing in queens?
17:23:55 <imdigitaljim> right now we mostly use Juno
17:23:56 <imdigitaljim> 2014-10-16
17:24:05 <imdigitaljim> so maybe we shouldn't be 7 versions behind
17:24:25 <imdigitaljim> there's a lot more than filter ;)
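[Note: for context, the version under discussion is the heat_template_version pin at the top of each HOT template; a sketch of the gap being described.]

    # What the magnum templates pin today (juno):
    heat_template_version: 2014-10-16
    # A pike-level template would instead pin (release-name aliases
    # are accepted since newton):
    heat_template_version: pike   # equivalent to 2017-09-01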
17:24:40 <strigazi> The minimum we should go to is Newton, which is not EOL
17:24:58 <flwang1> i think newton is eol now
17:25:12 <strigazi> ok, I see the diff
17:25:53 <flwang1> can we have an etherpad to track these ideas?
17:25:55 <strigazi> pike has quite a lot of things
17:26:03 <imdigitaljim> i was hoping pike
17:26:12 <flwang1> so that we can plan this work in Stein
17:26:27 <imdigitaljim> (pike at minimum)
17:26:29 <strigazi> flwang1: we have the log of the meeting; we can add stories
17:26:33 <imdigitaljim> for queens+
17:26:38 <flwang1> ok, fair enough
17:27:03 <strigazi> #action add story to upgrade the heat-template version to pike or queens
17:27:42 <strigazi> #action add story for post create deployments
17:28:11 <strigazi> we have a story for the cloud-provider already
17:28:13 <flwang1> thanks strigazi
17:28:14 <imdigitaljim> kube-proxy/kubelet optionally included for master?
17:28:34 <strigazi> imdigitaljim let's add them back
17:28:36 <flwang1> imdigitaljim: if you're using calico, we have already added kubelet on the master
17:28:39 <strigazi> flwang1: thoughts?
17:28:51 <flwang1> strigazi: i'm OK with that ;)
17:29:02 <imdigitaljim> all CNIs should be separated from these
17:29:03 <flwang1> imdigitaljim: what's the network driver you're using btw?
17:29:17 <imdigitaljim> we use calico so we didn't have the kubelet problem
17:29:25 <cbrumm> Calico
17:29:31 <strigazi> #action add story to put kubelet/proxy back in the master nodes
17:29:49 <flwang1> imdigitaljim: good to know you're using calico :D
17:29:56 <imdigitaljim> https://review.openstack.org/#/c/576623/
17:30:12 <imdigitaljim> strigazi: that was where the confusion came from here
17:30:26 <imdigitaljim> we should generally decouple master components from CNI
17:31:28 <strigazi> flannel doesn't need the proxy on the master, but if the master node wants to reach a service it is required
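[Note: a quick illustration of strigazi's point, assuming the typical magnum default service CIDR of 10.254.0.0/16.]

    # Without kube-proxy on the master, the ClusterIP-to-endpoint rules
    # are never programmed there, so this times out from the master even
    # though the same call works from any worker node:
    curl --max-time 5 -k https://10.254.0.1:443/version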
17:32:06 <flwang1> strigazi: so let's get them back?
17:32:13 <strigazi> We can have a script per CNI, flwang1 started it
17:32:17 <strigazi> flwang1: yes
17:32:20 <imdigitaljim> yeah that sounds great
17:32:27 <strigazi> I'll add the story
17:32:45 <flwang1> strigazi: then my next question is should it be backported to queens?
17:33:17 <flwang1> given we removed them in queens
17:33:45 <strigazi> flwang1: we can
17:33:56 <strigazi> We can ask in the ML
17:34:06 <strigazi> but it also depends on what we need
17:34:27 <strigazi> cern, catalyst, blizzard, other users
17:34:54 <strigazi> I think it's good; the use cases for it have increased quite a lot
17:35:53 <flwang1> cool cool
17:35:53 <imdigitaljim> sidenote: we also have someone here working on magnum for gophercloud
17:36:00 <strigazi> cern doesn't have a big stake in it, we cherry-pick what we need anyway
17:36:16 <strigazi> imdigitaljim: \o/
17:36:21 <flwang1> imdigitaljim: i already have a patch for gophercloud to support magnum
17:36:22 <strigazi> imdigitaljim: to add the client?
17:36:26 <flwang1> just doing some testing now
17:36:32 <flwang1> strigazi: yes
17:36:42 <flwang1> so that users can use terraform to talk to the magnum api
17:36:47 <strigazi> all I hear is autoscaling
17:37:04 <flwang1> and i'm also working on supporting magnum in ansible/shade
17:37:07 <flwang1> imdigitaljim: ^
17:37:17 <cbrumm> flwang1: I'll make sure our guy contacts you about gophercloud
17:37:42 <flwang1> cbrumm: no problem
17:37:43 <strigazi> please cc me in github
17:37:48 <cbrumm> ansible/shade is good, we will likely need terraform too
17:38:05 <flwang1> strigazi: for what? the gophercloud work?
17:38:16 <strigazi> yes, you have PRs?
17:38:23 <flwang1> not yet, in my local
17:38:24 <imdigitaljim> yeah let me know and i can connect you with our engineer on gophercloud
17:38:44 <flwang1> i will probably start to submit today or early next week
17:39:08 <cbrumm> good to know, we don't want to duplicate effort
17:39:49 <flwang1> #link magnum support in gophercloud https://github.com/gophercloud/gophercloud/issues/1003
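[Note: a sketch of what consuming that support might look like in Go, assuming the containerinfra/v1 package layout proposed in the linked issue; the work was not yet merged at the time of this meeting.]

    package main

    import (
        "fmt"

        "github.com/gophercloud/gophercloud"
        "github.com/gophercloud/gophercloud/openstack"
        "github.com/gophercloud/gophercloud/openstack/containerinfra/v1/clusters"
    )

    func main() {
        // Authenticate from the usual OS_* environment variables.
        opts, err := openstack.AuthOptionsFromEnv()
        if err != nil {
            panic(err)
        }
        provider, err := openstack.AuthenticatedClient(opts)
        if err != nil {
            panic(err)
        }
        // Service client for the magnum (container-infra) endpoint.
        client, err := openstack.NewContainerInfraV1(provider, gophercloud.EndpointOpts{})
        if err != nil {
            panic(err)
        }
        // Fetch a cluster by name or UUID and print its status.
        cluster, err := clusters.Get(client, "my-cluster").Extract()
        if err != nil {
            panic(err)
        }
        fmt.Println(cluster.Status)
    }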
17:40:31 <imdigitaljim> user on that link JackKuei is working with us
17:40:33 <cbrumm> Yeah, JackKuei is our guy at the bottom of that
17:40:53 <colin-> flwang1: for your Calico deployment are you using the iptables or ipvs method?
17:41:02 <strigazi> flwang1: you talked to him already
17:41:25 <flwang1> imdigitaljim: nice, i will interlock with him/her
17:41:35 <imdigitaljim> oh, just a curious question: do you use systemd or cgroupfs, and why?
17:41:52 <mordred> flwang1: ++ magnum support in ansible/shade
17:41:57 <imdigitaljim> related here
17:41:58 <imdigitaljim> https://review.openstack.org/#/c/571583/
17:42:06 <flwang1> mordred: hah, good to see you here
17:42:11 * mordred lurks everywhere
17:42:11 <strigazi> imdigitaljim: systemd, because it's the default for docker in fedora
17:42:25 <flwang1> mordred: i'm still waiting for your answer about the upgrade
17:42:45 <flwang1> colin-: can we talk offline? i need to check the config
17:42:48 <strigazi> imdigitaljim: we will move to cgroupfs with this option
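[Note: for reference, the two node-side settings that must agree, shown here with the systemd values strigazi mentions as the fedora defaults; the linked review is about making this choice configurable from magnum.]

    # docker side, /etc/docker/daemon.json:
    #   { "exec-opts": ["native.cgroupdriver=systemd"] }
    # kubelet side (must match docker's driver):
    #   --cgroup-driver=systemd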
17:42:50 <colin-> for sure
17:43:23 <imdigitaljim> i haven't heard from brtknr but i'd like to get this pushed through
17:43:24 <mordred> flwang1: oh - I responded in channel this morning ... but maybe the irc gods were not friendly to us
17:43:57 <mordred> flwang1: http://eavesdrop.openstack.org/irclogs/%23openstack-sdks/latest.log.html#t2018-06-21T11:42:24
17:43:59 <flwang1> mordred: oh, ok, i'm on laptop now, i will check on my workstation later
17:44:03 <flwang1> thank you!
17:44:05 <mordred> flwang1: sweet
17:44:11 <imdigitaljim> strigazi: would i be able to complete this story on his behalf? https://review.openstack.org/#/c/571583/
17:44:27 <flwang1> mordred: i will start the coding work soon, pls review it
17:45:05 <strigazi> I don't think brtknr will mind if you push a patchset; I tested PS3
17:45:06 <flwang1> imdigitaljim: i would suggest holding off a bit because i think brtknr is still active
17:45:22 <strigazi> you can leave a comment in gerrit
17:45:58 <strigazi> the patch is in good state
17:46:11 <imdigitaljim> flwang1: sounds good
17:47:04 <imdigitaljim> the patch would technically break ironic
17:47:26 <imdigitaljim> i was just gonna add a compatibility portion to it
17:47:43 <strigazi> Do you use it? the ironic driver?
17:48:01 <imdigitaljim> no, i just didn't know if that was a concern
17:48:12 <strigazi> We use the fedora-atomic one with ironic
17:48:13 <imdigitaljim> or would/could we consider reducing some of our support for other drivers
17:48:18 <flwang1> colin-: re your calico question, we're using iptables
17:48:31 <strigazi> We want to remove the ironic driver
17:48:33 <flwang1> colin-: IIRC, the ipvs is still in beta?
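[Note: for reference, the mode being discussed is selected with kube-proxy's --proxy-mode flag; ipvs mode additionally requires the ip_vs kernel modules to be loaded on the node.]

    # iptables is the default mode; ipvs was still maturing around this time:
    kube-proxy --proxy-mode=ipvs ...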
17:48:41 <imdigitaljim> strigazi: +1
17:49:25 <strigazi> folks, let's start wrapping up and plan for next week
17:49:48 <colin-> flwang1: good to know thanks for checking!
17:49:56 <cbrumm> Thank you all for having this meeting at this time
17:50:11 <imdigitaljim> well, great meeting! thanks for accommodating us, we really appreciate it
17:50:23 <flwang1> mordred: read your comments, sounds good idea, i will go for that direction, thanks a lot
17:50:26 <strigazi> cbrumm: thank you for joining, it was great to see you all
17:50:52 <colin-> nice chatting with you folks, will idle here more often
17:50:55 <colin-> nice to be back on irc
17:51:04 <strigazi> colin-: :)
17:51:06 <flwang1> good to see you guys
17:51:18 <flwang1> it's 5:51 AM, very dark
17:51:30 <flwang1> waves from NZ
17:51:33 <mordred> flwang1: woot!
17:51:34 <imdigitaljim> o/
17:51:56 <cbrumm> enjoy your friday flwang1
17:52:02 <mordred> flwang1: it also is probably a better idea to focus on the copy of the shade code in the openstacksdk repo ...
17:52:03 <strigazi> flwang1 is a hero
17:52:30 <flwang1> strigazi:  i need a medal
17:52:33 <mordred> flwang1: the ansible modules use it now - and I'm hoping to work soon on making shade use the code in sdk - so less churn on your part
17:52:36 * mordred hands flwang1 a medal
17:52:45 <strigazi> ok, I think it is better to summarize the meeting in an email
17:52:58 <strigazi> there's quite a lot of content
17:53:19 <flwang1> mordred: so do you mean implement the code in openstacksdk instead of in shade? sorry if it's a newbie question
17:53:36 <strigazi> flwang1: definitely, I can bring you a CERN helmet in Berlin :)
17:54:27 <flwang1> strigazi: logged
17:54:31 <flwang1> and beer
17:55:04 <strigazi> blizzard folks you should pass by here and pick yours ;)
17:55:22 <imdigitaljim> :D
17:55:37 <strigazi> flwang1: deutsche beer
17:55:37 <flwang1> can I get a logo of blizzard?
17:55:46 <strigazi> last action:
17:56:05 <strigazi> #action strigazi to summarize the meeting in a ML mail
17:57:11 <imdigitaljim> flwang1: you mean like this? https://bit.ly/2MdWCOn
17:57:33 <flwang1> thank you! haha
17:58:03 <strigazi> Let's end the meeting then :) Thank you all!
17:58:28 <strigazi> #endmeeting