15:00:17 <mattmceuen> #startmeeting openstack-helm
15:00:21 <openstack> Meeting started Tue Jul 31 15:00:17 2018 UTC and is due to finish in 60 minutes.  The chair is mattmceuen. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:22 <mattmceuen> GM/GE all!
15:00:23 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:25 <openstack> The meeting name has been set to 'openstack_helm'
15:00:27 <mattmceuen> #topic Rollcall
15:00:42 <jayahn> o/
15:00:45 <srwilkers> o/
15:00:48 <mattmceuen> Here's our agenda: https://etherpad.openstack.org/p/openstack-helm-meeting-2018-07-31
15:00:51 <mattmceuen> have at it
15:00:53 <anticw> o/
15:01:14 <mattmceuen> howdy anticw srwilkers jayahn
15:01:35 <portdirect> o/
15:01:42 <gmmaha> o/
15:01:57 <jgu__> 0/
15:02:25 <mattmceuen> Alright
15:02:35 <mattmceuen> #topic Use Case for external Ceph Cluster
15:02:46 <jayahn> following up on what we heard at the previous meeting. :)
15:03:17 <jayahn> posted a PS for the documentation; the English may need some fixing, and it needs reviews from you guys.
15:03:17 <mattmceuen> Jaesang gave us a patchset for documenting external ceph -- https://review.openstack.org/#/c/586992/
15:03:31 <mattmceuen> I have not taken a look yet, but will!
15:03:42 <jayahn> pls give us review.. will follow up.
15:03:46 <anticw> i think the idea is sound, how much testing have we done for that?  do we want/need a gate for it?
15:04:16 <anticw> it's documentation + 1 optional gate script at present right?
15:04:17 <portdirect> anticw: it would be nice to get it gated for sure
15:04:18 <jgu__> jayahn: nice, will review too.
15:04:33 <portdirect> but i think that could follow?
15:04:42 <jayahn> a gate script.. would need an existing ceph cluster.
15:04:59 <lamt> o/
15:04:59 <portdirect> you could use a helm deployed one
15:05:10 <mattmceuen> That's a really good idea
15:05:12 <jayahn> okie. i think that is also very doable
15:05:21 <anticw> ok, follow-up works for me
15:05:21 <portdirect> and just do all the setup manually for openstack, rather than deploy the helper chart
15:05:44 <mattmceuen> Doc + Gate > Doc :)
15:05:53 <mattmceuen> And the hard part is done already
15:05:58 <portdirect> but Doc > no doc :)
15:06:19 <mattmceuen> but No Doc > Dental Work
15:06:21 <mattmceuen> Moving on
15:06:28 <anticw> when i scanned it earlier i didn't see anything about which ports are required
15:07:16 <mattmceuen> Sorry, I don't follow anticw
15:07:26 <anticw> np, i'll comment in the PS .. you can move on
15:07:35 <mattmceuen> Ok cool - thanks dude
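
(A minimal sketch of the kind of values override the external-ceph document describes: pointing a chart at a pre-existing Ceph cluster instead of the helm-deployed one. The key names below are illustrative assumptions, not taken from the PS; see https://review.openstack.org/#/c/586992/ for the real layout. On anticw's port question, the stock Ceph ports are 6789 for mons and 6800-7300 for OSDs.)

    # hypothetical values override for consuming an external ceph cluster
    deployment:
      ceph: false                                   # skip the helm-deployed cluster
    endpoints:
      ceph_mon:
        hosts:
          default: ceph-mon.external.example.com    # existing mons, reachable from pods
    conf:
      ceph:
        global:
          mon_host: "10.0.0.1:6789,10.0.0.2:6789"   # mon port 6789 must be open to the cluster
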
15:07:49 <mattmceuen> #topic Rally Test Followup
15:07:55 <mattmceuen> Go for it jayahn !
15:08:11 <jayahn> nothing more than what I wrote.
15:08:30 <mattmceuen> https://review.openstack.org/#/c/582463/ >> Add rally test gate.
15:08:30 <mattmceuen> https://review.openstack.org/#/c/586783/ >> Upgrade rally to 1.2.0, and test scenario cleanup
15:08:35 <jayahn> added the rally test gating job, upgraded rally to the most recent release, and cleaned up the scenarios
15:09:24 <jayahn> pls review, and let us know if any more follow-up work is necessary.
15:09:32 <srwilkers> nice.
15:09:44 <srwilkers> im going to kick the tires on this today
15:09:48 <anticw> i see one of the gate runs exploded, rabbitmq ?
15:10:14 <portdirect> yeah - I'd really like to see it pass once before we merge :(
15:10:25 <srwilkers> ive noticed the rabbitmq tests via armada have been a bit shaky
15:10:31 <srwilkers> to the point ive disabled them locally
15:10:32 <portdirect> on the other one, looks great - though where does that image come from?
15:10:35 <anticw> http://logs.openstack.org/63/582463/7/check/openstack-helm-armada-fullstack-deploy/2c4dbd0/job-output.txt.gz#_2018-07-27_01_49_32_386805 (for reference)
15:11:11 <portdirect> whats super weird about these is that when you look at the log, it almost looks like the wrong image is being used sometimes
15:11:23 <mattmceuen> omy
15:11:29 <anticw> yeah, i don't really have a strong opinion either way, i would merge as-is :)
15:11:51 <portdirect> http://logs.openstack.org/63/582463/7/check/openstack-helm-armada-fullstack-deploy/2c4dbd0/primary/pod-logs/openstack/osh-cinder-rabbitmq-test/osh-cinder-rabbitmq-rabbitmq-test.txt.gz
15:11:59 <anticw> right
15:12:32 <mattmceuen> I'd say we definitely want to see the gate passing first
15:12:51 <anticw> how about we take this to #openstack-helm and talk about that specific error and why it might be occurring?
15:13:02 <anticw> i'm assuming once we clear that we can +2 and merge?
15:13:12 <portdirect> id say so
15:13:22 <mattmceuen> Yep, once we figure that out, sounds good to me
15:13:40 <portdirect> to be clear, the whole gate would need to run green
15:13:48 <anticw> that's the only error i see
15:14:33 <mattmceuen> Anything else on the Rally topic?
15:14:36 <srwilkers> that gate is also the only one that exercises a rabbitmq-per-service deployment, so i'd like to see it pass with the rally changes being added, for the sake of sanity and curiosity
15:16:21 <mattmceuen> ++
15:16:31 <mattmceuen> #topic FWaaS
15:16:47 <jayahn> I got the answer. :)
15:16:53 <jayahn> from etherpad
15:16:57 <mattmceuen> Yep, just catching up on that now :)
15:17:12 <mattmceuen> Thanks portdirect
15:17:28 <mattmceuen> #topic Calico v2 -> v3 transition
15:17:46 <mattmceuen> anticw is working toward adapting the OSH-Infra Calico chart to support Calico v3
15:18:13 <mattmceuen> It will likely be a breaking upgrade of Calico, so I wanted to socialize that among everyone to make sure everyone is aware
15:18:40 <mattmceuen> Any thoughts / concerns, and anything else you'd add to enlighten us anticw?
15:19:12 <anticw> adding to this ... it looks like in theory it should be possible to upgrade-in-place from 'v1 api' to the current 'v3 api' though testing of that hasn't worked well
15:19:41 <portdirect> was there a v2?
15:19:43 <anticw> the newer chart for dev/testing doesn't come with its own etcd anymore, and some of the configuration of ipip, mesh, asn, etc has moved from calicoctl config xxx to yaml
15:19:47 <anticw> no v2
15:19:55 <anticw> v2 api only works over ipv5
15:20:00 <portdirect> roger
15:20:42 <mattmceuen> so calico v3 uses the k8s etcd?
15:20:44 <anticw> on the whole 3.1 is cleaner but a bit different ... we have strong reasons to upgrade, we need some of the newer policy stuff and i don't think anyone is really using older calico
15:21:01 <portdirect> its totally unsupported now afaik?
15:21:07 <anticw> mattmceuen: for developers it can ... for production the discussion i've had is that we will use a separate etcd
15:21:16 <mattmceuen> excellent
15:21:40 <portdirect> have you looked into using k8s for state storage (crd?) as opposed to directly hitting etcd?
15:21:49 <anticw> i don't think mark is here (?) but he commented that from a credentials PoV it would be better to have a separate etcd ... and for larger clusters probably better for load
15:22:03 <anticw> portdirect: not yet
15:22:13 <portdirect> that would solve the creds issue
15:22:36 <anticw> i think for production, unless there is a strong reason not to, we would have a separate etcd cluster
15:22:46 <portdirect> and i think it's also recommended for large clusters
15:22:57 <portdirect> but as long as we have the option to toggle it, wfm :)
15:23:20 <anticw> 50+ nodes
15:23:37 <anticw> so again we need a separate one for production, but dev/testing doesn't
15:24:21 <portdirect> thats still small ;)
15:24:32 <portdirect> big in k8s = 1000
15:25:08 <anticw> sure, but people usually partition before that point
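
(To make the v3 configuration change concrete: settings that previously went through calicoctl config commands become yaml resources in the v3 api. A minimal sketch of the ipip/mesh/asn settings anticw mentions, assuming a stock Calico v3.1; the cidr and asNumber below are placeholders.)

    # bgp settings that used to be set via calicoctl config
    apiVersion: projectcalico.org/v3
    kind: BGPConfiguration
    metadata:
      name: default
    spec:
      nodeToNodeMeshEnabled: true   # the "mesh" setting
      asNumber: 64512               # the "asn" setting
    ---
    # ipip is now configured per ip pool
    apiVersion: projectcalico.org/v3
    kind: IPPool
    metadata:
      name: default-ipv4-ippool
    spec:
      cidr: 192.168.0.0/16
      ipipMode: Always              # the "ipip" setting
      natOutgoing: true

(Both are applied with calicoctl apply -f rather than the old calicoctl config set.)
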
15:25:23 <anticw> ok ... so ... no other questions on that?
15:26:05 <mattmceuen> None here
15:26:09 <mattmceuen> Thanks anticw
15:26:22 <mattmceuen> #topic Core Reviewers
15:26:29 <mattmceuen> Take it away portdirect
15:26:59 <portdirect> it just came to my attention that there is some cw guy in gerrit
15:27:07 <portdirect> and hes doing really good work
15:27:18 <portdirect> leading both in reviews
15:27:28 <portdirect> and direction, eg doing things like calico v3
15:27:41 <mattmceuen> What is calico v3, I haven't heard of that one
15:27:41 <portdirect> and also helping out a lot in irc
15:27:57 <portdirect> he never seems to turn up to the meetings though
15:28:10 <mattmceuen> I will say that many of my most substantial and valuable and thorough reviews have come from that guy
15:28:49 <mattmceuen> Thank you for the thought portdirect - I will take this into consideration
15:29:19 <portdirect> mattmceuen: now ive forced your hand, pretty please can we get a mail out on the ml.
15:29:31 <mattmceuen> I said "consideration"
15:29:37 <mattmceuen> I will send out an email :)
15:29:42 <portdirect> -1
15:29:44 <portdirect> :P
15:30:14 <mattmceuen> #topic PS Needing Review
15:30:30 <mattmceuen> https://review.openstack.org/#/c/585982/ >> Fix ceph version check error in jewel version.
15:30:30 <mattmceuen> https://review.openstack.org/#/c/581980/ >> Tempest: change manifests.pvc to pvc.enabled from pvc-tempest.yaml
15:30:30 <mattmceuen> https://review.openstack.org/#/c/580272/  >> Running agents on all nodes
15:30:30 <mattmceuen> https://review.openstack.org/#/c/586954/ >> make it possible to use "node-role.kubernetes.io/ingress: true" as node label
15:30:39 <mattmceuen> In addition to the ones mentioned earlier!
15:30:43 <jayahn> kudos on "new" core...
15:31:03 <mattmceuen> No kudos!  There is a process
15:31:12 <mattmceuen> I consider portdirect to have offered a suggestion
15:31:22 <jayahn> since that mail will be out while I am sleeping.
15:31:23 <mattmceuen> Potential kudos later?
15:31:26 <mattmceuen> :D
15:31:30 <jayahn> I did a bit earlier.
15:31:40 <jayahn> :D
15:32:17 <mattmceuen> We did a good job getting some stuck reviews unstuck last time
15:32:30 <jayahn> i know. thanks everyone.
15:32:34 <mattmceuen> Let's get some eyeballs on these PS today or tomorrow!
15:32:38 <srwilkers> would still like some eyes and thoughtful feedback on this one: https://review.openstack.org/#/c/559417/
15:32:51 <srwilkers> needs a rebase, but still
15:33:10 <mattmceuen> That's still on my to-play-with list, sorry srwilkers :(
15:33:22 <anticw> srwilkers: i think for larger self-contained things it's hard to get eyeballs
15:33:52 <srwilkers> anticw: yeah, it is.  i consider this one pretty important for elasticsearch's long term health
15:34:05 <srwilkers> as without it, we're stuck with the trashy pvc implementation i introduced originally
15:34:25 <anticw> well, we want to use s3 for other things as well so it's good to have that
15:34:35 <srwilkers> yep
15:34:59 <anticw> other than installation is there anything that needs to be done to test it?
15:35:32 <anticw> it looks reasonable, the gates seem ok with it ...
15:35:51 <srwilkers> http://logs.openstack.org/01/572201/12/check/openstack-helm-armada-fullstack-deploy/db2940f/primary/pod-logs/osh-infra/elasticsearch-s3-bucket-ks9lx/create-s3-bucket.txt.gz
15:36:10 <srwilkers> http://logs.openstack.org/01/572201/12/check/openstack-helm-armada-fullstack-deploy/db2940f/primary/pod-logs/osh-infra/elasticsearch-register-snapshot-repository-dp6qq/register-snapshot-repository.txt.gz
15:37:26 <srwilkers> im working on getting the docs to a place where they're functional, as a lot of the big functional changes across the stack have been introduced, save for that one
15:37:41 <mattmceuen> nice
15:37:54 <anticw> wfm (even as-is)
15:38:30 <portdirect> i left a few comments, looks solid from a workflow pov, but a few things could do with cleaning up
15:38:40 <srwilkers> cool, thanks anticw and portdirect
15:38:47 <portdirect> once we have that, it would be great to abstract it out to htk
15:38:56 <portdirect> so other services could benefit from this
15:40:02 <mattmceuen> good idea
15:40:09 <mattmceuen> Ok --
15:40:12 <anticw> that works as a follow-up
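
(A sketch of the workflow the PS wires up, with hypothetical value names rather than the chart's real keys; see the PS itself for those. One job creates the s3 bucket and a second registers it with elasticsearch as a snapshot repository, matching the two pod logs linked above.)

    # hypothetical elasticsearch chart values for s3-backed snapshots
    conf:
      elasticsearch:
        snapshots:
          enabled: true
          bucket: elasticsearch-snapshots   # created by the s3-bucket job
          repository: default_repo          # registered via elasticsearch's _snapshot api
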
15:40:13 <mattmceuen> #topic Roundtable
15:40:16 <mattmceuen> I have one item
15:40:39 <mattmceuen> As I mentioned in the mailing list, I've decided to pass the PTL baton for the next cycle!
15:40:53 <mattmceuen> It has been a pleasure working with you all, and I won't be going anywhere
15:41:16 <jayahn> I just saw the email.. how can I live without you on openstack-helm. :(
15:41:17 <mattmceuen> Aside from focusing a bit more on Airship from a work perspective
15:41:33 <mattmceuen> I may get a tshirt that quotes you jayahn!  Thank you!
15:41:34 <srwilkers> jayahn: alcohol
15:41:44 <jayahn> good one. :)
15:41:55 <mattmceuen> I will still be very active in OSH do not worry
15:42:06 <mattmceuen> That is all from me :)
15:42:43 <john_W> https://gerrit.mtn5.cci.att.com/#/c/47079/
15:42:43 <jayahn> all the people on the skt team will have alcohol to overcome this absence. seriously. :)
15:42:44 <john_W> can i ask to get some eyeballs on a few PS for cloud Core
15:43:01 <john_W> https://review.openstack.org/#/c/577298/
15:43:11 <john_W> https://review.openstack.org/#/c/577293/
15:43:25 <john_W> Tin has been waiting a while for some feedback
15:43:29 <mattmceuen> Thanks john_W!
15:43:42 <john_W> thank you all and Matt - i will certainly miss you
15:43:55 <anticw> re: readiness checks ... my comment here and also on the PS was i don't think we should be so aggressive
15:44:06 <anticw> it feels like the cluster will spend more time healthchecking than doing useful work
15:44:10 <srwilkers> jayahn: are you coming to the denver ptg?
15:44:19 <gmmaha> thanks for steering this ship through rough waters mattmceuen :)
15:44:37 <anticw> mattmceuen: thanks for your efforts so far
15:44:47 <jayahn> srwilkers: not sure. I have to solve a budget problem
15:44:48 <srwilkers> mattmceuen: bye felicia
15:45:08 <mattmceuen> thx anticw gmmaha john_W srwilkers :)
15:45:16 <mattmceuen> jayahn d'oh :(
15:45:53 <jgu__> can I poke in a question: does osh already support an external load balancer? I could have missed it in the docs, or is it something to be added?
15:46:47 <portdirect> jgu__: we have not done any work with external lb
15:47:01 <portdirect> either a cloud provider provided one, or things like f5
15:47:22 <mattmceuen> Nothing should prevent the work from being done - just no one's done it :)
15:47:32 <portdirect> simply as far as im aware no one to date has had access to them, or the need
15:47:41 <jgu__> we need to expose the openstack service endpoints off the cluster nodes. is there a better way to do that in OSH other than through an external LB?
15:47:48 <portdirect> but would be nice to have for sure :)
15:48:16 <portdirect> jgu__: we use the ingress controllers as our lb, from within the cluster
15:48:29 <jayahn> we also use ingress controllers
15:48:38 <portdirect> if you set them up as daemonsets on a set of nodes at the edge
15:48:52 <jayahn> soon, we will get our hands on F5 though.
15:48:55 <portdirect> then you can direct traffic to them
15:49:17 <portdirect> recently we added support for using keepalived to create a vip
15:49:21 <portdirect> which is really nice
15:50:30 <jgu__> thanks jayahn and portdirect. are there any pointers on how to set up the ingress controller for this purpose?
15:51:00 <portdirect> if you have the supporting infra - the work cw and alanmeadows did also allows you to set up bgp peering of a vip created on each node to the fabric
15:51:02 <jgu__> the charts provisioned the cluster ip for keystone, for example, but the cluster ip or hostname is not accessible off the cluster
15:51:37 <portdirect> jgu__: this needs some update, and does not include the above methods: https://docs.openstack.org/openstack-helm/latest/install/ext-dns-fqdn.html
15:51:42 <portdirect> but is a good starting point
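
(A minimal sketch of the pattern portdirect describes, assuming the osh ingress chart's value names, which may differ in the current chart: run the ingress controllers as a daemonset on labelled edge nodes, with keepalived holding a vip in front of them. External dns records for the service fqdns then point at that vip, per the doc linked above.)

    # hypothetical overrides for edge-node ingress with a keepalived vip
    labels:
      node_selector_key: openstack-ingress   # label the edge nodes with this
      node_selector_value: enabled
    network:
      vip:
        manage: true          # have keepalived bring the address up on the edge nodes
        mode: keepalived
        addr: 172.18.0.1/32   # the vip external dns records resolve to
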
15:51:51 <jgu__> thanks portdirect!
15:52:15 <jayahn> https://sktelecom-oslab.github.io/Virtualization-Software-Lab/ExposeService/ >> this is written in Korean, but you can use google translation just to get an idea. :)
15:52:31 <jayahn> can be supplemental info.
15:52:54 <mattmceuen> awesome - thanks for the references
15:53:09 <jgu__> or I can ask my boss to pay for my Korean language classes. thanks Jayahn
15:53:12 <jgu__> :-)
15:54:04 <mattmceuen> Any other topics guys?
15:54:12 <portdirect> korean docs
15:54:24 <portdirect> jayahn: does docs.openstack.org support korean docs?
15:54:41 <jayahn> yeah, but not for every project.
15:54:54 <portdirect> can we get some for osh? :D
15:55:06 <portdirect> would mean that we could get this done a bit better
15:55:14 <portdirect> your blogs have awesome stuff in them
15:55:31 <portdirect> we could use the english speakers here to do the tx to english
15:55:41 <portdirect> if we get the content in gerrit
15:55:50 <jayahn> I have tried.. there was some roadblock on the translation side to including osh as a project the i18n team can translate..
15:55:57 <jayahn> i will check again.
15:56:28 <portdirect> if theres anything we can do to unblock
15:56:55 <jayahn> I know members from doc / i18n team. I will check.
15:58:00 <portdirect> we should also get this on the ptg agenda
15:58:04 <mattmceuen> ++
15:58:51 <mattmceuen> K folks, we're about out of time -- any closing thoughts?
15:59:15 <mattmceuen> Thanks everyone!  Great meeting - have a good week
15:59:18 <mattmceuen> #endmeeting