15:00:15 <mattmceuen> #startmeeting openstack-helm
15:00:19 <openstack> Meeting started Tue Jul 17 15:00:15 2018 UTC and is due to finish in 60 minutes.  The chair is mattmceuen. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:20 <mattmceuen> #topic Rollcall
15:00:20 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:22 <openstack> The meeting name has been set to 'openstack_helm'
15:00:37 <mattmceuen> good time-of-day, everybody
15:00:56 <portdirect> merry Christmas!
15:01:04 <jayahn> o/
15:01:08 <mattmceuen> Here's our agenda for today, please add in anything additional you'd like to discuss!  https://etherpad.openstack.org/p/openstack-helm-meeting-2018-07-17
15:01:11 <jayahn> ? christmas?
15:01:25 <mattmceuen> wishful thinking - it is hot in Saint Louis
15:01:34 <portdirect> i celebrate Christmas every day of the year.
15:01:44 <portdirect> it's why I'm so jolly
15:01:51 <mattmceuen> I will be expecting my present portdirect
15:01:56 <jayahn> I'd rather celebrate Christmas Eve every day..
15:02:04 <portdirect> fair
15:02:05 <gema> o/
15:02:05 <gmmaha> o/
15:02:11 <jayahn> so that I can get a gift from Santa every day. :)
15:02:19 <robertchoi80> hi
15:02:20 <tdoc> time appropriate greetings
15:02:26 <portdirect> oh - in the USA/UK it's Christmas Day you get that
15:02:50 <jayahn> that is true. early in the morning..
15:02:51 <jayahn> my mistake..
15:03:13 <mattmceuen> I still haven't figured out exactly what time santa comes down the chimney, I have a hard time staying awake
15:03:16 <mattmceuen> jayahn could do it
15:03:27 <mattmceuen> #topic Follow-up on Rally test gating
15:03:34 <mattmceuen> https://review.openstack.org/#/c/582463/
15:03:35 <jayahn> does the US government track Santa every year?
15:03:40 <jayahn> ah.. rally one.
15:03:50 <jayahn> robertchoi80 will do.
15:03:58 <mattmceuen> ^this guy -- big thanks to Jaesang on that
15:04:06 <mattmceuen> It's high on my list to give this a spin
15:04:14 <robertchoi80> sounds good
15:04:56 <robertchoi80> so, do you agree that we need a separate rally test in the gate?
15:05:06 <portdirect> i would move to experimental
15:05:13 <portdirect> just as we have done with tempest
15:05:22 <jayahn> we would like to ask: what would be the difference between "rally test in each chart" vs. "rally test with a separate chart" like https://review.openstack.org/#/c/582463/?
15:05:24 <portdirect> as otherwise i fear most failures will be OOM
15:05:26 <portdirect> eg: http://logs.openstack.org/63/582463/5/check/openstack-helm-armada-fullstack-deploy/860eb5f/job-output.txt.gz#_2018-07-16_00_07_15_419452
15:06:11 <portdirect> jayahn: i think "rally test in each chart" would be better phrased as "helm test on each chart"
15:06:21 <jayahn> yes, correct
15:06:27 <portdirect> it just happens that most helm tests are implemented via rally
15:06:33 <mattmceuen> do we expect rally to use as much memory as tempest?
15:06:35 <jayahn> true as well
15:06:46 <portdirect> so - helm test should be used (as per spec) to test basic chart functionality
15:07:00 <portdirect> i see great value in adding a rally test on the deployment
15:07:12 <portdirect> as it can test the deployment as a whole
15:07:32 <robertchoi80> that's true
15:07:41 <portdirect> whereas we keep the helm tests limited to a tool that DEs and ops can run in prod sites to perform a quick sanity check on the service
15:07:52 <jayahn> agreed. we can do a more thorough test, but you are saying you would rather have a simple sanity check for the deployment.
15:07:54 <portdirect> (oh and Devs as well!)
15:08:36 <jayahn> for ops, we are planning to provide more thorough testing via rally. but I am curious whether we need that level of testing in gating.
15:09:06 <robertchoi80> ah, I guess the "simple sanity check" meant 'helm test'
15:09:17 <portdirect> yup
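For context, a minimal sketch of the kind of helm test being described: a pod template annotated as a Helm v2 test hook, run on demand via `helm test <release>`. The image and command here are illustrative placeholders, not taken from an actual OSH chart:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: "{{ .Release.Name }}-test"
  annotations:
    # Helm v2 test hook: only runs when `helm test <release>` is invoked
    "helm.sh/hook": test-success
spec:
  restartPolicy: Never
  containers:
    - name: "{{ .Release.Name }}-test"
      # placeholder image; a real chart would source credentials from
      # its own secrets before issuing the sanity check
      image: docker.io/openstackhelm/heat:newton
      command: ["openstack", "service", "list"]
```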
15:11:00 <mattmceuen> I like running as much testing as frequently as we can w/o getting in the way
15:11:26 <mattmceuen> Do you know how much time the rally testing adds to the armada gate?
15:11:59 <robertchoi80> that will depend on the number of rally test cases.
15:11:59 <rwellum> o/ (late - sorry)
15:12:04 <robertchoi80> I should ask Jaesang
15:12:59 <robertchoi80> the current number of test cases for that rally chart isn't that large, and we can increase the number
15:13:00 <mattmceuen> cool - it would be great to do it frequently.  But I agree that until we can run it reliably in openstack infra it should be experimental
15:13:24 <robertchoi80> that's totally fine
15:13:29 <jayahn> no problem there
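For reference, an illustrative rally task of the sort such a gate could run against the deployment as a whole. The scenario name is a standard rally one; the numbers and the SLA block are placeholders (the latter echoing the SLA patchset linked later in this meeting):

```yaml
---
  Authenticate.keystone:
    -
      runner:
        type: "constant"
        times: 10
        concurrency: 2
      sla:
        # fail the gate run on any errored iteration
        failure_rate:
          max: 0
```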
15:14:08 <mattmceuen> robertchoi80 also brought up this PS:  https://review.openstack.org/#/c/582338/
15:14:13 <mattmceuen> That's a good one to be aware of
15:14:35 <mattmceuen> Armada is close to running "helm test" by default, as that's a best practice
15:15:03 <robertchoi80> yes. that PS looks good to me
15:15:12 <mattmceuen> Since Armada can't tell whether a chart supports helm test ahead of time, this means we'll need to turn -off- helm tests for charts that don't support it
15:15:16 <mattmceuen> (explicitly)
15:15:22 <marshallmargenau> not quite!
15:15:27 <mattmceuen> oho!
15:15:31 <marshallmargenau> if there is no helm test, armada will still pass it
15:15:44 <mattmceuen> well that's great!
15:15:45 <marshallmargenau> it reports back "No Tests Found" from tiller, i believe
15:15:58 <marshallmargenau> you will need to explicitly disable any helm tests that are not working
15:16:01 <robertchoi80> yes, that's right
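A hypothetical sketch of explicitly disabling helm tests for a chart in an Armada manifest once tests run by default; the exact schema key is defined by the PS above and may differ:

```yaml
schema: armada/Chart/v1
metadata:
  schema: metadata/Document/v1
  name: my-chart
data:
  chart_name: my-chart
  release: my-chart
  namespace: openstack
  # opt out of the default `helm test` run for charts whose tests
  # are known not to work (key name assumed)
  test:
    enabled: false
  source:
    type: local
    location: /opt/charts/my-chart
```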
15:16:30 <mattmceuen> I've never been so happy to be so wrong.  Thanks marshallmargenau :)
15:16:48 <marshallmargenau> one more: there's a semi-related change that folks may be interested in: https://review.openstack.org/#/c/581944/ (merged)
15:17:07 <marshallmargenau> you can now apply your wait labels without a timeout, which will fall into standard chartgroup processing flow
15:18:17 <robertchoi80> thanks, I'll take a look
15:18:19 <marshallmargenau> i.e. chartgroups will still wait until everything is happy, so they can use your labels now instead of just the namespace (from before this PS)
15:18:38 <mattmceuen> that's excellent
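A sketch of the wait-label usage described above - labels without an explicit timeout, falling into the standard chartgroup flow. Key names follow Armada's chart schema of the time; treat the values as illustrative:

```yaml
data:
  wait:
    # no timeout given: the chartgroup's normal processing flow applies,
    # but readiness is now judged by these labels rather than by the
    # whole namespace
    labels:
      release_group: osh-keystone
```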
15:19:28 <mattmceuen> robertchoi80 anything else you'd like to chat about on this topic?
15:19:57 <portdirect> marshallmargenau / robertchoi80 : would you guys be able to update the osh manifests to take advantage of this feature?
15:20:08 <portdirect> would be awesome to get in the gates
15:21:04 <robertchoi80> ah, sure. I'll tell Jaesang to do that :)
15:21:15 <mattmceuen> haha
15:21:40 <jayahn> robertchoi80 is becoming like me..... :(
15:21:51 <robertchoi80> haha
15:21:55 <mattmceuen> thanks robertchoi80 (and Jaesang in advance.....) love to take advantage of the latest armada features
15:22:19 <mattmceuen> Alright - moving on!
15:22:52 <mattmceuen> #topic Glance backing storage
15:22:57 <jayahn> Jaesang is getting married this Saturday. it will probably take some time to do that. :)
15:23:05 <mattmceuen> piotrrrr says: glance/deployment-api.yaml supports 'pvc' and 'rbd' storage out of the box. Do we want to add support for hostPath? This will allow for using host-mounted networked storage for storing glance images, e.g. GPFS.
15:23:14 <mattmceuen> OH my!!!
15:23:38 <mattmceuen> we could give Jaesang a little time off I suppose -- please tell him Congrats!!!
15:23:50 <jayahn> I will do. thanks. :)
15:24:52 <robertchoi80> let's move on
15:24:53 <mattmceuen> for host-mounted network storage being used to back glance (rather than pvc or rbd) -- can that be done using a pvc driver for the networked storage?
15:26:02 <piotrrrr> if you have a GPFS pvc driver, in theory yes, but (1) i don't think there is one (2) to take advantage of copy-on-write in GPFS between glance and cinder you want your images to be stored in a well-known location in your GPFS
15:26:27 <piotrrrr> as cinder's GPFS driver then needs to be told where to look for those images
15:27:24 <piotrrrr> and of course GPFS is only one of the use cases; there are probably other networked storage solutions which do not have a pvc driver. Having hostPath support as a fallback would potentially cover all of them
15:28:40 <mattmceuen> I'm not against this.  One thought, though - would it be effort well-spent to add additional, well-behaved pvc drivers for things like GPFS?
15:28:57 <portdirect> piotrrrr: why not just use a local pv type?
15:29:14 <portdirect> there should be a provisioner for that i think
15:29:33 <portdirect> https://github.com/kubernetes-incubator/external-storage/tree/master/local-volume
15:30:15 <piotrrrr> mattmceuen: personally I wouldn't know where to start with that. But also, I wouldn't be able to say now whether it would allow cinder's GPFS driver (talking directly to GPFS) to take advantage of copy-on-write.
15:32:20 <piotrrrr> portdirect: I can't say I know much about local persistent volumes. But again, my first concern would be a possible lack of copy-on-write from glance to cinder. I might be wrong here though
15:33:31 <portdirect> it's just a way of using the pv/pvc interface to get local bind mounts
15:33:49 <portdirect> has the advantage of knowing which nodes the pod can be scheduled on
15:34:04 <portdirect> and would also work with the current glance chart
15:34:30 <piotrrrr> I can look into that. though if the PVs land under different directories, then it might not work for copy-on-write (where, i suspect, all images have to be in a fixed directory)
15:34:31 <portdirect> by simply using the "pvc" backend, and specifying the class to match the local provisioner
15:34:39 <portdirect> roger
15:35:08 <mattmceuen> Awesome.  Please bring it back up piotrrrr if that doesn't fit the bill for your copy-on-write use case
15:35:22 <piotrrrr> Will do, thanks.
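A sketch of the approach portdirect suggests: a StorageClass fronting statically provisioned local volumes, consumed through the glance chart's existing "pvc" backend. The class name and the values keys shown are assumptions for illustration:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-images
# static local PVs: the external local-volume provisioner creates the
# PV objects, so no dynamic provisioner is named here
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
# glance values override (keys assumed): keep the existing pvc backend,
# point it at the local class so pods land where the bind mount exists
storage: pvc
volume:
  class_name: local-images
  size: 10Gi
```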
15:35:53 <mattmceuen> #topic Supporting multiple versions of OpenStack
15:35:56 <tdoc> As a short side question, what's the status of COW between glance and cinder with Ceph?
15:36:53 <tdoc> (if anyone happens to know, otherwise let's move on)
15:37:09 <mattmceuen> Sorry I'm not sure tdoc - I'll add to the agenda as a topic for next time :)
15:37:15 <mattmceuen> and we can try to figure it out by then...
15:37:27 <portdirect> tdoc: if using rbd
15:37:36 <portdirect> then works as expected
15:38:05 <tdoc> ok, thanks
15:38:16 <mattmceuen> #topic Supporting multiple versions of OpenStack
15:38:32 <mattmceuen> piotrrrr poses: How will OSH handle supporting deployment of multiple different versions of openstack? values.yaml will differ between Newton and Rocky - e.g. there is no need to write out the content of policy.json
15:39:02 <piotrrrr> Yeah, so basically I'm curious what the plans are here.
15:39:10 <rwellum> +2 piotrrrr
15:39:10 <mattmceuen> First thing to be aware of -- release-specific reference overrides live here: https://github.com/openstack/openstack-helm/tree/master/tools/overrides/releases
15:39:36 <portdirect> these need to be expanded to include the default deltas that piotrrrr points out
15:39:57 <portdirect> but we currently gate on 3 releases of openstack
15:40:10 <portdirect> with minimum common settings between them
15:40:12 <mattmceuen> Would the policy.json get in the way? Or just get ignored?
15:40:16 <mattmceuen> (in Rocky)
15:40:20 <portdirect> no
15:40:31 <portdirect> rocky prob marks the point we need to move to osh 1.1
15:40:56 <portdirect> things like that mean our current charts work for newton - queens
15:41:04 <rwellum> That link drives image versions - what drives behavior differences? Like cells etc.
15:41:04 <portdirect> but i think will break beyond that
15:41:04 <mattmceuen> what a segue to our next topic
15:41:22 <portdirect> rwellum: cells is a great example
15:41:23 <portdirect> 2 sec
15:41:29 <jayahn> so it will be 1.1? not 2.0? .. any rules on versioning?
15:41:47 <piotrrrr> I see, so to rephrase: the plan is to have values.yaml hold the values for the lowest supported version of OpenStack, and then maintain overrides for higher versions and/or different types of deployments, which apply the deltas needed, correct?
15:42:08 <portdirect> https://github.com/openstack/openstack-helm/blob/master/nova/templates/bin/_db-sync.sh.tpl#L21-L47
15:42:48 <portdirect> jayahn: the final spec is being worked on, along with the one for values overrides
15:42:56 <portdirect> and layout
15:43:11 <portdirect> piotrrrr: essentially yes
15:43:12 <portdirect> :)
15:43:13 <piotrrrr> portdirect: ok, so overrides plus some version-specific if/else in various scripts
15:43:37 <portdirect> yeah - we need to work out how much if/else we are prepared to accept
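For illustration, the rough shape of a per-release override layered over a base values.yaml that targets the oldest supported release. The file path and keys here are assumptions, not actual OSH contents:

```yaml
# e.g. tools/overrides/releases/<release>/glance.yaml (path assumed)
images:
  tags:
    # pin service images to the release being deployed
    glance_api: docker.io/openstackhelm/glance:queens
# release-specific conf deltas (such as dropping a written-out
# policy.json once defaults-in-code suffice) would layer the same way
```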
15:44:00 <piotrrrr> ok, so maybe one last followup question, status of Queens overrides?
15:44:17 <portdirect> still in flight
15:44:22 <portdirect> I hope to have the final ps up today
15:44:51 <mattmceuen> thanks portdirect
15:44:56 <piotrrrr> cool, thanks guys, that answers my questions here
15:45:08 <mattmceuen> excellent
15:45:08 <jayahn> thanks portdirect. :)
15:45:35 <rwellum> Queens and Master?
15:45:40 <mattmceuen> One more thing to add re: 1.0 readiness -- reminder that we have a TODO list:  https://storyboard.openstack.org/#!/worklist/341
15:46:07 <mattmceuen> Lots of open items on that list, and it would be really great if folks could volunteer for some more of them!
15:46:12 <jayahn> that clears up my question on "current status of osh release"
15:47:10 <mattmceuen> piotrrrr - I think the rest of your q's in the agenda are all tied up in the in-flight versioning spec; anything else you want to bring up in that area before we move on?
15:47:41 <piotrrrr> I'm good, let's move on
15:47:45 <mattmceuen> cool
15:47:48 <mattmceuen> #topic Source code copyright attribution
15:48:01 <jayahn> before that..
15:48:24 <mattmceuen> floor is yours jayahn
15:48:26 <jayahn> we consider osh master branch as "stable"?
15:48:50 <mattmceuen> the goal of the 1.0 release is to get a stable interface for OSH
15:48:52 <jayahn> that was a question from our ops team. they want to know what the policy is for characterizing a branch.
15:48:53 <piotrrrr> (sidenote: the "Questions, Clarification, and Current Status of osh release" was somebody else's topic, the color did not get preserved when I created the new etherpad)
15:48:59 <mattmceuen> So some things may change till then
15:49:11 <jayahn> piotrrrr: that is mine
15:49:26 <mattmceuen> oops, sorry jayahn :)
15:49:38 <jayahn> no problem. :)
15:50:12 <jayahn> the ops team is confused about the concept of the master branch in osh.
15:50:29 <mattmceuen> I would say we'll only change things in the OSH interface if it will add value (not arbitrarily) until we reach 1.0, but things may continue to change till 1.0
15:50:45 <jayahn> they simply asked "is everything on the master branch considered stable once there is a 1.0 release?"
15:51:51 <jayahn> and.. just one quick check: an osh version will cover multiple openstack versions, as portdirect described just before, but we will tag based on openstack version?
15:52:50 <mattmceuen> I am dusting off my memory of the Dublin discussion - I believe this will be addressed by portdirect's spec as well - but I think we will be tagging master along the way with recommended stable markers, e.g. 1.1, 1.2
15:52:50 <portdirect> are you deploying to prod straight from upstream with no gate/internal repo?
15:52:57 <mattmceuen> portdirect correct me if I'm remembering wrong
15:53:06 <portdirect> mattmceuen: correct
15:54:21 <portdirect> and those tags 1.1, 1.2 etc will correspond with a range of openstack versions
15:54:22 <jayahn> portdirect: nope. we are not deploying to prod straight from upstream.
15:54:43 <mattmceuen> anything else on this one jayahn?
15:55:09 <jayahn> i have a slightly different memory of the tagging policy. but it is okay for now
15:55:20 <jayahn> I will dig up my memory again to make sure..
15:55:28 <mattmceuen> sounds like a plan
15:55:29 <mattmceuen> #topic Source code copyright attribution
15:55:56 <mattmceuen> As discussed in OSH meeting a while back: http://eavesdrop.openstack.org/meetings/openstack_helm/2018/openstack_helm.2018-03-20-15.00.log.html#l-101
15:56:25 <mattmceuen> We're planning on replacing the "OpenStack-Helm Authors" copyright attribution with the "OpenStack Foundation", as the latter is a legal entity and the former isn't
15:56:45 <mattmceuen> Heads up that I'm planning to put in a PS this week unless that gives anyone heartburn -- lots of small changes to many files :)
15:57:33 <mattmceuen> If so -- please bring it up in the #openstack-helm channel today
15:57:42 <mattmceuen> #topic Patchsets needing review
15:57:49 <mattmceuen> Add ceph configuration for cinder-backup - https://review.openstack.org/#/c/574146/
15:57:49 <mattmceuen> Add sla check in rally task - https://review.openstack.org/#/c/580286/
15:57:49 <mattmceuen> Add rally test gate - https://review.openstack.org/#/c/582463/
15:57:49 <mattmceuen> Nova: add live_migration_interface option - https://review.openstack.org/#/c/577349/
15:57:49 <mattmceuen> [ingress] introduce keepalived sidecar for ingress vip - https://review.openstack.org/#/c/577353/
15:58:02 <mattmceuen> Let's please get some reviews on these guys this week^^^ !
15:58:22 <mattmceuen> I'll post them in the team channel as well
15:58:34 <mattmceuen> with one minute left -- any closing thoughts guys?
15:58:52 <roman_g> Summit talks?
15:59:05 <mattmceuen> LAST DAY FOR SUBMITTING SUMMIT TALKS
15:59:09 <jayahn> ah.. right. it ends today.
15:59:10 <mattmceuen> thanks roman_g that's a good one :)
15:59:29 <roman_g> reminder to you to talk to Jared
15:59:31 <roman_g> mattmceuen:
15:59:41 <mattmceuen> Appreciate all your work and insight guys -- have a great week and see you in #openstack-helm
15:59:43 <mattmceuen> #endmeeting