16:00:30 <jungleboyj> #startmeeting Cinder
16:00:31 <openstack> Meeting started Wed Jul 24 16:00:30 2019 UTC and is due to finish in 60 minutes.  The chair is jungleboyj. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:32 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:34 <openstack> The meeting name has been set to 'cinder'
16:00:47 <davidsha> Hey
16:00:53 <geguileo> hi! o/
16:00:53 <jungleboyj> Courtesy ping:  jungleboyj whoami-rajat rajinir lseki carloss pots woojay erlon geguileo eharney rosmaita enriquetaso e0ne smcginnis davidsha walshh_ xyang hemna _hemna tosky
16:01:08 <whoami-rajat> Hi
16:01:08 <enriquetaso> o/
16:01:11 <eharney> hi
16:01:12 <xyang> hi
16:01:14 <e0ne> hi
16:01:34 <jungleboyj> @!
16:01:35 <_pewp_> jungleboyj (=゚ω゚)ノ
16:01:40 <pots> o/
16:01:41 <rosmaita> o/
16:01:55 <e0ne> #link https://etherpad.openstack.org/p/cinder-train-meetings
16:02:08 <walshh_> hi
16:02:34 * smcginnis is here but not really
16:02:46 <jungleboyj> smcginnis:  NOOOOO!  You must be here.  ;-)
16:03:07 * jungleboyj is lost without ShadowPTL
16:03:38 <smcginnis> hah!
16:03:43 <jungleboyj> :-)
16:03:49 * jungleboyj defers to rosmaita instead
16:04:25 <jungleboyj> Ok.  Quite a few things to get to today.
16:04:27 <rosmaita> that won't do you any good
16:04:32 <jungleboyj> :-p
16:04:45 <jungleboyj> So, reminder that Train Milestone 2 is this week.
16:05:01 <jungleboyj> #link https://releases.openstack.org/train/schedule.html
16:05:25 <jungleboyj> That means that I will start looking at CI runs to see if they are running Python 3.
16:05:28 <e0ne> this week == tomorrow
16:05:47 <smcginnis> e0ne: ;)
16:05:48 <jungleboyj> e0ne:  ++
16:06:59 <jungleboyj> Any questions about those two items?
16:07:32 <jungleboyj> Take that as a no.
16:07:46 <jungleboyj> Cinder Mid-Cycle Topics ... Please take a look at the etherpad.
16:07:58 <jungleboyj> #link https://etherpad.openstack.org/p/cinder-train-mid-cycle-planning
16:08:20 <jungleboyj> Please add topics there so that we can make sure to have a productive mid-cycle.
16:09:32 <jungleboyj> #topic Spec: Leverage compression hardware accelerator
16:09:40 <LiangFang> hi
16:10:11 <LiangFang> thanks jungleboyj and rosmaita for giving +2
16:10:24 <jungleboyj> So, my comments have been addressed.  I think we just need Eric to sign off.
16:10:24 <jungleboyj> eharney: ^^
16:10:52 <eharney> i haven't caught up with the last round of discussion between LiangFang and Brian on this one
16:11:35 <jungleboyj> Ok.  If you can take a look that would be good.  rosmaita gave a +2
16:11:56 <rosmaita> it was software fallback if no accelerator, and a config option about whether to allow any compression or not
16:12:09 <jungleboyj> rosmaita:  Good.
16:12:18 <eharney> sounds good
16:12:47 <jungleboyj> LiangFang: Anything else to address?
16:13:10 <LiangFang> rosmaita asked me about the nova impact last week
16:13:41 <LiangFang> zhengMa has implemented the code, and it seems there is no impact
16:14:01 <rosmaita> that's good
16:14:14 <LiangFang> he has successfully created a VM using container format 'compressed'
16:14:50 <rosmaita> that's surprising, but ok
16:15:05 <jungleboyj> rosmaita: Surprising ?
16:15:37 <rosmaita> yeah, nova had to know to decompress the image before trying to use it
16:16:00 <rosmaita> thought it might just fail with unsupported format or something
16:16:27 <LiangFang> when cinder downloads the image, cinder knows the image container format
16:16:39 <LiangFang> so cinder helps to decompress it
16:17:14 <LiangFang> so what nova gets is a decompressed volume
16:17:17 <rosmaita> ok, so it was a boot from volume VM
16:17:33 <LiangFang> yes
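
A minimal sketch of the image-download path described above: decompress when the container format is 'compressed', prefer a hardware accelerator, fall back to software, and honour a config option that can disallow compression. The helper and accelerator-probe names here are hypothetical; this is not zhengMa's actual implementation.

    import gzip


    def _load_accelerator():
        # Hypothetical probe for a compression accelerator (e.g. Intel QAT).
        # Returns an object exposing .decompress(bytes) -> bytes, or None.
        return None


    def _write_volume(path, data):
        with open(path, 'wb') as f:
            f.write(data)


    def fetch_image_to_volume(image_meta, image_data, volume_path,
                              allow_compression=True):
        """Write a downloaded Glance image to a volume, decompressing if needed."""
        if image_meta.get('container_format') != 'compressed':
            _write_volume(volume_path, image_data)    # normal path, unchanged
            return

        if not allow_compression:                     # operator disabled support
            raise RuntimeError("compressed images are not allowed by config")

        accel = _load_accelerator()
        if accel is not None:
            raw = accel.decompress(image_data)        # hardware offload
        else:
            raw = gzip.decompress(image_data)         # software fallback
        _write_volume(volume_path, raw)               # nova sees a plain volume
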
16:17:44 <rosmaita> we need to check what happens if you try to just boot from image the normal way with container_format == 'compressed'
16:17:57 <smcginnis> What about when Nova doesn't use a Cinder volume?
16:18:08 <rosmaita> what smcginnis said
16:18:30 <rosmaita> because you know some user will try to use this the wrong way
16:19:13 <jungleboyj> rosmaita: ++
16:19:20 <rosmaita> i am thinking nova will fail gracefully, we just want to verify that
16:19:38 <rosmaita> so to be clear, we aren't expecting you to implement boot from compressed image in nova
16:19:42 <LiangFang> oh, we have not verified this yet
16:19:48 <rosmaita> just want to make sure nothing breaks badly
16:20:59 <jungleboyj> rosmaita:  Any other concerns there?
16:21:07 <rosmaita> no, that's all
16:21:24 <jungleboyj> Ok.  So, I think we just need eharney to review.
16:21:33 <rosmaita> it's not really a problem with the spec, just a courtesy check on behalf of nova
16:21:52 <LiangFang> thanks
16:22:06 <LiangFang> we will check as soon as possible
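
A rough sketch, using openstacksdk, of the check rosmaita asks for: upload an image with container_format 'compressed' and attempt a plain boot-from-image (no Cinder volume), expecting nova to fail gracefully. The cloud, image file, flavor, and network names are placeholders, and glance has to be configured to accept 'compressed' as a container format for the upload step to succeed.

    import openstack

    conn = openstack.connect(cloud='devstack')        # placeholder cloud name

    image = conn.image.create_image(
        name='compressed-boot-check',
        filename='cirros.img.gz',                     # placeholder image file
        disk_format='raw',
        container_format='compressed',
    )

    try:
        server = conn.compute.create_server(
            name='compressed-boot-check',
            image_id=image.id,
            flavor_id=conn.compute.find_flavor('m1.tiny').id,
            networks=[{'uuid': conn.network.find_network('private').id}],
        )
        conn.compute.wait_for_server(server)          # raises if the server errors out
        print('server went ACTIVE: unexpected, inspect the guest')
    except openstack.exceptions.SDKException as exc:
        print('nova failed gracefully as hoped: %s' % exc)
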
16:22:11 <jungleboyj> #topic Status and Zuul v3 porting of the remaining legacy Zuul jobs
16:22:28 <tosky> hi!
16:22:37 <jungleboyj> Hi.
16:22:56 <tosky> do you want me to show what I found out so far, or should I add some notes to the etherpad and then we can discuss them?
16:23:16 <smcginnis> Etherpad link?
16:23:39 <tosky> to the etherpad of the meeting, I mean, unless you prefer a separate document
16:23:49 <jungleboyj> Go ahead and share.
16:26:38 <tosky> sorry for the initial mess, I think it should be readable now on https://etherpad.openstack.org/p/cinder-train-meetings
16:26:56 <jungleboyj> Ok.  Anything that you need to highlight?
16:27:47 <tosky> first, whether there is anything I don't know and should take into account :) like some important undocumented reasons behind past design decisions
16:28:55 <tosky> second, there are a few open questions (namely, whether I can go forward with replacing the LIO jobs with their already-native counterpart from cinder-tempest-plugin, and other small architectural questions)
16:29:20 <tosky> (like whether it's fine to nuke the zeromq job from all existing branches)
16:29:44 <smcginnis> I think so. As far as I understand, the ZMQ stuff is all dead.
16:29:48 <jungleboyj> Ok.  Trying to understand all this.  Didn't know about this effort.
16:29:59 <eharney> replacing the LIO jobs should be fine as long as we end up with something that runs the same configuration and tests somewhere (LIO, Barbican, and maybe one other thing that's turned on in there that i forget)
16:30:43 <tosky> eharney: and that's the point; if you check cinder-tempest-plugin-lvm-lio, it is basically doing that already
16:30:52 <eharney> right
16:30:59 <tosky> the blacklist is a bit different and it lacks the cinderlib tests, but those are easy to fix
16:31:01 <smcginnis> jungleboyj: Infra has stated that support for those legacy jobs will be going away. Not sure on timeframe, but we need to get updated to Zuul v3 native jobs as soon as we can.
16:31:15 <jungleboyj> smcginnis:  Ah, I see.
16:31:58 <e0ne> smcginnis: thanks for this update!
16:32:04 <tosky> also, the native jobs are easier to deal with; there is no more devstack-gate in between, and in my experience modifying them is easier
16:32:18 <smcginnis> ++
16:32:28 <e0ne> tosky: +1
16:33:47 <tosky> of course there are many questions to digest but we can talk about this anytime; I'm now hanging around on #openstack-cinder too, so feel free to ping me anytime I'm connected (and/or comment on the reviews)
16:34:02 <jungleboyj> tosky:  Sounds good.
16:34:02 <smcginnis> Thanks for looking at that tosky
16:34:08 <jungleboyj> smcginnis: ++
16:34:24 <jungleboyj> tosky:  Is there anything else you need from us right now?
16:34:44 <tosky> no, I guess I have a general "go on, let's figure out the details", so that's fine, thanks!
16:34:59 <jungleboyj> tosky:  Ok great.  Thank you for working on this.
16:35:36 <jungleboyj> Ok if we move on?
16:36:44 <jungleboyj> Take that as a yes.
16:36:47 <tosky> yep
16:37:06 <jungleboyj> #topic stable branches release update
16:37:15 <jungleboyj> rosmaita: All yours.
16:37:19 <rosmaita> there was a discussion yesterday among most of the cinder stable-maint cores in #openstack-cinder about the holdup releasing cinder stable/rocky (and hence stable/queens)
16:37:25 <rosmaita> #link http://eavesdrop.openstack.org/irclogs/%23openstack-cinder/%23openstack-cinder.2019-07-23.log.html#t2019-07-23T16:50:35
16:37:31 <rosmaita> the tl;dr is we agreed to revert the patch declaring multiattach support for HPE MSA from stable/rocky
16:37:38 <rosmaita> i restored mriedem's reversion patch
16:37:45 <rosmaita> #link https://review.opendev.org/#/c/670086/
16:37:49 <rosmaita> but the commit message is kind of hyperbolic, particularly the part about no 3rd party CI running on the dothill driver
16:37:54 <rosmaita> (there is CI, it's just run on subclasses of the driver)
16:38:02 <rosmaita> i don't know how much the commit message matters, TBH
16:38:08 <rosmaita> but i do know that we (the cinder team) tend to be a bit un-conservative with respect to what we consider to be appropriate driver backports
16:38:15 <rosmaita> and in fact, we're not rejecting the backport because it could be considered to be adding a feature
16:38:22 <rosmaita> but because further testing has indicated that multiattach isn't working for that driver
16:38:29 <rosmaita> like i said, i don't know if anyone pays attention to commit messages
16:38:36 <rosmaita> but it might give us more flexibility in the future if it's more accurate in this case
16:38:55 <rosmaita> (this is where i stop to see if anyone has a comment)
16:39:15 <e0ne> I think that drivers code should follow the same policy as the rest of cinder: no feature backports
16:39:36 <e0ne> with some exceptions when we need to introduce some config option to fix some bug :(
16:40:46 <pots> i think the comment about the 3rd party CI was referring to whether the specific multiattach tests were being run correctly, which was a fair criticism.
16:41:16 <rosmaita> ok, maybe i am being too sensitive
16:41:37 <jungleboyj> pots:  Right.
16:41:52 <e0ne> afaik, we don't have good 3rd-party CI coverage for stable branches
16:42:04 <e0ne> it would be great if I'm wrong here
16:42:04 <jungleboyj> rosmaita:  I think we would find most people haven't added the multi-attach tests.
16:42:10 <jungleboyj> Probably something we should check into.
16:42:23 <rosmaita> they don't seem to be running on a lot of drivers
16:42:25 <jungleboyj> e0ne: There is also that.
16:42:37 <jungleboyj> We don't have that as a requirement though.
16:42:49 <rosmaita> ok, i withdraw my comment about the commit message
16:42:55 <jungleboyj> :-)
16:43:11 <rosmaita> but i will need a stable core to +2 the revert so i can update the release patch
16:43:26 <jungleboyj> rosmaita:  Patch?
16:43:35 <rosmaita> #link https://review.opendev.org/#/c/670086/
16:44:21 <rosmaita> ok, that's all from me
16:44:44 <jungleboyj> Ok, yeah, the commit message isn't really accurate anymore here.  I will update that and then we can get that patch in.
16:44:55 <jungleboyj> rosmaita: Make sense?
16:45:16 <smcginnis> Makes sense to me.
16:45:25 <rosmaita> ok
16:45:28 <jungleboyj> Okie dokie.  Will do that after the meeting.
16:45:51 <jungleboyj> Ok.  So now we can move on.
16:46:14 <jungleboyj> #link https://review.opendev.org/#/c/523659/21
16:46:29 <jungleboyj> A few comments have been addressed on the Infortrend driver patch.
16:46:31 * e0ne is waiting for geguileo's review
16:46:54 <geguileo> e0ne: on which patch?
16:47:06 <jungleboyj> It looks ok to me; the remaining issue, if there is one, could be fixed with a follow-up.
16:47:14 <e0ne> geguileo: actually, you did it already for my spec. thanks!
16:47:30 <geguileo> e0ne: yeah, I just did that review  XD
16:47:57 <jungleboyj> We had discussion around the Seagate driver earlier this week:
16:48:08 <jungleboyj> #link https://review.opendev.org/#/c/671195/
16:48:33 <jungleboyj> I think that we can let this slip a bit as it is a rename and pots has other patches to backport first.
16:48:42 <jungleboyj> Does anyone have an issue with that?
16:48:50 <smcginnis> Yeah, I don't see that as a new driver.
16:49:14 <jungleboyj> smcginnis:  Ok.  Good you agree there given I am kind-of close to that one.  ;-)
16:49:45 <jungleboyj> I see that the MacroSan driver was added into the list.
16:49:48 <smcginnis> It really is just a rebrand. DotHill is gone, it is now Seagate. It makes sense to get that updated to show that.
16:49:56 <whoami-rajat> jungleboyj: yep, i added it
16:49:58 <jungleboyj> Cool.
16:50:06 <jungleboyj> whoami-rajat:  So, it has a -2 from eharney
16:50:32 <whoami-rajat> jungleboyj: the maintainer keeps querying about reviews almost every day and updates the patch regularly
16:50:40 <whoami-rajat> jungleboyj: it's an old -2
16:51:00 <smcginnis> I haven't looked. Have they gone through the new driver checklist and addressed everything? Is there CI reporting now?
16:51:11 <jungleboyj> Ok.  I guess I had missed those pings.
16:51:12 <smcginnis> Last I looked it was quite a way off.
16:51:40 <jungleboyj> Yeah, I don't see a CI reporting.
16:51:49 <whoami-rajat> smcginnis: they've had the CI reporting turned off on this patch for quite a while now, but it has been reporting on other patches; it needs to report on this one too.
16:52:21 <whoami-rajat> smcginnis: seemingly, the driver checklist has been addressed (as of when I last checked; maybe I missed something)
16:52:25 <smcginnis> If it's the deadline and there hasn't been CI reporting on the new driver and other patches for at least several days, that's concerning.
16:53:11 <jungleboyj> smcginnis:  ++
16:53:28 <whoami-rajat> smcginnis: yea it is
16:54:49 <jungleboyj> So, I am concerned about trying to get that one through.
16:55:24 <smcginnis> Folks should review it and put specific issues/concerns on the review so they know what and why.
16:55:33 <jungleboyj> I also haven't followed that one.  .... So, I defer to those who have looked at it/followed it.
16:56:44 <jungleboyj> Let me take a look at that driver and we can follow up in the channel after the meeting.
16:57:06 <jungleboyj> eharney says he fixed his volume rekey spec.  I think I am good with getting that in.
16:57:53 <jungleboyj> #topic Final run-through of open patches for milestone-2
16:58:09 <jungleboyj> e0ne: geguileo has comments on your spec.
16:58:21 <jungleboyj> #link https://review.opendev.org/#/c/556529/
16:58:57 <jungleboyj> Can we get eyes on the encryption one and see if we can merge that:  https://review.opendev.org/#/c/608663/
16:59:38 <jungleboyj> Not sure if there is more discussion on those specs.
16:59:49 <jungleboyj> Just please review and respond to comments.
17:00:10 <jungleboyj> Ok.  That is all the time we have for today.
17:00:27 <smcginnis> \o
17:00:30 <jungleboyj> Thanks for joining the meeting.
17:00:35 <whoami-rajat> Thanks!
17:00:41 <jungleboyj> Talk to you all next week.
17:00:42 <geguileo> thanks!
17:00:48 <jungleboyj> #endmeeting