14:00:00 <rosmaita> #startmeeting cinder
14:00:01 <openstack> Meeting started Wed Dec 11 14:00:00 2019 UTC and is due to finish in 60 minutes.  The chair is rosmaita. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:02 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:05 <openstack> The meeting name has been set to 'cinder'
14:00:07 <smcginnis> o/
14:00:09 <LiangFang> o/
14:00:10 <rosmaita> #topic roll call
14:00:19 <rosmaita> \o
14:00:43 <jungleboyj> o/
14:00:45 <tosky> o/
14:01:59 <rosmaita> guess that's everyone
14:02:24 <rosmaita> just out of curiosity, can everyone state their UTC offset?
14:02:26 <rosmaita> -5
14:02:39 <jungleboyj> -6
14:02:53 <LiangFang> +8
14:02:54 <smcginnis> -6
14:03:02 <whoami-rajat> +5:30
14:03:17 <jungleboyj> smcginnis:  and I agree.  That is a good sign.  :-)
14:03:29 <rosmaita> :)
14:03:30 <tosky> +1
14:03:34 <smcginnis> ;)
14:03:49 <rosmaita> geguileo: i think you are +1 ?
14:04:15 <geguileo> +1 to what?
14:04:16 <tosky> he is (unless he is travelling) (even though he is physically on +0, but that's a long story)
14:04:25 <geguileo> rofl
14:04:33 <geguileo> oh TZ
14:04:52 <geguileo> yup +1 now
14:04:56 <rosmaita> i will have to get that story some time
14:04:59 <rosmaita> ok, thanks
14:05:04 <rosmaita> #topic updates
14:05:39 <rosmaita> this week is milestone-1, but since we no longer have to cut milestone releases, it's not that big a deal
14:05:58 <rosmaita> but, it's time to do our every-2-month release from stable branches
14:06:15 <rosmaita> #link https://etherpad.openstack.org/p/cinder-releases-tracking
14:06:29 <rosmaita> so, basically, cinder and os-brick could use releases
14:06:53 <rosmaita> but, there is a pep8 failure happening in brick backports
14:07:05 <rosmaita> you can see an example here: https://review.opendev.org/#/c/697116/1
14:07:38 <rosmaita> it's over W503 and W504
14:07:56 <rosmaita> which have to do with whether a binary operator should end or begin a continued line
14:08:06 <rosmaita> pep8 has gone back and forth on that
14:08:20 <rosmaita> and master and train for us ignore them both
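[For context, W503 and W504 flag the two opposite placements of a binary operator in a wrapped expression, so enabling both rejects every wrapping style; a quick illustration:

    # operator ends the line -- flagged by W504
    total = (first_value +
             second_value)

    # operator begins the continuation -- flagged by W503
    total = (first_value
             + second_value)
]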
14:08:45 <rosmaita> anyway, i think we should backport the ignore to the stable branches that don't have it
14:09:05 <rosmaita> because i'd rather not have to edit cherry picks for something silly like this
14:09:13 <rosmaita> save that for serious conflicts
14:09:32 <rosmaita> so i have proposed
14:09:35 <rosmaita> #link https://review.opendev.org/#/c/698471/
14:09:48 <rosmaita> if everyone is cool with that plan, what i'd like to do is
14:09:59 <rosmaita> get the pep8 ignore change backported
14:10:19 <rosmaita> then complete the backport of the actual fix, which is "iscsi: Add _get_device_link retry when waiting for /dev/disk/by-id/ to populate"
14:10:28 <rosmaita> and then do point releases of os-brick
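[For context, the fix being backported waits for udev to create the by-id symlink instead of failing immediately; this is not the actual patch, but the general pattern is a bounded retry along these lines (signature, names, and intervals are illustrative):

    import glob
    import time

    def _get_device_link(wwn, link_dir='/dev/disk/by-id/'):
        # right after an iSCSI login, udev may not have populated the
        # by-id symlinks yet, so poll briefly before giving up
        for _ in range(10):
            links = glob.glob(link_dir + '*' + wwn + '*')
            if links:
                return links[0]
            time.sleep(1)
        raise RuntimeError('no symlink for wwn %s in %s' % (wwn, link_dir))
]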
14:10:39 <smcginnis> Obviously the right answer is it should be end of line, but best to ignore. :D
14:11:17 <rosmaita> maybe it's because of my eyeglass prescription, but i don't see that well all the way out to column 79
14:11:24 <rosmaita> so i prefer it at the front!
14:11:36 <rosmaita> but yeah, this is a contentious issue
14:11:54 <rosmaita> but i guess the key point here is we should do it the same from master on back
14:12:01 <rosmaita> until we take a definitive stand
14:13:02 <rosmaita> ok, so if everyone is OK with that plan, please keep an eye out for the patches so we can get it merged
14:13:26 <rosmaita> i have not looked at cinder or python-cinderclient to see what they have for this
14:13:34 <rosmaita> probably should
14:13:50 <rosmaita> #action rosmaita check pep8 ignores in all cinder projects
14:13:58 <rosmaita> ok
14:14:09 <rosmaita> one more thing: cinderlib is ready for its train release
14:14:15 <rosmaita> thanks to geguileo
14:14:20 <rosmaita> this will be 1.0.0 !
14:14:35 <geguileo> awesome!! XD
14:14:52 <rosmaita> and i think since we're removing py2 support in ussuri, the first ussuri release will be 2.0.0
14:14:58 <rosmaita> that project is moving right along!
14:15:13 <Roamer`> yeah, sort of off-topic (sowwy), but I'd like to thank geguileo for his work on cinderlib - it already exposed two (very minor) bugs in the StorPool driver
14:15:39 <geguileo> Roamer`: nice!!
14:15:44 <rosmaita> Roamer`: thanks to geguileo are always on topic
14:15:50 <geguileo> lol
14:16:33 <rosmaita> that's good news though, glad to see that it's helping keep the cinder drivers stable
14:16:48 <rosmaita> ok, the final thing: branch closure situation
14:17:05 <rosmaita> #link http://lists.openstack.org/pipermail/openstack-discuss/2019-December/011378.html
14:17:29 <rosmaita> as we discussed last week, i sent a proposal to the ML to EOL the driverfixes branches, but keep o and p in EM
14:17:50 <rosmaita> from the absence of responses, it sounds like that's acceptable to the community
14:18:11 <rosmaita> so, i'll revise my EOL patch and resubmit it early next week
14:18:25 <rosmaita> (give the release team time to handle all the M-1 releases this week)
14:18:42 <rosmaita> so if anyone wants to save driverfixes/newton, this is your last chance
14:19:27 <rosmaita> i guess the only other announcement is a reminder to everyone to review open cinder-specs
14:20:26 <rosmaita> #topic the FCZM situation
14:20:59 <rosmaita> #link https://review.opendev.org/#/c/696857/
14:21:21 <rosmaita> ok, so the situation is that we currently have 2 zonemanager drivers for fibre channel
14:21:38 <rosmaita> brocade has announced EOL for its driver
14:21:46 <rosmaita> last supported release for them is Train
14:22:08 <rosmaita> unfortunately, this got announced the week before train release
14:22:20 <rosmaita> so we did not deprecate the driver in Train
14:22:32 <smcginnis> They have also reached out directly and asked us to clean things up since they are not able to.
14:23:00 <rosmaita> yes, they are dropping and running
14:23:11 <geguileo> rosmaita: lol
14:23:19 <rosmaita> so the first issue is:
14:23:54 <rosmaita> can we consider their EOL announcement to customers as a deprecation in Train, so we could remove the driver now in ussuri?
14:24:12 <rosmaita> my reason for this is that we don't know if it will run under py3
14:24:23 <rosmaita> (the py3 compat seems to have been the final straw for them)
14:25:19 <geguileo> I would hate it for us to have only 1 FCZM driver
14:25:27 <geguileo> even if we mark it as unsupported
14:25:51 <rosmaita> hemna feels that way, too
14:26:00 <jungleboyj> geguileo: ++
14:26:25 <geguileo> would the community be willing to leave it as unsupported if I can confirm it works for Py3?
14:26:41 <smcginnis> I don't like it, but if the vendor isn't going to support it, then we can only provide the one that does.
14:26:43 <geguileo> Or if I can fix it and make it work for py3?
14:27:16 <rosmaita> i am not against that, at least not short-term
14:27:36 <rosmaita> if we confirm that it works on py3, i will withdraw my proposal to remove it in ussuri
14:27:44 <smcginnis> I would prefer if we keep it unsupported in Ussuri, but I don't think it should fall on the community to support it and keep it around past that.
14:27:59 <rosmaita> yeah, we don't want to set a bad precedent
14:28:12 <smcginnis> So even if we fix any py3 issues, I think it's safer if we clearly announce it is going away in V.
14:28:40 <geguileo> I think not having that as a FCZM will be a big problem for OpenStack customers
14:29:12 <rosmaita> well, we can state "subject to removal in V as per the openstack deprecation policy"
14:29:25 <rosmaita> that gives us some room to keep it around longer if we decide it's a good idea
14:29:25 <geguileo> rosmaita: and what are customers going to do?
14:29:26 <jungleboyj> geguileo: ++
14:29:38 <geguileo> rosmaita: ok
14:29:52 <jungleboyj> Too bad hemna isn't here.  He thought maybe we could help leverage HPE.
14:30:07 <rosmaita> geguileo: i really don't know -- i would hope that customers would lean on brocade to keep it around
14:30:09 <geguileo> I will look for a cheap brocade FC switch I can use to test/fix the driver at home
14:30:12 <smcginnis> Well, if Brocade isn't going to support it, I think that's a pretty strong statement.
14:30:36 <smcginnis> Customers can complain to them and get them to bring it back if it's that badly needed.
14:30:43 <geguileo> smcginnis: it is, but it will have a big effect on customers' perception of Cinder/OpenStack
14:31:06 <rosmaita> i think that's already happened, though, by their announcement
14:31:16 <smcginnis> Only with FC. I don't think that's too big of a perception issue. :D
14:31:21 <smcginnis> rosmaita: ++
14:31:21 <geguileo> as a customer I would complain, and if that gets me nowhere I may decide to go to another platform... :-(
14:31:34 <rosmaita> right
14:32:01 <geguileo> ok, let's go with rosmaita's suggestion and see if I can get this fixed and tested
14:32:05 <rosmaita> ok, well the key issue i wanted to sort out today was our community attitude
14:32:11 <geguileo> and I can bring the topic back once I do that
14:32:11 <rosmaita> so sounds like:
14:32:34 <rosmaita> (1) we deprecate it now "subject to removal in V as per policy"
14:32:53 <rosmaita> (2) geguileo figures out the py3 situation
14:33:13 <rosmaita> and then we discuss again
14:33:24 <geguileo> sounds good to me
14:33:41 <jungleboyj> Great.
14:33:46 <rosmaita> ok, thanks geguileo -- you are addressing my major concern
14:34:07 <jungleboyj> geguileo: To the rescue again.
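[For context, a deprecation in this spirit is usually just a warning logged at driver initialization plus a release note; a minimal sketch assuming oslo_log -- the wording and placement are illustrative, the real patch is what counts:

    from oslo_log import log as logging

    LOG = logging.getLogger(__name__)

    def _warn_deprecated():
        # would be called once when the Brocade FCZM driver loads
        LOG.warning('The Brocade FC Zone Manager driver is deprecated '
                    'and subject to removal in the V release, per the '
                    'standard OpenStack deprecation policy.')
]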
14:34:19 <rosmaita> in the meantime, i don't know if there's anything we can do to encourage more FC participation?
14:34:50 <geguileo> rosmaita: maybe the effort we discussed in the PTG
14:34:57 <geguileo> of making it easy to have a CI
14:35:03 <geguileo> that would lessen the burden
14:35:10 <rosmaita> true
14:35:23 <rosmaita> ok, that gives us a reason to keep that as a priority
14:36:00 <rosmaita> it's weird -- everyone wants excellent performance, but they also want to use commodity hardware
14:36:19 <rosmaita> but that's off topic
14:36:22 <geguileo> I believe we discussed the software factory thingy that the RDO folks use for their 3rd party CI
14:36:30 <geguileo> and I think tosky started documenting some of it
14:37:42 <rosmaita> i'll check in with tosky and see what i can do to help
14:38:02 <jungleboyj> It would be so nice if someone could make 3rd Party CI easy.
14:38:16 <rosmaita> amen, you are singing to the choir
14:38:20 <tosky> I collected some information; the softwarefactory people are going to release a new version these days, with more documentation
14:38:30 <rosmaita> great
14:38:33 <tosky> the idea is that following their documentation should be enough to get started
14:38:54 <geguileo> tosky: that's awesome!!
14:39:16 <rosmaita> and we can always submit clarifying patches
14:39:22 <rosmaita> that's really good news
14:39:35 <rosmaita> #topic support volume local cache
14:39:46 <rosmaita> LiangFang: that's you
14:40:07 <LiangFang> I'm working on the poc code change
14:40:18 <LiangFang> and the poc is almost finished
14:40:30 <LiangFang> will push it up for review in the coming days
14:40:52 <LiangFang> one question is
14:41:08 <LiangFang> the cache software will have some configuration
14:41:11 <LiangFang> e.g. cache id
14:41:35 <LiangFang> normally if we use one fast ssd as cache, the cache id will be 1
14:41:52 <LiangFang> if we use two ssds, it may be 2
14:42:37 <LiangFang> then if we use this ssd to cache a remote volume, we need to specify the cache id (i.e. which fast ssd to use)
14:43:03 <LiangFang> but we can specify the cache id as a big number, e.g. 9999
14:43:37 <LiangFang> choice 1: the cache id can be set in nova-cpu.conf
14:44:29 <LiangFang> choice 2: the cache id can be hard coded in os-brick, but the number should be large enough, e.g. 9999, so it will not collide with ids already in use
14:45:06 <LiangFang> choice 2 is easy to code
14:45:35 <LiangFang> choice 1 needs a configuration change in nova
14:45:52 <LiangFang> I prefer choice 1.
14:46:10 <geguileo> LiangFang: I am not following sorry
14:46:17 <geguileo> LiangFang: where do we use this cache ID?
14:46:32 <geguileo> Is this ID something that matches a hardware ID of the cache?
14:46:41 <rosmaita> is the issue that there may be multiple caches available, and we need to know which one to use?
14:46:42 <geguileo> what if there are multiple caches?
14:46:47 <LiangFang> when caching a remote volume, we should specify which cache to use
14:46:57 <LiangFang> rosmaita: yes
14:47:31 <LiangFang> normally there is one cache in the system, but there can be multiple caches
14:47:34 <geguileo> what if we want to use multiple caches?
14:47:51 <LiangFang> I just need one cache
14:48:15 <geguileo> but what if you have 2 and you want to use both?
14:48:33 <geguileo> we don't need to solve it now, but the solution should be able to support it in the future
14:48:38 <geguileo> without having to change everything
14:48:46 <LiangFang> I plan to use only one
14:49:10 <LiangFang> the cache id is increased one by one by default if not specified
14:49:11 <geguileo> but someone else may want to use 2, right?
14:49:25 <geguileo> LiangFang: is increased by the Linux system?
14:49:38 <LiangFang> by the opencas software
14:50:14 <geguileo> and that ID may change when you reboot the system?
14:50:22 <LiangFang> it can add more than one cache. the first cache's id is 1 by default, the second would be 2 by default
14:50:35 <geguileo> what happens if you reboot?
14:50:44 <geguileo> can they have different IDs then?
14:50:52 <LiangFang> it can be configured in opencas configuration file, or use the opencas admin tool
14:51:16 <geguileo> don't caches have a unique ID somewhere?
14:51:22 <LiangFang> I mean, os-brick doesn't know which cache to use
14:51:43 <LiangFang> one way is to let nova tell os-brick
14:52:04 <geguileo> yes, but if nova tells os-brick use 1
14:52:07 <LiangFang> another way is os-brick just use a hard coded cache id, e.g. 9999
14:52:11 <geguileo> and the system has 1 and 2
14:52:14 <geguileo> then we reboot
14:52:26 <geguileo> and now the cache that was 1 before is now using id 2
14:52:34 <geguileo> then we won't be using the cache we want
14:53:10 <LiangFang> the cache id will not change after reboot
14:53:18 <geguileo> oh, ok
14:53:21 <geguileo> then we are good
14:53:30 <LiangFang> it is configured in opencas configuration file
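[For context, a sketch of how Open CAS pins cache ids; the exact file layout and casadm flags below are assumptions and should be checked against the opencas documentation:

    # /etc/opencas/opencas.conf (illustrative)
    [caches]
    # cache id   device          cache mode
    1            /dev/nvme0n1    WT

    [cores]
    # cache id   core id   device
    1            1         /dev/sdb

    # or start a cache with an explicit id via the admin tool:
    #   casadm -S -i 1 -d /dev/nvme0n1 -c wt
]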
14:53:38 <geguileo> I prefer the nova conf, same as you
14:53:54 <LiangFang> the thing is: how does os-brick know which cache to use
14:54:02 <LiangFang> one way is: nova tells it
14:54:17 <LiangFang> another is: hard code it in os-brick
14:54:41 <geguileo> the nova approach seems like the right one
14:54:44 <LiangFang> the second way is easy, but we'd need to document that the cache id is 9999
14:55:10 <jungleboyj> geguileo: ++
14:55:11 <rosmaita> ok, that answers my question ... the operator would have to configure a cache with id 9999
14:55:31 <jungleboyj> Hardcoded values always seem dangerous to me.
14:55:37 <LiangFang> rosmaita: yes, if 9999 is hard coded in os-brick
14:55:46 <LiangFang> ok...
14:56:04 <rosmaita> so with choice 1, you would define a dedicated cache id for brick to use in nova.conf ?
14:56:09 <LiangFang> so nova configuration file need to be changed
14:56:25 <LiangFang> rosmaita: yes
14:56:43 <rosmaita> ok
14:56:50 <LiangFang> but 9999 would be safe for opencas forever
14:56:50 <geguileo> I think that's the best approach
14:56:59 <LiangFang> ok
14:57:12 <geguileo> I don't like hardcoding magical numbers in our code
14:57:19 <rosmaita> probably everyone will use 9999, but making this configurable is better
14:57:32 <LiangFang> ok
14:57:51 <LiangFang> so I will change my poc code
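[To make choice 1 concrete, a hypothetical nova-cpu.conf snippet; the option name and section are purely illustrative, pending whatever interface the poc review actually proposes:

    [compute]
    # id of the opencas cache reserved for volume caching; nova would
    # pass this down to os-brick (hypothetical option, not an agreed API)
    volume_local_cache_id = 1
]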
14:57:51 <rosmaita> ok, we are down to 2.5 minutes
14:58:04 <rosmaita> LiangFang: did you get all the answers you need?
14:58:12 <rosmaita> (for now)
14:58:32 <LiangFang> I'm writing a white paper about this with our telecom customer over the next 3 weeks, will let you know when we're finished
14:58:33 <geguileo> LiangFang: thanks
14:58:44 <geguileo> LiangFang: awesome!!
14:58:47 <LiangFang> rosmaita: yes, thanks
14:59:02 <rosmaita> yes, it will be helpful to see how all this works for real
14:59:09 <rosmaita> ok, 1 minute for
14:59:12 <rosmaita> #topic open discussion
14:59:47 <rosmaita> ok, well thanks for a productive meeting everyone!
14:59:56 <jungleboyj> Thanks rosmaita !
15:00:00 <rosmaita> see you next week
15:00:06 <rosmaita> don't forget to review cinder-specs!
15:00:07 <LiangFang> see you, thanks
15:00:12 <rosmaita> #endmeeting