16:00:19 <jungleboyj> #startmeeting Cinder
16:00:20 <openstack> Meeting started Wed Jun 26 16:00:19 2019 UTC and is due to finish in 60 minutes.  The chair is jungleboyj. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:21 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:23 <openstack> The meeting name has been set to 'cinder'
16:00:29 <whoami-rajat> Hi
16:00:42 <enriquetaso> o/
16:00:54 <jungleboyj> courtesy ping: jungleboyj whoami-rajat rajinir lseki carloss pots woojay erlon geguileo eharney rosmaita enriquetaso e0ne smcginnis davidsha walshh_ xyang hemna _hemna
16:01:03 <davidsha> o/
16:01:05 <lseki> o/
16:01:06 <geguileo> hi! o/
16:01:07 <smcginnis> o/
16:01:08 <woojay> hi
16:01:10 <e0ne> hi
16:01:10 <walshh_> hi
16:01:22 <jungleboyj> @!
16:01:22 <_pewp_> jungleboyj ヾ(-_-;)
16:02:50 <jungleboyj> Ok.  Looks like we have a decent crowd.
16:02:55 <jungleboyj> Let's get started.
16:02:59 <_erlon_> Hey
16:03:06 <jungleboyj> #topic announcements
16:03:27 <jungleboyj> Don't really have anything major here other than the usual reminder that we are heading toward Milestone 2
16:03:56 <whoami-rajat> jungleboyj: spec freeze?
16:04:14 <jungleboyj> Also the friendly reminder to start thinking about the Train Mid-Cycle:
16:04:18 <jungleboyj> #link https://etherpad.openstack.org/p/cinder-train-mid-cycle-planning
16:04:21 <jungleboyj> whoami-rajat:  Yes.
16:04:48 <jungleboyj> I still need to follow up on trying to get a discounted hotel rate.
16:05:38 <jungleboyj> I need to go hide somewhere and do spec/driver reviews.
16:06:25 <jungleboyj> That was all I had for announcements.  Anyone have anything else to mention there?
16:07:12 <jungleboyj> Ok.  Moving on then.
16:07:19 <jungleboyj> #topic replication
16:07:25 <jungleboyj> walshh_:  You here?
16:07:28 <walshh_> yes
16:07:40 <jungleboyj> The floor is yours.
16:07:50 <walshh_> just an enquiry on replication_device
16:08:09 <walshh_> right now there is a 1:1 relationship between backend and replication
16:08:36 <_erlon_> walshh_: not necessarily
16:08:38 <walshh_> just wondering is there a reason why it could not be an extra spec on the volume type instead
16:08:47 <_erlon_> it depends on how you implement it
16:09:30 <walshh_> ok, so it is possible to have multiple replication types per backend right now?
16:09:34 <_erlon_> walshh_: you choose how many replication_devices you wanna have
16:09:47 <_erlon_> walshh_: yes
16:10:11 <_erlon_> let's say you configure [b1]
16:10:34 <_erlon_> [b1]
16:10:35 <_erlon_> replication_device=repdev1
16:10:40 <_erlon_> replication_device=repdev2
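[Editor's note: the shorthand `[b1]` block above would, in a real cinder.conf, use the documented key:value form of the `replication_device` option. A hedged sketch, with placeholder backend IDs and addresses (keys other than `backend_id` are driver-specific):

```ini
[b1]
# Each replication_device line defines one replication target.
# backend_id names the target; remaining keys are driver-specific.
replication_device = backend_id:repdev1,san_ip:192.0.2.10
replication_device = backend_id:repdev2,san_ip:192.0.2.11
```
]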
16:10:43 <walshh_> but if I have 2 replication types won't both be used for each volume?
16:11:18 <walshh_> I would like to be able to pick which type I use on a per volume type basis
16:11:32 <_erlon_> I'm not sure exactly what you are trying to achieve
16:12:16 <_erlon_> so, each volume would be cast to a different device?
16:12:33 <walshh_> we support 3 different types of replication
16:12:47 <walshh_> synchronous, asynchronous and metro
16:13:01 <jungleboyj> So, I would think that you could set up your driver to do that based on an extra spec.
16:13:13 <_erlon_> walshh_: yes, so at volume creation you want to choose one of them
16:13:15 <jungleboyj> Then have different volume types for each type.
16:13:21 <walshh_> correct
16:14:01 <_erlon_> jungleboyj: +1, yes, your driver needs to understand/parse that extra-spec and choose
16:14:08 <walshh_> volume_type_1 = 'async'
16:14:23 <walshh_> volume_type_2 = 'sync' etc
16:14:57 <walshh_> I don't think this is possible right now.
16:15:11 <_erlon_> yes, don't forget to report that in your backend
16:15:20 <walshh_> but please correct me if I am wrong
16:15:52 <_erlon_> it would be something like:
16:16:33 <_erlon_> cinder type-key rep1 set rep_sync_type=async
16:16:42 <_erlon_> cinder type-key rep2 set rep_sync_type=sync
16:17:30 <_erlon_> the backend/pool must report the rep_sync_type capability, so the scheduler will not filter it out
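[Editor's note: a minimal sketch of the driver-side logic _erlon_ describes, matching the `rep_sync_type` type-key from the commands above. The helper name and the device-list shape are hypothetical, not an existing Cinder API:

```python
# Hypothetical sketch: choose a configured replication_device based on the
# volume type's rep_sync_type extra spec (e.g. 'sync', 'async', 'metro').
# This is illustrative only, not actual Cinder driver code.

def pick_replication_device(extra_specs, replication_devices,
                            default_type="async"):
    """Return the first replication_device whose rep_sync_type matches the
    volume type's rep_sync_type extra spec (falling back to default_type)."""
    wanted = extra_specs.get("rep_sync_type", default_type)
    for dev in replication_devices:
        if dev.get("rep_sync_type") == wanted:
            return dev
    raise ValueError("no replication_device supports rep_sync_type=%s" % wanted)
```
]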
16:18:24 <walshh_> thanks Erlon, perhaps we could take this to openstack-cinder rather than me hijacking the meeting.
16:18:36 <jungleboyj> walshh_:  ++ Sounds good.
16:18:45 <jungleboyj> Sounds like just a matter of designing and coding.
16:18:46 <_erlon_> walshh_: sure, Ill be happy to help
16:18:48 <jungleboyj> :-)
16:18:57 <walshh_> great, thanks guys
16:19:00 <_erlon_> I'm doing the same implementation for our driver :)
16:19:12 <walshh_> better again!
16:19:24 <jungleboyj> _erlon_:  Oh, that is good.  Having similar implementations would be good then.
16:19:41 <jungleboyj> walshh_:  Next topic is yours as well.
16:20:00 <walshh_> yes, we had offered to test the FC portion of this blueprint
16:20:03 <jungleboyj> #topic Validate Volume WWN Upon Connect
16:20:20 <jungleboyj> #link https://blueprints.launchpad.net/cinder/+spec/validate-volume-wwn-upon-connect
16:21:23 <walshh_> We can still help out with this if still required
16:21:37 <jungleboyj> Hmmm, Good question.
16:21:48 <jungleboyj> avishay worked on that a bit and then disappeared.
16:22:04 <walshh_> I'm not sure if anything was done in os-brick for FC yet
16:22:06 <jungleboyj> geguileo:  You know anything about this?
16:22:30 <geguileo> the WWN validation on connection?
16:23:00 <jungleboyj> Yes for FC.
16:23:12 <geguileo> I remember the general idea
16:23:22 <geguileo> but I don't remember the status
16:23:47 <jungleboyj> We got changes in place for iSCSI but don't think that any follow up happened for FC.
16:24:04 <geguileo> we probably didn't add the feature there  :-(
16:24:53 <walshh_> anyway, offer still holds, if required.  Just thought I would follow up
16:25:36 <jungleboyj> Ok.  Could ping Avishay and see if there was any plan to do so.
16:27:24 <jungleboyj> I can send him a note and see if we get any response.
16:27:35 <walshh_> thanks, that is all I had
16:27:42 <jungleboyj> #action jungleboyj to follow-up with Avishay.
16:27:43 <hemna> would it help to have os-brick report the wwn of the volume in the connector?
16:28:47 <jungleboyj> I don't remember the details there.
16:30:58 <jungleboyj> So, the change was to check the WWN if it was provided in the connector.
16:31:09 <jungleboyj> #link https://review.opendev.org/#/c/633676
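[Editor's note: a simplified sketch of the check the linked iSCSI change introduced, comparing the WWN the backend reports against the WWN discovered on the attached device. The function name is hypothetical and this is not the actual os-brick code:

```python
# Hypothetical sketch of validate-volume-wwn-upon-connect: if the backend
# reported a WWN in the connection properties, verify the attached device
# actually has that WWN before handing it over. Not real os-brick code.

def validate_volume_wwn(expected_wwn, discovered_wwn):
    """Return True if the discovered device WWN matches the one the backend
    reported; comparison is case-insensitive.  An empty/missing expected
    WWN means the backend did not report one, so there is nothing to check."""
    if not expected_wwn:
        return True
    return expected_wwn.lower() == discovered_wwn.lower()
```
]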
16:32:25 <jungleboyj> Anyway.
16:32:34 <jungleboyj> #topic open discussion
16:33:00 <whoami-rajat> i guess this was reverted here https://review.opendev.org/#/c/642648/
16:33:03 <jungleboyj> Does anyone have other topics for today?
16:33:07 <whoami-rajat> jungleboyj:  ^
16:33:32 <jungleboyj> whoami-rajat:  Oh yeah.  I forgot that.
16:33:51 <jungleboyj> That would explain why there hasn't been any follow-up.
16:34:21 <hemna> yah seems this needs some more investigation/work
16:34:44 <jungleboyj> hemna:  ++
16:35:28 <hemna> ceph-iscsi update?
16:36:05 <jungleboyj> hemna:  Sure!
16:36:15 <hemna> https://review.opendev.org/#/c/667232/
16:36:20 <hemna> so I've been working on that a few days now
16:36:41 <hemna> that's the devstack-plugin-ceph patch to do all the work to install all the requirements
16:36:53 <hemna> for getting ceph-iscsi installed and configured in devstack
16:37:06 <hemna> been running into various things
16:37:32 <hemna> like, even though ubuntu's 4.15 kernel says it has the modules needed to run tcmu-runner, it doesn't work
16:37:59 <hemna> I think the ceph-iscsi team had patches against the required modules that only landed in the kernel in 4.16
16:38:06 <hemna> so I'm just ignoring that for now
16:38:20 <hemna> and assuming that we'll get a way to work in CI
16:38:35 <hemna> so in the mean time I've gotten most of the devstack plugin working
16:38:44 <hemna> but just ran into an ipv6 issue
16:39:03 <hemna> evidently the rbd-target-gw and rbd-target-api only work on systems that have ipv6 enabled.
16:39:13 <hemna> which my vm doesn't
16:39:35 <hemna> does anyone know if our CI images have ipv6 ?
16:39:45 <smcginnis> I believe so.
16:39:53 <hemna> I couldn't get neutron to start in a vm where ipv6 was enabled on the host os and guest os
16:40:08 <hemna> ran into a known bug with setting some socket permissions
16:40:25 <clarkb> you have to enable RAs
16:40:35 <hemna> RAs ?
16:41:00 <clarkb> router advertisements. At least for the nodes zuul uses, the default on linux when a node itself sends RAs is to not accept RAs
16:41:16 <clarkb> and neutron sends RAs, so you have to ask the linux kernel nicely to accept RAs as well via sysctl
16:41:34 <clarkb> (I don't remember which sysctl it is though)
16:41:37 <hemna> ok, I have no clue what any of that is
16:41:44 * hemna is not a networking guy
16:41:54 <clarkb> router advertisements are how nodes discover their IPv6 address
16:41:56 <hemna> neutron is a black hole to me
16:42:12 <clarkb> so your test VM likely needs to accept them to learn its address then separately neutron sends them for the networks it manages
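[Editor's note: the RA behaviour clarkb describes is governed by the standard Linux `net.ipv6.conf.<iface>.accept_ra` sysctls (0 = ignore RAs, 1 = accept, 2 = accept even when forwarding). A small sketch that inspects them; the `procfs_root` parameter exists only to make the helper testable:

```python
import os

def accept_ra_settings(procfs_root="/proc/sys"):
    """Read net.ipv6.conf.<iface>.accept_ra for every interface.
    Returns {} if the ipv6 conf tree is absent (ipv6 disabled / not Linux)."""
    base = os.path.join(procfs_root, "net", "ipv6", "conf")
    settings = {}
    if not os.path.isdir(base):
        return settings
    for iface in os.listdir(base):
        path = os.path.join(base, iface, "accept_ra")
        try:
            with open(path) as f:
                settings[iface] = int(f.read().strip())
        except (OSError, ValueError):
            # interface entry without a readable accept_ra knob
            pass
    return settings
```
]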
16:42:26 <hemna> I worked around it by disabling ipv6
16:42:29 <clarkb> but ya ipv6 should work fine on the zuul jobs
16:42:33 <clarkb> we run many ipv6 tests
16:42:33 <hemna> as ipv6 seems like a pain anyway
16:43:00 <hemna> ok, so I'll just hope/pray that the ipv6 requirements for the ceph-iscsi daemons will work there
16:43:03 <hemna> :)
16:43:09 <hemna> we'll find out when CI runs I suppose
16:43:47 <hemna> ok, so I'll try and wrap up the devstack plugin work today or tomorrow
16:44:00 <hemna> and then figure out how to get CI firing that up to see if it works
16:44:06 <hemna> then we can CI the driver patch too
16:44:14 <hemna> that's the plan at least.
16:45:03 <hemna> jungleboyj: that's it from me.
16:45:23 <whoami-rajat> geguileo: quick question (since i couldn't find you in #openstack-cinder), cinderlib tests run in the cinder-tempest-dsvm-lvm-lio-barbican job right?
16:45:24 <jungleboyj> Sounds good.  Thank you for continuing to work on that.
16:46:15 <geguileo> whoami-rajat: sorry, I had problems with the bouncer and it didn't rejoin the cinder channel  :-(
16:47:15 <geguileo> whoami-rajat: that is correct
16:47:31 <whoami-rajat> geguileo: oh no problem
16:47:48 <geguileo> whoami-rajat: it is running on LVM and Ceph CI jobs
16:49:43 <jungleboyj> Ok.  Anything else we need to cover today?
16:50:29 <whoami-rajat> geguileo: i figured out some of the cinderlib tests failed since i constrained the volume_type_id field to NOT NULL in the db, so the cinderlib tests don't call c-api to create_volume but rather use cinderlib's own method in the gate jobs, right?
16:50:49 <geguileo> whoami-rajat: yes, that is correct
16:51:13 <geguileo> whoami-rajat: that would require a change to cinderlib as well
16:51:29 <geguileo> whoami-rajat: what's the review?
16:51:56 <whoami-rajat> geguileo: https://review.opendev.org/#/c/639180/
16:52:17 <whoami-rajat> geguileo: ok, that's what i thought. Thanks for the help.
16:52:48 <whoami-rajat> geguileo: tests failing http://logs.openstack.org/80/639180/22/check/cinder-tempest-dsvm-lvm-lio-barbican/dcc733e/job-output.txt.gz#_2019-06-26_13_31_35_595974
16:54:00 <jungleboyj> Anything else?
16:54:03 <jungleboyj> :-)
16:54:09 <geguileo> whoami-rajat: yeah, the issue is like you said, because you've restricted it to be non-null
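[Editor's note: a minimal demonstration of the failure mode being discussed: once a column is constrained NOT NULL, inserts that omit it (as cinderlib's direct volume creation path did for volume_type_id) are rejected. The schema here is illustrative, not Cinder's real one:

```python
import sqlite3

# Illustrative only: a volumes table with volume_type_id NOT NULL, as in the
# review under discussion. Inserting without it raises IntegrityError.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE volumes (id TEXT PRIMARY KEY, "
             "volume_type_id TEXT NOT NULL)")
conn.execute("INSERT INTO volumes VALUES ('v1', 'default-type')")  # accepted
try:
    conn.execute("INSERT INTO volumes (id) VALUES ('v2')")  # no type id
    failed = False
except sqlite3.IntegrityError:
    failed = True  # NOT NULL constraint rejects the row
```
]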
16:54:32 <whoami-rajat> jungleboyj: sorry for holding the meeting, that's all from my side :)
16:54:45 <jungleboyj> whoami-rajat:  No worries.  It is Open Discussion
16:54:55 <whoami-rajat> geguileo: thanks :)
16:55:14 <jungleboyj> If no one else has topics I will wrap up the meeting.
16:55:27 <jungleboyj> Thanks everyone for joining.
16:55:36 <jungleboyj> Go forth and review.
16:55:54 <jungleboyj> #endmeeting