16:00:19 #startmeeting Cinder
16:00:20 Meeting started Wed Jun 26 16:00:19 2019 UTC and is due to finish in 60 minutes. The chair is jungleboyj. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:21 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:23 The meeting name has been set to 'cinder'
16:00:29 Hi
16:00:42 o/
16:00:54 courtesy ping: jungleboyj whoami-rajat rajinir lseki carloss pots woojay erlon geguileo eharney rosmaita enriquetaso e0ne smcginnis davidsha walshh_ xyang hemna _hemna
16:01:03 o/
16:01:05 o/
16:01:06 hi! o/
16:01:07 o/
16:01:08 hi
16:01:10 hi
16:01:10 hi
16:01:22 @!
16:01:22 <_pewp_> jungleboyj ヾ(-_-;)
16:02:50 Ok. Looks like we have a decent crowd.
16:02:55 Let's get started.
16:02:59 <_erlon_> Hey
16:03:06 #topic announcements
16:03:27 Don't really have anything major here other than the usual reminder that we are heading toward Milestone 2.
16:03:56 jungleboyj: spec freeze?
16:04:14 Also the friendly reminder to start thinking about the Train Mid-Cycle:
16:04:18 #link https://etherpad.openstack.org/p/cinder-train-mid-cycle-planning
16:04:21 whoami-rajat: Yes.
16:04:48 I still need to follow up on trying to get a discounted hotel rate.
16:05:38 I need to go hide somewhere and do spec/driver reviews.
16:06:25 That was all I had for announcements. Anyone have anything else to mention there?
16:07:12 Ok. Moving on then.
16:07:19 #topic replication
16:07:25 walshh_: You here?
16:07:28 yes
16:07:40 The floor is yours.
16:07:50 just an enquiry on replication_device
16:08:09 right now there is a 1:1 relationship between backend and replication
16:08:36 <_erlon_> walshh_: not necessarily
16:08:38 just wondering, is there a reason why it could not be an extra spec on the volume type instead?
16:08:47 <_erlon_> it depends on how you implement it
16:09:30 ok, so it is possible to have multiple replication types per backend right now?
16:09:34 <_erlon_> walshh_: you choose how many replication_devices you wanna have
16:09:47 <_erlon_> walshh_: yes
16:10:11 <_erlon_> let's say you configure [b1]
16:10:34 <_erlon_> [b1]
16:10:35 <_erlon_> replication_device=repdev1
16:10:36 <_erlon_> replication_device=repdev1
16:10:40 <_erlon_> replication_device=repdev2
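Written out as a cinder.conf stanza, erlon's shorthand above would look roughly like the sketch below. This is illustrative only: "repdev1"/"repdev2" and the SAN details are placeholders, and apart from the conventional backend_id key the accepted key:value pairs are driver-specific, so the driver's documentation is the authority here.

    [b1]
    volume_backend_name = b1
    replication_device = backend_id:repdev1,san_ip:192.0.2.10,san_login:admin,san_password:pw1
    replication_device = backend_id:repdev2,san_ip:192.0.2.11,san_login:admin,san_password:pw2

Each repeated replication_device line configures one more replication target for the backend, which is what allows more than a 1:1 backend-to-replication relationship.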
16:10:43 but if I have 2 replication types won't both be used for each volume?
16:11:18 I would like to be able to pick which type I use on a per volume type basis
16:11:32 <_erlon_> I'm not sure exactly what you are trying to achieve
16:12:16 <_erlon_> so, each volume would be cast to a different device?
16:12:33 we support 3 different types of replication
16:12:47 synchronous, asynchronous and metro
16:13:01 So, I would think that you could set up your driver to do that based on an extra spec.
16:13:13 <_erlon_> walshh_: yes, so at volume creation you want to choose one of them
16:13:15 Then have different volume types for each type.
16:13:21 correct
16:14:01 <_erlon_> jungleboyj: +1, yes, your driver needs to understand/parse that extra spec and choose
16:14:08 volume_type_1 = 'async'
16:14:23 volume_type_2 = 'sync' etc
16:14:57 I don't think this is possible right now.
16:15:11 <_erlon_> yes, don't forget to report that in your backend
16:15:20 but please correct me if I am wrong
16:15:52 <_erlon_> it would be something like:
16:16:33 <_erlon_> cinder type-key rep1 set rep_sync_type=async
16:16:42 <_erlon_> cinder type-key rep2 set rep_sync_type=sync
16:17:30 <_erlon_> the backend/pool must report both under rep_sync_type, so the scheduler will not filter it out
16:18:24 thanks Erlon, perhaps we could take this to openstack-cinder rather than me hijacking the meeting.
16:18:36 walshh_: ++ Sounds good.
16:18:45 Sounds like just a matter of designing and coding.
16:18:46 <_erlon_> walshh_: sure, I'll be happy to help
16:18:48 :-)
16:18:57 great, thanks guys
16:19:00 <_erlon_> I'm doing the same implementation for our driver :)
16:19:12 better again!
16:19:24 _erlon_: Oh, that is good. Having similar implementations would be good then.
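In driver code, the extra-spec handling erlon describes comes down to something like the Python sketch below. This is a minimal illustration, not an existing Cinder API: the rep_sync_type key matches the type-key commands above, but the function name, the 'async' default, and the error handling are all assumptions a real driver would make its own choices about.

    # Hypothetical helper a driver could call from create_volume().
    SUPPORTED_REP_TYPES = ('sync', 'async', 'metro')

    def get_replication_type(extra_specs):
        """Pick the replication mode requested via the volume type's extra specs."""
        rep_type = (extra_specs or {}).get('rep_sync_type', 'async')
        if rep_type not in SUPPORTED_REP_TYPES:
            raise ValueError('unsupported rep_sync_type: %s' % rep_type)
        return rep_type

    # Per erlon's note, _update_volume_stats() would also advertise the
    # supported modes so the scheduler does not filter the backend out:
    #     stats['rep_sync_type'] = list(SUPPORTED_REP_TYPES)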
16:19:41 walshh_: Next topic is yours as well.
16:20:00 yes, we had offered to test the FC portion of this blueprint
16:20:03 #topic Validate Volume WWN Upon Connect
16:20:20 #link https://blueprints.launchpad.net/cinder/+spec/validate-volume-wwn-upon-connect
16:21:23 We can still help out with this if still required
16:21:37 Hmmm, good question.
16:21:48 avishay worked on that a bit and then disappeared.
16:22:04 I'm not sure if anything was done in os-brick for FC yet
16:22:06 geguileo: You know anything about this?
16:22:30 the WWN validation on connection?
16:23:00 Yes, for FC.
16:23:12 I remember the general idea
16:23:22 but I don't remember the status
16:23:47 We got changes in place for iSCSI but don't think that any follow-up happened for FC.
16:24:04 we probably didn't add the feature there :-(
16:24:53 anyway, offer still holds, if required. Just thought I would follow up
16:25:36 Ok. Could ping Avishay and see if there was any plan to do so.
16:27:24 I can send him a note and see if we get any response.
16:27:35 thanks, that is all I had
16:27:42 #action jungleboyj to follow up with Avishay.
16:27:43 would it help to have os-brick report the wwn of the volume in the connector?
16:28:47 I don't remember the details there.
16:30:58 So, the change was to check the WWN if it was provided in the connector.
16:31:09 #link https://review.opendev.org/#/c/633676
16:32:25 Anyway.
16:32:34 #topic open discussion
16:33:00 I guess this was reverted here: https://review.opendev.org/#/c/642648/
16:33:03 Does anyone have other topics for today?
16:33:07 jungleboyj: ^
16:33:32 whoami-rajat: Oh yeah. I forgot that.
16:33:51 That would explain why there hasn't been any follow-up.
16:34:21 yah, seems this needs some more investigation/work
16:34:44 hemna: ++
16:35:28 ceph-iscsi update?
16:36:05 hemna: Sure!
16:36:15 https://review.opendev.org/#/c/667232/
16:36:20 so I've been working on that for a few days now
16:36:41 that's the devstack-plugin-ceph patch to do all the work to install all the requirements
16:36:53 for getting ceph-iscsi installed and configured in devstack
16:37:06 been running into various things
16:37:32 like, even though ubuntu's 4.15 kernel says it has the modules needed to run tcmu-runner, it doesn't work
16:37:59 I think the ceph-iscsi team had patches against the required modules that only landed in the kernel in 4.16
16:38:06 so I'm just ignoring that for now
16:38:20 and assuming that we'll get a way to work in CI
16:38:35 so in the meantime I've gotten most of the devstack plugin working
16:38:44 but just ran into an ipv6 issue
16:39:03 evidently the rbd-target-gw and rbd-target-api only work on systems that have ipv6 enabled.
16:39:13 which my vm doesn't
16:39:35 does anyone know if our CI images have ipv6?
16:39:45 I believe so.
16:39:53 I couldn't get neutron to start in a vm where ipv6 was enabled on the host os and guest os
16:40:08 ran into a known bug with setting some socket permissions
16:40:25 you have to enable RAs
16:40:35 RAs?
16:41:00 router advertisements. At least for the nodes zuul uses, the linux default, if the node itself sends RAs, is to not accept RAs
16:41:16 and neutron sends RAs, so you have to ask the linux kernel nicely to accept RAs as well via sysctl
16:41:34 (I don't remember which sysctl it is though)
16:41:37 ok, I have no clue what any of that is
16:41:44 * hemna is not a networking guy
16:41:54 router advertisements are how nodes discover their IPv6 address
16:41:56 neutron is a black hole to me
16:42:12 so your test VM likely needs to accept them to learn its address; then separately neutron sends them for the networks it manages
16:42:26 I worked around it by disabling ipv6
16:42:29 but ya, ipv6 should work fine on the zuul jobs
16:42:33 we run many ipv6 tests
16:42:33 as ipv6 seems like a pain anyway
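The sysctl left unnamed above is most likely accept_ra; that is an inference, not something stated in the meeting. Setting it to 2 tells the kernel to keep accepting router advertisements even when IPv6 forwarding is enabled, the case that otherwise disables them on a node where neutron is routing:

    # Assumed fix for the RA behavior discussed above (verify per interface):
    sysctl -w net.ipv6.conf.all.accept_ra=2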
16:43:00 ok, so I'll just hope/pray that the ipv6 requirements for the ceph-iscsi daemons will work there
16:43:03 :)
16:43:09 we'll find out when CI runs, I suppose
16:43:47 ok, so I'll try and wrap up the devstack plugin work today or tomorrow
16:44:00 and then figure out how to get CI firing that up to see if it works
16:44:06 then we can CI the driver patch too
16:44:14 that's the plan at least.
16:45:03 jungleboyj: that's it from me.
16:45:23 geguileo: quick question (since I couldn't find you in #openstack-cinder): cinderlib tests run in the cinder-tempest-dsvm-lvm-lio-barbican job, right?
16:45:24 Sounds good. Thank you for continuing to work on that.
16:46:15 whoami-rajat: sorry, I had problems with the bouncer and it didn't rejoin the cinder channel :-(
16:47:15 whoami-rajat: that is correct
16:47:31 geguileo: oh, no problem
16:47:48 whoami-rajat: it is running on LVM and Ceph CI jobs
16:49:43 Ok. Anything else we need to cover today?
16:50:29 geguileo: I figured out some of the cinderlib tests failed since I constrained the volume_type_id field to NOT NULL in the db; so the cinderlib tests don't call c-api to create_volume but rather use cinderlib's own method in the gate jobs, right?
16:50:49 whoami-rajat: yes, that is correct
16:51:13 whoami-rajat: that would require a change to cinderlib as well
16:51:29 whoami-rajat: what's the review?
16:51:56 geguileo: https://review.opendev.org/#/c/639180/
16:52:17 geguileo: ok, that's what I thought. Thanks for the help.
16:52:48 geguileo: tests failing: http://logs.openstack.org/80/639180/22/check/cinder-tempest-dsvm-lvm-lio-barbican/dcc733e/job-output.txt.gz#_2019-06-26_13_31_35_595974
16:54:00 Anything else?
16:54:03 :-)
16:54:09 whoami-rajat: yeah, the issue is like you said, because you've restricted it to be non-null
16:54:32 jungleboyj: sorry for holding the meeting, that's all from my side :)
16:54:45 whoami-rajat: No worries. It is Open Discussion.
16:54:55 geguileo: thanks :)
16:55:14 If no one else has topics I will wrap up the meeting.
16:55:27 Thanks everyone for joining.
16:55:36 Go forth and review.
16:55:54 #endmeeting