Wednesday, 2020-02-05

*** dviroel has quit IRC  00:10
*** k_mouza has quit IRC  00:46
*** slaweq_ has joined #openstack-meeting-4  01:11
*** Liang__ has joined #openstack-meeting-4  01:12
*** michael-beaver has quit IRC  01:13
*** slaweq_ has quit IRC  01:16
*** dwalt has quit IRC  01:27
*** vesper has joined #openstack-meeting-4  01:38
*** vesper11 has quit IRC  01:39
*** igordc has joined #openstack-meeting-4  01:52
*** Liang__ has quit IRC  02:04
*** vishalmanchanda has joined #openstack-meeting-4  02:10
*** roman_g has quit IRC  02:33
*** k_mouza has joined #openstack-meeting-4  02:46
*** senrique__ has quit IRC  02:46
*** enriquetaso has joined #openstack-meeting-4  02:47
*** k_mouza has quit IRC  02:51
*** enriquetaso has quit IRC  02:52
*** slaweq_ has joined #openstack-meeting-4  03:11
*** slaweq_ has quit IRC  03:16
*** links has joined #openstack-meeting-4  04:43
*** bnemec has joined #openstack-meeting-4  04:53
*** hongbin has joined #openstack-meeting-4  04:55
*** igordc has quit IRC  04:59
*** slaweq_ has joined #openstack-meeting-4  05:11
*** slaweq_ has quit IRC  05:15
*** hongbin has quit IRC  05:28
*** evrardjp has quit IRC  05:33
*** evrardjp has joined #openstack-meeting-4  05:34
*** Liang__ has joined #openstack-meeting-4  06:05
*** Liang__ has quit IRC  06:16
*** roman_g has joined #openstack-meeting-4  07:00
*** slaweq_ has joined #openstack-meeting-4  07:11
*** slaweq_ has quit IRC  07:16
*** slaweq_ has joined #openstack-meeting-4  08:00
*** slaweq__ has joined #openstack-meeting-4  08:09
*** slaweq_ has quit IRC  08:10
*** slaweq has joined #openstack-meeting-4  08:14
*** k_mouza has joined #openstack-meeting-4  08:15
*** slaweq__ has quit IRC  08:16
*** k_mouza has quit IRC  08:20
*** ralonsoh has joined #openstack-meeting-4  08:42
*** slaweq_ has joined #openstack-meeting-4  08:48
*** slaweq has quit IRC  08:48
*** slaweq__ has joined #openstack-meeting-4  09:06
*** slaweq_ has quit IRC  09:07
*** k_mouza has joined #openstack-meeting-4  09:19
*** slaweq__ is now known as slaweq  09:25
*** gcheresh has joined #openstack-meeting-4  09:27
*** gcheresh_ has joined #openstack-meeting-4  09:38
*** gcheresh has quit IRC  09:39
*** k_mouza has quit IRC  09:59
*** k_mouza has joined #openstack-meeting-4  09:59
*** k_mouza_ has joined #openstack-meeting-4  10:00
*** k_mouza has quit IRC  10:04
*** gcheresh_ has quit IRC  10:35
*** slaweq_ has joined #openstack-meeting-4  10:40
*** gcheresh_ has joined #openstack-meeting-4  10:41
*** slaweq has quit IRC  10:42
*** e0ne has joined #openstack-meeting-4  10:57
*** slaweq__ has joined #openstack-meeting-4  11:01
*** slaweq_ has quit IRC  11:03
*** lkoranda has joined #openstack-meeting-4  11:05
*** bobmel has joined #openstack-meeting-4  11:17
*** psachin has joined #openstack-meeting-4  11:30
*** dviroel has joined #openstack-meeting-4  11:32
*** pcaruana has quit IRC  11:37
*** e0ne has quit IRC  11:42
*** pcaruana has joined #openstack-meeting-4  11:50
*** Liang__ has joined #openstack-meeting-4  12:19
*** slaweq__ has quit IRC  12:19
*** slaweq__ has joined #openstack-meeting-4  12:23
*** e0ne has joined #openstack-meeting-4  12:27
*** e0ne has quit IRC  12:53
*** enriquetaso has joined #openstack-meeting-4  13:02
*** e0ne has joined #openstack-meeting-4  13:05
*** slaweq has joined #openstack-meeting-4  13:11
*** slaweq__ has quit IRC  13:13
*** smcginnis|FOSDEM is now known as smcginnis  13:19
*** lpetrut has joined #openstack-meeting-4  13:24
*** anastzhyr has joined #openstack-meeting-4  13:42
*** tosky has joined #openstack-meeting-4  13:45
*** whoami-rajat__ has joined #openstack-meeting-4  13:50
*** liuyulong has joined #openstack-meeting-4  13:58
*** lkoranda has quit IRC  13:58
*** bobmel has quit IRC  13:58
*** Liang__ is now known as LiangFang  14:00
<whoami-rajat> ping jungleboyj rosmaita smcginnis tosky whoami-rajat m5z e0ne geguileo eharney walshh_ jbernard  14:00
<whoami-rajat> #startmeeting cinder  14:00
<openstack> Meeting started Wed Feb  5 14:00:38 2020 UTC and is due to finish in 60 minutes.  The chair is whoami-rajat. Information about MeetBot at http://wiki.debian.org/MeetBot.  14:00
<openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.  14:00
*** openstack changes topic to " (Meeting topic: cinder)"  14:00
<openstack> The meeting name has been set to 'cinder'  14:00
<m5z> hi :)  14:00
<e0ne> hi  14:00
<whoami-rajat> #topic roll call  14:00
*** openstack changes topic to "roll call (Meeting topic: cinder)"  14:00
<enriquetaso> hi  14:00
<LiangFang> hi  14:01
<smcginnis> o/  14:01
*** eharney has joined #openstack-meeting-4  14:01
<eharney> hi  14:01
<whoami-rajat> #link https://etherpad.openstack.org/p/cinder-ussuri-meetings  14:01
<jungleboyj> o/  14:02
<anastzhyr> Hi  14:03
<tosky> hi  14:03
<whoami-rajat> will wait for 2 more minutes before the announcements  14:03
<whoami-rajat> i think we can move to announcements now  14:04
<whoami-rajat> #topic Announcements  14:04
*** openstack changes topic to "Announcements (Meeting topic: cinder)"  14:04
<whoami-rajat> Ussuri milestone-2 is next week, Feb 10 - Feb 14 (specifically 13 February 2020 (23:59 UTC))  14:04
<whoami-rajat> that is also the deadline for a new driver or a new target driver  14:04
<whoami-rajat> the requirements for a driver to be complete are working code and unit tests merged into the cinder repo + a working third party CI  14:04
<whoami-rajat> additional info is in the mail  14:04
<whoami-rajat> #link http://lists.openstack.org/pipermail/openstack-discuss/2020-January/012055.html  14:04
<whoami-rajat> any additional comments regarding this are welcome :)  14:05
<whoami-rajat> okay, so moving on to the next announcement  14:06
<whoami-rajat> code review policy for the py2->py3 transition  14:06
<whoami-rajat> #link https://review.opendev.org/#/c/703709/  14:07
<whoami-rajat> the main concern here is regarding backports  14:07
<whoami-rajat> we need to have certain guidelines for code to work when backported to stable branches  14:07
<whoami-rajat> this includes guidelines for features as well as bug fixes  14:07
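As a hedged illustration of the backport concern above (the function and data are hypothetical, not taken from the policy patch): a change written with Python-3-only syntax, such as f-strings, cannot land on stable branches that still run Python 2, whereas the equivalent str.format() form backports cleanly.

```python
# Hypothetical example of a backport-friendly change: the commented-out
# f-string version fails to even parse on Python 2, while the .format()
# version below runs on both Python 2 and Python 3.

# def describe_volume(volume):
#     return f"volume {volume['id']} is {volume['status']}"   # py3-only syntax

def describe_volume(volume):
    return "volume {id} is {status}".format(
        id=volume['id'], status=volume['status'])

print(describe_volume({'id': 'vol-1', 'status': 'available'}))
```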
<whoami-rajat> let me know if i'm going too fast, but it seems like no discussion is needed around this either  14:09
<whoami-rajat> so moving on  14:10
*** lkoranda has joined #openstack-meeting-4  14:11
*** rosmaita has joined #openstack-meeting-4  14:11
<whoami-rajat> update to the driver removal policy  14:11
<whoami-rajat> #link https://review.opendev.org/#/c/704906/  14:11
<whoami-rajat> after some discussion around this topic over the past few weeks, we've finally decided to keep  14:11
<whoami-rajat> unsupported drivers in-tree until they cause a major disturbance in the cinder gate, at which point they will be removed (given they have completed the deprecation cycle)  14:11
<whoami-rajat> additional info is again mentioned in the patch  14:11
<jungleboyj> Think you are doing fine whoami-rajat :-)  14:11
<rosmaita> o/  14:11
<whoami-rajat> jungleboyj, thanks :D  14:11
<whoami-rajat> rosmaita, yay!  14:12
<rosmaita> thanks for getting the meeting going  14:12
<jungleboyj> Looks like I need to look at the driver removal patch again.  14:12
<jungleboyj> Will do that.  14:12
<whoami-rajat> so we have some review requests for the final announcement  14:12
<whoami-rajat> rosmaita, np  14:12
<whoami-rajat> https://review.opendev.org/#/c/704425/ - fix a unit test blocking the sqlalchemy upgrade to 1.3.13  14:12
<whoami-rajat> do you want to elaborate on this, rosmaita?  14:13
<rosmaita> no, just that we need to merge it soon, it's blocking all of openstack from upgrading sqlalchemy  14:13
<rosmaita> looks like the problem was with one of our tests, not a real problem  14:13
<whoami-rajat> ok  14:14
<whoami-rajat> so one more review request  14:14
<whoami-rajat> https://review.opendev.org/#/c/705362/ - open the specs repo for Victoria  14:14
<whoami-rajat> but this is approved  14:14
<whoami-rajat> so nevermind  14:15
<rosmaita> ok, thanks  14:15
<rosmaita> just read through the scrollback, you covered everything i wanted to say  14:15
<rosmaita> thanks whoami-rajat  14:15
<whoami-rajat> so i think with the last announcement i can hand over to rosmaita  14:15
<whoami-rajat> Spec freeze exception granted to "support volume-local-cache"  14:15
<rajinir> o/  14:15
<whoami-rajat> rosmaita, i was afraid that you had more elaborate notes and i might have missed some things, but glad to hear that, phew  14:16
<rosmaita> some questions came up on the spec last week before the freeze deadline  14:16
<rosmaita> so i wanted to carry it over before we said yes or no  14:16
<rosmaita> which brings us to our next topic  14:17
<rosmaita> #topic volume-local-cache spec  14:17
*** pcaruana has quit IRC  14:17
*** knomura has joined #openstack-meeting-4  14:17
<LiangFang> one Nova engineer thinks they are moving toward mounting volumes directly with qemu  14:17
<LiangFang> not mounting them on the host os first  14:18
<rosmaita> #link http://lists.openstack.org/pipermail/openstack-discuss/2020-January/012279.html  14:18
<rosmaita> also, here's a link to the latest draft of the spec:  14:18
<rosmaita> #link https://review.opendev.org/#/c/684556/  14:18
<rosmaita> so the issue is, if nova is planning to mount volumes directly by qemu, then that would completely bypass the cache in this spec, is that correct?  14:19
<eharney> as i understand it, consuming cinder volumes via qemu instead of attaching them to the host is still blocked by the fact that qemu doesn't support multipath  14:19
<eharney> but it has been a goal for a bit  14:19
<LiangFang> rosmaita: yes  14:19
<smcginnis> I know they've been pushing for that for a long time, but last I looked, there were multiple reasons NOT to go with direct QEMU mounting of volumes.  14:20
<rosmaita> so it may not be as much of a done deal as is implied in that email?  14:21
<eharney> i think everyone agrees it would be a better way to do things  14:21
<rosmaita> multipath support is kind of a big deal, though  14:21
<eharney> right  14:21
<LiangFang> currently only rbd and sheepdog volumes are mounted directly by qemu  14:22
<rosmaita> and sheepdog is no more  14:22
<smcginnis> I don't think it supported FC either.  14:22
<smcginnis> "The storage protocol that won't die"  14:23
<rosmaita> ok, so my reason for putting this on the agenda is to ask: is this a reason to hold up Liang's spec?  14:23
<eharney> either way, i noted a handful of concerns on this spec that are generally around the theme that there are a lot of things you have to account for when getting into the data path of cinder volumes that don't seem to be sorted out thoroughly yet  14:23
<eharney> the encryption layering one being one of the biggest issues in my mind  14:23
<rosmaita> yes, it is definitely important to get that right  14:24
<rosmaita> eharney: did you see the update yet? does it address your concerns?  14:24
<eharney> i haven't looked at it yet  14:24
<rosmaita> ok  14:24
*** psachin has quit IRC  14:25
<rosmaita> i think other than encryption, the major concern is migration?  14:25
<eharney> yes, it's not been clear to me whether migration actually works, one of the nova folks pointed out that you have to have a mechanism to flush the cache during migration  14:26
<smcginnis> Seems important.  14:26
<eharney> otherwise you leave a bunch of data in the cache on the source host during migration that doesn't show up on the destination host  14:27
<eharney> which would not work at all  14:27
<rosmaita> which would be a bummer indeed  14:27
<LiangFang> write-back mode behaves like that  14:28
<rosmaita> my feeling is that these major issues need to be understood at the spec stage, we probably shouldn't try to work them out in the implementation phase  14:28
<LiangFang> write-through mode will not :)  14:28
<rosmaita> so the "safe modes" should migrate OK?  14:29
<LiangFang> yes  14:29
<LiangFang> no dirty data in the cache in write-through mode  14:29
<rosmaita> ok, and the current proposal is that we would only support "safe modes"  14:29
<eharney> the same issue crops up with consistency groups  14:29
<whoami-rajat> flushing shouldn't be necessary in write-through, right?  14:29
<LiangFang> yes  14:29
<LiangFang> no need to flush for write-through  14:30
<eharney> and snapshots "work" but may surprise users  14:30
<eharney> (all for the same reason)  14:30
<rosmaita> just to make sure i understand  14:31
<LiangFang> in write-through/safe mode, every write io will go to the backend  14:31
<rosmaita> in the "safe modes", snapshots should be OK, right?  14:31
<eharney> yes, all of this would work normally in write-through mode  14:31
<LiangFang> in safe modes, the cache is just like a read-only cache  14:31
<rosmaita> ok, just wanted to be clear  14:31
<rosmaita> so the problem is that to get the best benefit from the cache, all sorts of stuff could break  14:32
<rosmaita> migrations, snapshots  14:32
<rosmaita> is restricting to safe caching modes too big a restriction?  14:32
<rosmaita> what i mean is  14:32
<eharney> migrations would break, snapshots would succeed but have older data in them than you expected  14:32
<rosmaita> right, and across a group, wouldn't necessarily be consistent any more  14:33
<eharney> right  14:33
<rosmaita> good thing we just call them "groups" now :)  14:33
<eharney> well that's a separate thing  14:33
<rosmaita> yeah, that was a bad joke  14:34
<jungleboyj> This is sounding concerning.  14:34
<LiangFang> eharney: I still don't understand why the data would be older than expected  14:34
<LiangFang> there is no newer data in the cache at any moment  14:34
<eharney> because snapshots are performed on the backend, and a snapshot will be created for the volume with the data that was written there, but data in the cache (which the user of the instance thinks has been written there) won't be in the snap (when in write-back mode)  14:35
*** hemna has joined #openstack-meeting-4  14:36
<LiangFang> in write-back mode yes, safe mode will not  14:36
<eharney> right  14:36
<LiangFang> :)  14:36
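A minimal sketch of the distinction being drawn here, assuming a toy cache model (the class and method names are illustrative, not from the spec or from cinder): in write-through mode every write also reaches the backend, so a backend-side snapshot sees everything; in write-back mode dirty data sits in the local cache until a flush, so a backend snapshot taken before that flush is missing the newest writes.

```python
# Toy model of the write-through vs write-back behaviour discussed above.
# Illustrative only; this is not code from the volume-local-cache spec.

class CachedVolume:
    def __init__(self, write_back=False):
        self.write_back = write_back
        self.cache = {}    # data on the local cache device (e.g. pmem / fast ssd)
        self.backend = {}  # data on the cinder backend

    def write(self, block, data):
        self.cache[block] = data
        if not self.write_back:
            self.backend[block] = data   # write-through: backend is always current

    def flush(self):
        self.backend.update(self.cache)  # the hook migration/snapshots would need

    def backend_snapshot(self):
        return dict(self.backend)        # snapshots are taken on the backend


wt = CachedVolume(write_back=False)
wt.write("blk0", "new")
assert wt.backend_snapshot() == {"blk0": "new"}   # safe mode: snapshot is current

wb = CachedVolume(write_back=True)
wb.write("blk0", "new")
assert wb.backend_snapshot() == {}                # dirty data only in the cache
wb.flush()
assert wb.backend_snapshot() == {"blk0": "new"}   # consistent only after a flush
```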
<whoami-rajat> also i couldn't find any use case regarding the write-back cache in the spec  14:36
<LiangFang> we will not support write-back mode  14:36
<rosmaita> i asked LiangFang to remove it because we aren't supporting it any more  14:36
<LiangFang> ceph supports a client-side read-only cache  14:37
*** bobmel has joined #openstack-meeting-4  14:37
<whoami-rajat> oh ok  14:37
<LiangFang> but it is using DRAM as the cache  14:37
<LiangFang> volume local cache is just something like a read-only cache  14:37
<LiangFang> but using persistent memory or a fast ssd  14:38
<LiangFang> I know the ceph read-only cache is a newly developed feature  14:38
<rosmaita> here's a question: could we support the "unsafe" modes later on by, for example, making sure the cache is flushed before a snapshot, or would that start to get too complicated?  what i mean is, is there a theoretical reason other than complexity why this couldn't be done reliably?  14:39
<eharney> it definitely could be done, we just need hooks to request cache flushes in the right places  14:39
<rosmaita> so it could be possible to implement this in phases  14:40
<jungleboyj> Is there a reason for the urgency here if there is a safer way to get this done?  14:40
<rosmaita> because i think operators and users are going to want to use the unsafe caching modes  14:40
<eharney> it would be interesting to know how widely used writeback caching is in Nova now  14:40
<rosmaita> jungleboyj: mainly that we have to coordinate with nova to get it to actually work  14:41
<jungleboyj> Ok.  Then we need to make sure to implement it so that they don't shoot themselves in the foot in the process.  :-)  14:41
<rosmaita> so nova doesn't want to approve changes unless we have approved it on our side  14:41
<smcginnis> Sounds like we may need some cross-team meetings to work through all the intricacies, like we had to do with multiattach.  14:41
<rosmaita> smcginnis: ++  14:42
<jungleboyj> smcginnis: ++  14:42
<LiangFang> ok  14:42
<LiangFang> ++  14:42
<rosmaita> ok, i think this is worth pursuing during this cycle?  anyone disagree?  14:43
<eharney> seems worthwhile  14:44
<rosmaita> what i mean is, having the discussions, not waiting for the PTG  14:44
<jungleboyj> Seems worthwhile if we can do it in a safe manner.  14:44
<eharney> i would be much happier if we also had another reference implementation like dm-cache to test along with it, but that's probably dreaming too much :)  14:44
<rosmaita> ok, i think the next move is to have a bluejeans conference  14:44
<LiangFang> eharney: dan from Nova mentioned dm-crypt not working  14:45
<rosmaita> could i get names of people who definitely would want to attend, and their time zones  14:45
<rosmaita> that will help me offer some choices in a poll  14:45
<rosmaita> for a meeting day/time  14:46
<eharney> i can (EST)  14:46
<LiangFang> eharney: the main issue is: the backend volume should not contain any metadata  14:46
<whoami-rajat> IST +05:30 UTC  14:46
<rosmaita> EST  14:46
<LiangFang> UTC+8  14:46
<eharney> i think encrypted volumes already don't follow that, but we can figure it out later  14:47
<LiangFang> ok  14:48
<rosmaita> ok, i'll look at the nova spec and see who's commented  14:48
<LiangFang> thanks  14:48
<rosmaita> i'll get a poll out later today or early tomorrow  14:48
<rosmaita> #topic resource_filters response is inaccurate  14:49
<rosmaita> this was implemented before i started working on cinder  14:49
<rosmaita> the context for this is whoami-rajat's patch fixing a problem in the volume-transfers API  14:50
<rosmaita> what i noticed is this  14:50
*** links has quit IRC  14:50
<rosmaita> #link https://docs.openstack.org/api-ref/block-storage/v3/?expanded=list-resource-filters-detail#resource-filters  14:50
<whoami-rajat> #link https://review.opendev.org/#/c/703658/  14:50
<rosmaita> that's what our resource_filters response gives you  14:50
<rosmaita> and actually, what most people would see is really this:  14:51
<rosmaita> #link https://opendev.org/openstack/cinder/src/branch/master/etc/cinder/resource_filters.json  14:51
<rosmaita> because it was designed to be operator-configurable  14:51
<rosmaita> according to the api-ref, the value of the "resource" element is supposed to be the "Resource which the filters will be applied to"  14:51
<rosmaita> the problem is that all the resources mentioned in our API URIs are plural ("volumes", "snapshots", "backups") whereas all the resources in the file are singular ("volume", "snapshot", "backup")  14:52
<rosmaita> in some ways, this is a minor point  14:52
<rosmaita> but i also noticed that the volume-transfers API doesn't implement the resource_filters framework that (most of) the other list-resource calls do  14:52
<rosmaita> so, we should get the volume_transfers into that resource_filters response  14:53
<rosmaita> which brings up the question: "volume-transfer" or "volume-transfers"  14:53
<rosmaita> (yes, it's a hyphen '-', not an underscore  14:53
<rosmaita> )  14:53
<rosmaita> one issue is, how can we change the resource_filters response?  14:54
<rosmaita> but, my take is that since it was designed to be configurable, the response doesn't have to be microversioned  14:54
<rosmaita> that is, whether it's available or not can be microversioned (i think it may be)  14:55
<rosmaita> but we can correct the response without a new mv  14:55
<whoami-rajat> i feel we should remove the plural 's', that makes more sense, volume name vs volumes name  14:55
<rosmaita> (sorry, i kind of obsess over API issues)  14:55
<rosmaita> well, the URL paths are all plural  14:56
<hemna> I'm not sure what value we would get out of a change like that  14:56
<rosmaita> and the question is, what does the filter apply to?  14:56
<hemna> it's been that way forever  14:56
<hemna> for better or worse  14:56
<rosmaita> it's just kind of weird that we're giving a list of filters you can use, but there's no actual resource with that name  14:57
<rosmaita> but i can live with it if it doesn't bother anyone else  14:57
<eharney> i would agree it doesn't need to be microversioned, but tempest definitely disagreed with me the last time i went down a similar path  14:57
<eharney> it is odd  14:57
<hemna> wouldn't that change break a lot of clients expecting volumes vs volume?  14:57
<eharney> wouldn't those clients already break if you just removed the config for volume now?  14:58
<rosmaita> so the change is to what shows up in the resource_filters response, not the API path  14:58
<rosmaita> i think programmatically, what you would want to do is match the resource in the URL to the resource listed in the response  14:58
<eharney> i don't think the API says it must contain any particular field like "volume"  14:58
<rosmaita> right now, you have to know to remove the 's' to find it  14:58
<hemna> right, but they would be expecting the key volumes in the response and if we changed it to volume, that would break their app/client/call/expectation  14:58
<smcginnis> 1 minute  14:59
<rosmaita> hemna: the other way around, but i get your point  14:59
<rosmaita> ok, we can continue this later, looks like some pro, some con  14:59
<rosmaita> will have to be worked out  14:59
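To make the plural/singular mismatch concrete, here is a rough sketch of what a client has to do today; the response shape below is abridged from the api-ref and resource_filters.json linked above, and the helper function is hypothetical.

```python
# Abridged shape of the GET /v3/{project_id}/resource_filters response:
# the "resource" values are singular, while the URL paths the filters
# actually apply to are plural (/volumes, /snapshots, ...).
resource_filters = {
    "resource_filters": [
        {"resource": "volume", "filters": ["name", "status", "bootable"]},
        {"resource": "snapshot", "filters": ["name", "status", "volume_id"]},
    ]
}

def filters_for_path(path_resource):
    """Hypothetical client helper: given a URL resource such as 'volumes',
    strip the trailing 's' to match the singular name in the response."""
    wanted = path_resource[:-1] if path_resource.endswith('s') else path_resource
    for entry in resource_filters["resource_filters"]:
        if entry["resource"] == wanted:
            return entry["filters"]
    return []

print(filters_for_path("volumes"))    # ['name', 'status', 'bootable']
print(filters_for_path("snapshots"))  # ['name', 'status', 'volume_id']
```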
<rosmaita> and i have prevented open discussion again  15:00
<whoami-rajat> Thanks rosmaita  15:00
<rosmaita> anyone with other issues, please move over to cinder channel  15:00
<rosmaita> whoami-rajat: i think you need to end the meeting  15:00
<whoami-rajat> oh ok  15:00
<whoami-rajat> #endmeeting  15:00
*** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings/"  15:00
<openstack> Meeting ended Wed Feb  5 15:00:49 2020 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)  15:00
<openstack> Minutes:        http://eavesdrop.openstack.org/meetings/cinder/2020/cinder.2020-02-05-14.00.html  15:00
<openstack> Minutes (text): http://eavesdrop.openstack.org/meetings/cinder/2020/cinder.2020-02-05-14.00.txt  15:00
<openstack> Log:            http://eavesdrop.openstack.org/meetings/cinder/2020/cinder.2020-02-05-14.00.log.html  15:00
*** lkoranda has quit IRC  15:01
*** tosky has left #openstack-meeting-4  15:01
*** eharney has quit IRC  15:13
*** e0ne has quit IRC  15:15
*** dwalt has joined #openstack-meeting-4  15:16
*** knomura has quit IRC  15:20
*** lpetrut has quit IRC  15:28
*** pcaruana has joined #openstack-meeting-4  15:49
*** e0ne has joined #openstack-meeting-4  15:53
*** gcheresh_ has quit IRC  16:00
*** bobmel has quit IRC  16:04
*** bobmel has joined #openstack-meeting-4  16:05
*** rosmaita has left #openstack-meeting-4  16:07
*** bobmel has quit IRC  16:10
*** e0ne has quit IRC  16:17
*** slaweq_ has joined #openstack-meeting-4  16:23
*** slaweq has quit IRC  16:25
*** psachin has joined #openstack-meeting-4  16:38
*** e0ne has joined #openstack-meeting-4  16:56
*** psachin has quit IRC  16:58
*** e0ne has quit IRC  17:05
*** e0ne has joined #openstack-meeting-4  17:09
*** evrardjp has quit IRC  17:33
*** evrardjp has joined #openstack-meeting-4  17:34
*** igordc has joined #openstack-meeting-4  17:34
*** k_mouza_ has quit IRC  17:35
*** anastzhyr has quit IRC  17:42
*** whoami-rajat__ has quit IRC  17:52
*** addyess has joined #openstack-meeting-4  18:00
*** johnsom has quit IRC  18:03
*** johnsom has joined #openstack-meeting-4  18:03
*** lathiat has quit IRC  18:22
*** lathiat has joined #openstack-meeting-4  18:22
*** e0ne has quit IRC  18:27
*** niedbalski has quit IRC  18:32
*** niedbalski has joined #openstack-meeting-4  18:33
*** ralonsoh has quit IRC  18:54
*** jamespage has quit IRC  18:56
*** jamespage has joined #openstack-meeting-4  18:56
*** gcheresh_ has joined #openstack-meeting-4  19:10
*** gcheresh_ has quit IRC  19:32
*** anastzhyr has joined #openstack-meeting-4  19:35
*** LiangFang has quit IRC  19:36
*** niedbalski has quit IRC  19:37
*** lathiat has quit IRC  19:38
*** jamespage has quit IRC  19:39
*** johnsom has quit IRC  19:40
*** bobmel has joined #openstack-meeting-4  19:45
*** enriquetaso has quit IRC  19:57
*** e0ne has joined #openstack-meeting-4  20:36
*** e0ne has quit IRC  20:44
*** slaweq_ has quit IRC  20:44
*** slaweq_ has joined #openstack-meeting-4  20:47
*** slaweq_ has quit IRC  21:15
*** gcheresh_ has joined #openstack-meeting-4  21:24
*** liuyulong has quit IRC  21:41
*** bobmel has quit IRC  21:47
*** gcheresh_ has quit IRC  21:58
*** k_mouza has joined #openstack-meeting-4  22:29
*** k_mouza has quit IRC  22:30
*** anastzhyr has quit IRC  22:32
*** bobmel has joined #openstack-meeting-4  22:55
*** lathiat has joined #openstack-meeting-4  23:00
*** niedbalski has joined #openstack-meeting-4  23:00
*** jamespage has joined #openstack-meeting-4  23:01
*** johnsom has joined #openstack-meeting-4  23:02
