14:00:20 <whoami-rajat> #startmeeting cinder
14:00:20 <opendevmeet> Meeting started Wed Jun 14 14:00:20 2023 UTC and is due to finish in 60 minutes.  The chair is whoami-rajat. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:20 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:20 <opendevmeet> The meeting name has been set to 'cinder'
14:00:24 <enriquetaso> hi
14:00:27 <whoami-rajat> #topic roll call
14:00:32 <yuval> yo
14:00:54 <IPO_> hi
14:01:43 <whoami-rajat> #link https://etherpad.opendev.org/p/cinder-bobcat-meetings
14:02:01 <whoami-rajat> some of the folks are at the Vancouver PTG so we might have lower attendance
14:02:17 <MatheusAndrade[m]> o/
14:02:22 <helenadantas[m]> o/
14:02:29 <luizsantos[m]> o/
14:03:18 <tosky> o/
14:03:40 <thiagoalvoravel> o/
14:04:44 <whoami-rajat> good attendance
14:04:46 <whoami-rajat> let's get started
14:04:50 <Tony_Saad> o/
14:04:53 <happystacker> Hello
14:05:01 <whoami-rajat> #topic announcements
14:05:16 <whoami-rajat> first, Cinder PTG Schedule
14:05:21 <whoami-rajat> #link https://lists.openstack.org/pipermail/openstack-discuss/2023-June/034056.html
14:05:42 <whoami-rajat> unfortunately i wasn't able to travel to vancouver but Brian is taking care of the PTG which is really great
14:05:47 <whoami-rajat> he sent out the schedule to ML
14:06:04 <whoami-rajat> cinder is going to have 3 sessions for the PTG given the smaller number of people attending
14:06:11 <whoami-rajat> Wednesday
14:06:12 <whoami-rajat> 10:20-10:50  Support for NVMe-OF in os-brick
14:06:12 <whoami-rajat> 15:50-16:20  Cinder Operator Half-Hour
14:06:12 <whoami-rajat> Thursday
14:06:12 <whoami-rajat> 16:40-17:10  Open Discussion with the Cinder project team
14:06:34 <whoami-rajat> since Vancouver is UTC-7, i think it will start later today
14:06:58 <whoami-rajat> I don't have info if there is a virtual thing planned but let's see if we get a summary or notes out of the sessions
14:07:00 <happystacker> good
14:07:25 <whoami-rajat> next, Cinder Operators event
14:07:30 <whoami-rajat> #link https://lists.openstack.org/pipermail/openstack-discuss/2023-June/034057.html
14:07:33 <happystacker> yeah it'd be great if we can have the minutes
14:08:06 <whoami-rajat> happystacker, yep, i can ask Brian if he can write those down from the sessions (if I'm able to make contact with him)
14:08:12 <whoami-rajat> maybe just drop a mail
14:08:28 <happystacker> ok cool, thanks rajat
14:08:50 <whoami-rajat> np
14:08:54 <whoami-rajat> so the same sessions are also good for operators to attend
14:08:59 <whoami-rajat> but we also have an additional forum session
14:09:01 <whoami-rajat> Forum session:
14:09:02 <whoami-rajat> Cinder, the OpenStack Block Storage service ... how are we doing?
14:09:02 <whoami-rajat> Looking for feedback from operators, vendors, and end-users
14:09:08 <whoami-rajat> #link https://etherpad.opendev.org/p/cinder-vancouver-forum-2023
14:09:15 <whoami-rajat> Timing: 1840 UTC - 1910 UTC
14:10:29 <whoami-rajat> that's all about the Vancouver summit from Cinder perspective
14:10:50 <whoami-rajat> next, Spec freeze (22nd June)
14:10:55 <whoami-rajat> #link https://etherpad.opendev.org/p/cinder-2023-2-bobcat-specs
14:11:06 <whoami-rajat> we have the cinder spec deadline coming up
14:11:24 <whoami-rajat> i.e. 22nd June
14:11:33 <whoami-rajat> I've created the above etherpad to track the specs
14:11:41 <whoami-rajat> mainly the first two need reviews
14:12:21 <happystacker> Considering the bandwidth we have and the current date, I don't think we'll make it for https://review.opendev.org/c/openstack/cinder-specs/+/872019
14:12:36 <happystacker> it should be postponed to the C release
14:13:38 <whoami-rajat> happystacker, do you mean the developer bandwidth or the reviewer bandwidth? I'm assuming the former
14:13:53 <happystacker> dev perspective
14:13:53 <whoami-rajat> if you feel it is hard to complete this cycle, we can surely push for next cycle
14:14:08 <happystacker> yeah that's what I mean to say
14:14:17 <happystacker> this is a good chunk of work
14:14:36 <whoami-rajat> sure, I will add a W-1 referencing our meeting discussion and we can come back to it next cycle
14:15:31 <whoami-rajat> thanks for the heads up happystacker
14:15:47 <happystacker> np, sorry for that
14:16:15 <whoami-rajat> no worries
14:16:18 <whoami-rajat> I've added a comment
14:16:26 <whoami-rajat> so we only have 1 spec to review now
14:16:33 <whoami-rajat> the other one is just a reproposal
14:17:24 <whoami-rajat> #link https://review.opendev.org/c/openstack/cinder-specs/+/868761
14:17:30 <whoami-rajat> #link https://review.opendev.org/c/openstack/cinder-specs/+/877230
14:17:34 <whoami-rajat> if anyone needs a quick link ^
14:17:46 <whoami-rajat> next, Milestone-2 (06 July)
14:17:55 <whoami-rajat> #link https://lists.openstack.org/pipermail/openstack-discuss/2023-June/034062.html
14:18:01 <whoami-rajat> #link https://releases.openstack.org/bobcat/schedule.html#b-mf
14:18:07 <whoami-rajat> we have Milestone 2 upcoming
14:18:16 <whoami-rajat> along with which we have the volume and target driver merge deadline
14:18:39 <whoami-rajat> I've created an etherpad to track the drivers for this cycle
14:18:43 <whoami-rajat> #link https://etherpad.opendev.org/p/cinder-2023-2-bobcat-drivers
14:18:56 <whoami-rajat> so far I've added the Yadro FC driver and the Lustre driver
14:19:14 <whoami-rajat> but if you are planning to propose or have proposed any driver for the 2023.2 Bobcat cycle, please add it to the list
14:19:29 <happystacker> new driver you mean to say?
14:19:59 <whoami-rajat> yes
14:20:05 <whoami-rajat> new volume and target drivers
14:20:14 <happystacker> ok nothing new from our side for cinder
14:20:19 <happystacker> thks
14:20:24 <whoami-rajat> ack, good to know
14:21:17 <whoami-rajat> ok, last announcement: how Cinder incremental backup works
14:21:23 <whoami-rajat> just for general awareness
14:21:40 <whoami-rajat> if anyone has doubts about how the drivers inheriting from ChunkedBackupDriver do incremental backups
14:21:49 <whoami-rajat> Gorka has written an article about it
14:22:13 <whoami-rajat> some of the info might be dated but the incremental mechanism should still be the same (i just took a quick glance and it looks the same)
14:22:26 <whoami-rajat> it came up on the ML so I thought others might have questions about this
14:22:28 <whoami-rajat> #link https://lists.openstack.org/pipermail/openstack-discuss/2023-June/034098.html
14:22:38 <happystacker> oh cool, will have a look
14:22:38 <whoami-rajat> link to gorka's article
14:22:40 <whoami-rajat> #link https://web.archive.org/web/20160407151329/http://gorka.eguileor.com/inside-cinders-incremental-backup
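(For illustration, a minimal hypothetical sketch of the per-block hashing idea described in Gorka's article and implemented by drivers inheriting from ChunkedBackupDriver; this is not cinder's actual code, and the block size and function name below are made up for the example. The volume is read in fixed-size blocks, each block is hashed, and an incremental backup only uploads blocks whose hashes differ from the hashes recorded by the parent backup.)

    import hashlib

    BLOCK_SIZE = 32 * 1024  # hypothetical block size, for illustration only

    def changed_blocks(volume_file, parent_shas):
        """Yield (offset, data) for blocks that differ from the parent backup.

        parent_shas is the list of per-block sha256 hexdigests saved with the
        parent (full or previous incremental) backup.
        """
        index = 0
        while True:
            data = volume_file.read(BLOCK_SIZE)
            if not data:
                break
            sha = hashlib.sha256(data).hexdigest()
            # Upload only blocks that are new or have changed since the parent.
            if index >= len(parent_shas) or parent_shas[index] != sha:
                yield index * BLOCK_SIZE, data
            index += 1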
14:23:28 <enriquetaso> ++
14:25:04 <whoami-rajat> that's all for announcements
14:25:25 <whoami-rajat> we also don't have any topic today
14:25:30 <whoami-rajat> let's move to open discussion
14:25:34 <whoami-rajat> #topic open discussion
14:25:48 <whoami-rajat> reminder to take a look at the review request patches
14:26:47 <IPO_> I have one question if you don't mind
14:27:28 <whoami-rajat> sure
14:27:57 <Tony_Saad> i would like to discuss https://review.opendev.org/c/openstack/oslo.privsep/+/884344
14:28:41 <IPO_> Well, my question is described in https://lists.openstack.org/pipermail/openstack-discuss/2023-June/034042.html
14:30:23 <IPO_> I'm able to provide additional comments if needed. Main question: is it a known issue and are there any plans to work on it?
14:30:45 <enriquetaso> #link HPE STORAGE CI
14:30:53 <enriquetaso> sorry
14:30:54 <enriquetaso> https://bugs.launchpad.net/cinder/+bug/2003179/comments/7
14:31:07 <whoami-rajat> Tony_Saad, sure, what's your question
14:31:29 <Tony_Saad> for https://review.opendev.org/c/openstack/oslo.privsep/+/884344 my patch works. I tried the way Eric described and it did not hide the password. The only way I got the password hidden is with that patch
14:31:31 <whoami-rajat> IPO_, is that related to cinder A/A or the scheduler reporting wrong stats? I would like to know how many schedulers there are in your deployment
14:33:53 <whoami-rajat> Tony_Saad, will that set all privsep logs to logging level ERROR? that would be problematic when debugging issues in deployments ...
14:33:56 <enriquetaso> out of curiosity: why was the logging error added to `oslo_privsep/daemon.py` and not `os_brick/privileged/__init__.py`?
14:33:57 <IPO_> whoami-rajat, this issue is related to A/A cinder-volume and does not depend on the number of cinder-scheduler instances.
14:35:11 <enriquetaso> oh sorry, because it's using the logger from oslo_log
14:35:42 <Tony_Saad> whoami-rajat, no, it only sets that one log line to error, but because it is set to debug it pretty much skips that logger. I am open to discussing and testing other ways but I'm not sure exactly how Eric wanted it done
14:37:10 <whoami-rajat> IPO_, ack, the volume driver reports its capabilities to the scheduler at a periodic interval (i think the default is 60 seconds) and the get-pools call returns info from the scheduler cache
14:37:26 <whoami-rajat> though I'm not an expert in A/A and Gorka isn't around today
14:37:34 <eharney> this bug doesn't need to be fixed in privsep, it can be fixed from brick  (i put some notes about how in the lp bug)
14:38:31 <Tony_Saad> eharney, i saw your notes and tried them but the password was still getting leaked. Possible that i did something wrong or missed something?
14:39:29 <eharney> i'd have to look at what you tried and dig into it, may be able to look at that next week
14:39:43 <eharney> the patch posted to privsep i think will disable some logging that is expected to be there
14:40:30 <Tony_Saad> Sure, I can push a patch with the changes that you suggested for review
14:41:17 <Tony_Saad> but from my testing https://review.opendev.org/c/openstack/oslo.privsep/+/884344 only disabled that one log line that i changed
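(For context, one possible caller-side approach, sketched here only as an assumption and not necessarily the fix Eric outlined in the Launchpad bug: scrub secrets from command strings before they reach any debug log, instead of changing the privsep daemon's log level. oslo.utils provides strutils.mask_password for this; the command string below is purely hypothetical, and exactly which values get masked depends on oslo.utils' built-in sanitize patterns.)

    from oslo_utils import strutils

    # Hypothetical command containing a secret.
    cmd = 'storage-cli connect --password s3cr3t --host 10.0.0.1'
    # mask_password replaces recognized secret values with '***' by default.
    print(strutils.mask_password(cmd))
    # Expected output along the lines of:
    # storage-cli connect --password *** --host 10.0.0.1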
14:43:46 <IPO_> rajat, thanks for the comment. Yes, it leads to incorrect reporting of allocated_capacity_gb for a pool, so 1. we have a problem understanding how much capacity cinder has allocated for pools and 2. some features like capacity reservation in cinder don't work either
14:45:03 <IPO_> And it isn't clear why the related https://bugs.launchpad.net/cinder/+bug/1927186 is marked incomplete
14:46:35 <whoami-rajat> I see a similar comment from Gorka about the number of schedulers being > 1, which is also my observation in some deployments
14:46:44 <whoami-rajat> if that's not the case with you, we can discuss this issue further
14:47:54 <enriquetaso> It was `incomplete` because I left a question a while ago and nobody changed it after that
14:47:58 <enriquetaso> I've updated it
14:48:12 <IPO_> No, it isn't the case, as it is reproduced with one cinder-scheduler too. In the case of multiple cinder-scheduler instances it gets even worse :)
14:49:48 <whoami-rajat> ack, got it. I think even with multiple cinder-volume services in A/A, there should only be one that gets data from the backend and reports it to the scheduler periodically (again, not an A/A expert)
14:49:53 <IPO_> enriquetaso, thanks for the comment - so should we reopen it or should I report a new one?
14:50:07 <whoami-rajat> can you try lowering the reporting time interval and see if the issue persists?
14:51:12 <whoami-rajat> https://github.com/openstack/cinder/blob/d7ae9610d765919660a9f7a8769478f0b6e0aadf/cinder/volume/manager.py#L135-L142
14:51:22 <whoami-rajat> i mean setting backend_stats_polling_interval to a value lower than 60 seconds
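(A hedged example of the tuning suggested above, as a cinder.conf excerpt; the section name below is a hypothetical backend section, and depending on how your deployment registers the option it may instead belong under [DEFAULT] or [backend_defaults].)

    [lvmdriver-1]
    # Poll the backend for usage/capacity stats every 30 seconds instead of
    # the default 60; very low values can be expensive for some backends.
    backend_stats_polling_interval = 30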
14:51:24 <enriquetaso> 1927186 is open
14:52:16 <IPO_> rajat, each cinder-volume keeps a local allocated_capacity_gb and periodically reports it to the scheduler. Each time cinder-volume gets a new task to create a volume, it increases the local value and reports it back to the scheduler
14:52:19 <enriquetaso> I'm not sure if it's the same bug or not... open a new bug report if it's not related to 1927186
14:53:15 <whoami-rajat> that shouldn't be the case, only the scheduler keeps the pool data in cache; cinder-volume's purpose is to get the pool data and send it to the scheduler
14:53:18 <happystacker> I need to drop guys, thanks for all
14:53:32 <happystacker> and happy summit for the lucky ones
14:53:47 <whoami-rajat> also the allocated_capacity_gb is increased/decreased by the scheduler only
14:53:56 <IPO_> so I even saw negative values of allocated capacity
14:54:09 <whoami-rajat> cinder volume shouldn't be performing any calculations on the backend stats
14:54:56 <IPO_> Looks like it does - when it starts and when it gets a task to create or delete a volume
14:55:38 <whoami-rajat> can you show the place where you think it's performing calculations on the backend stats?
14:57:58 <IPO_> https://github.com/openstack/cinder/blob/master/cinder/volume/manager.py#L403
14:59:19 <whoami-rajat> IPO_, that is only done when we initialize the cinder volume host, it doesn't happen on every cinder volume create/delete operation
15:00:10 <IPO_> Sure, that is why when we restart cinder-volume it recalculates capacity and shows the correct value for a while
15:00:31 <whoami-rajat> yes
15:00:37 <whoami-rajat> else c-vol shouldn't be interfering with those values
15:00:40 <whoami-rajat> anyways we are out of time
15:00:47 <whoami-rajat> would be good to discuss this again next week
15:00:52 <whoami-rajat> when we have better team bandwidth
15:01:00 <whoami-rajat> right now many team members are in the vancouver summit
15:01:01 <IPO_> ok, thank you !
15:01:23 <whoami-rajat> thanks for bringing this up
15:01:26 <whoami-rajat> and thanks everyone for joining
15:01:29 <whoami-rajat> #endmeeting