whoami-rajat | #startmeeting cinder | 14:00 |
---|---|---|
opendevmeet | Meeting started Wed Jun 14 14:00:20 2023 UTC and is due to finish in 60 minutes. The chair is whoami-rajat. Information about MeetBot at http://wiki.debian.org/MeetBot. | 14:00 |
opendevmeet | Useful Commands: #action #agreed #help #info #idea #link #topic #startvote. | 14:00 |
opendevmeet | The meeting name has been set to 'cinder' | 14:00 |
enriquetaso | hi | 14:00 |
whoami-rajat | #topic roll call | 14:00 |
yuval | yo | 14:00 |
IPO_ | hi | 14:00 |
whoami-rajat | #link https://etherpad.opendev.org/p/cinder-bobcat-meetings | 14:01 |
whoami-rajat | some of the folks are at the Vancouver PTG so we might have lower attendance | 14:02 |
MatheusAndrade[m] | o/ | 14:02 |
helenadantas[m] | o/ | 14:02 |
luizsantos[m] | o/ | 14:02 |
tosky | o/ | 14:03 |
thiagoalvoravel | o/ | 14:03 |
whoami-rajat | good attendance | 14:04 |
whoami-rajat | let's get started | 14:04 |
Tony_Saad | o/ | 14:04 |
happystacker | Hello | 14:04 |
whoami-rajat | #topic announcements | 14:05 |
whoami-rajat | first, Cinder PTG Schedule | 14:05 |
whoami-rajat | #link https://lists.openstack.org/pipermail/openstack-discuss/2023-June/034056.html | 14:05 |
whoami-rajat | unfortunately i wasn't able to travel to vancouver but Brian is taking care of the PTG which is really great | 14:05 |
whoami-rajat | he sent out the schedule to ML | 14:05 |
whoami-rajat | cinder is going to have 3 sessions at the PTG given the smaller number of people attending | 14:06 |
whoami-rajat | Wednesday | 14:06 |
whoami-rajat | 10:20-10:50 Support for NVMe-OF in os-brick | 14:06 |
whoami-rajat | 15:50-16:20 Cinder Operator Half-Hour | 14:06 |
whoami-rajat | Thursday | 14:06 |
whoami-rajat | 16:40-17:10 Open Discussion with the Cinder project team | 14:06 |
whoami-rajat | since Vancouver is UTC-7, i think it will start later today | 14:06 |
whoami-rajat | I don't have info if there is a virtual thing planned but let's see if we get a summary or notes out of the sessions | 14:06 |
happystacker | good | 14:07 |
whoami-rajat | next, Cinder Operators event | 14:07 |
whoami-rajat | #link https://lists.openstack.org/pipermail/openstack-discuss/2023-June/034057.html | 14:07 |
happystacker | yeah it'd be great if we can have the minutes | 14:07 |
whoami-rajat | happystacker, yep, i can ask Brian if he can write those down from the sessions (if I'm able to make contact with him) | 14:08 |
whoami-rajat | maybe just drop a mail | 14:08 |
happystacker | ok cool, thanks rajat | 14:08 |
whoami-rajat | np | 14:08 |
whoami-rajat | so the same sessions are also good for operators to attend | 14:08 |
whoami-rajat | but we also have an additional forum session | 14:08 |
whoami-rajat | Forum session: | 14:09 |
whoami-rajat | Cinder, the OpenStack Block Storage service ... how are we doing? | 14:09 |
whoami-rajat | Looking for feedback from operators, vendors, and end-users | 14:09 |
whoami-rajat | #link https://etherpad.opendev.org/p/cinder-vancouver-forum-2023 | 14:09 |
whoami-rajat | Timing: 18:40 UTC - 19:10 UTC | 14:09 |
whoami-rajat | that's all about the Vancouver summit from Cinder perspective | 14:10 |
whoami-rajat | next, Spec freeze (22nd June) | 14:10 |
whoami-rajat | #link https://etherpad.opendev.org/p/cinder-2023-2-bobcat-specs | 14:10 |
whoami-rajat | we have the cinder spec deadline coming up | 14:11 |
whoami-rajat | i.e. 22nd June | 14:11 |
whoami-rajat | I've created the above etherpad to track the specs | 14:11 |
whoami-rajat | mainly the first two need reviews | 14:11 |
happystacker | Considering the bandwidth we have and the current date, I don't think we'll make it for https://review.opendev.org/c/openstack/cinder-specs/+/872019 | 14:12 |
happystacker | should be postponed to C release | 14:12 |
whoami-rajat | happystacker, do you mean the developer bandwidth or the reviewer bandwidth? I'm assuming the former | 14:13 |
happystacker | dev perspective | 14:13 |
whoami-rajat | if you feel it is hard to complete this cycle, we can surely push for next cycle | 14:13 |
happystacker | yeah that's what I mean to say | 14:14 |
happystacker | this is a good chunk of work | 14:14 |
whoami-rajat | sure, I will add a W-1 referencing our meeting discussion and we can come back to it next cycle | 14:14 |
whoami-rajat | thanks for the heads up happystacker | 14:15 |
happystacker | np, sorry for that | 14:15 |
whoami-rajat | no worries | 14:16 |
whoami-rajat | I've added a comment | 14:16 |
whoami-rajat | so we only have 1 spec to review now | 14:16 |
whoami-rajat | the other one is just a reproposal | 14:16 |
whoami-rajat | #link https://review.opendev.org/c/openstack/cinder-specs/+/868761 | 14:17 |
whoami-rajat | #link https://review.opendev.org/c/openstack/cinder-specs/+/877230 | 14:17 |
whoami-rajat | if anyone needs a quick link ^ | 14:17 |
whoami-rajat | next, Milestone-2 (06 July) | 14:17 |
whoami-rajat | #link https://lists.openstack.org/pipermail/openstack-discuss/2023-June/034062.html | 14:17 |
whoami-rajat | #link https://releases.openstack.org/bobcat/schedule.html#b-mf | 14:18 |
whoami-rajat | we have Milestone 2 upcoming | 14:18 |
whoami-rajat | along with which we have the volume and target driver merge deadline | 14:18 |
whoami-rajat | I've created an etherpad to track the drivers for this cycle | 14:18 |
whoami-rajat | #link https://etherpad.opendev.org/p/cinder-2023-2-bobcat-drivers | 14:18 |
whoami-rajat | so far I've added the Yadro FC driver and the Lustre driver | 14:18 |
whoami-rajat | but if you are planning to propose or have proposed any driver for the 2023.2 Bobcat cycle, please add it to the list | 14:19 |
happystacker | new driver you mean to say? | 14:19 |
whoami-rajat | yes | 14:19 |
whoami-rajat | new volume and target drivers | 14:20 |
happystacker | ok nothing new from our side for cinder | 14:20 |
happystacker | thanks | 14:20 |
whoami-rajat | ack, good to know | 14:20 |
whoami-rajat | ok, last announcement, Cinder Incremental backup working | 14:21 |
whoami-rajat | just for general awareness | 14:21 |
whoami-rajat | if anyone has doubts about how the drivers inheriting from ChunkedBackupDriver do incremental backups | 14:21 |
whoami-rajat | Gorka has an article written about it | 14:21 |
whoami-rajat | some of the info might be dated but the incremental mechanism should still be the same (i just took a quick glance and it looks the same) | 14:22 |
whoami-rajat | it came up on the ML so i thought others might have doubts about this too | 14:22 |
whoami-rajat | #link https://lists.openstack.org/pipermail/openstack-discuss/2023-June/034098.html | 14:22 |
happystacker | oh cool, will have a look | 14:22 |
whoami-rajat | link to gorka's article | 14:22 |
whoami-rajat | #link https://web.archive.org/web/20160407151329/http://gorka.eguileor.com/inside-cinders-incremental-backup | 14:22 |
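For readers of the log, a minimal sketch of the incremental mechanism Gorka's article describes: chunk-based backup drivers (those inheriting from ChunkedBackupDriver) keep a per-chunk SHA-256 list alongside each backup, and an incremental backup re-uploads only the chunks whose hashes differ from the parent backup's. The function names and fixed chunk size below are illustrative, not the actual cinder code:

```python
import hashlib

CHUNK_SIZE = 1024 * 1024  # illustrative; the real drivers use configurable chunk sizes

def chunk_hashes(volume_data: bytes) -> list[str]:
    """SHA-256 of each fixed-size chunk, stored alongside the backup."""
    return [hashlib.sha256(volume_data[i:i + CHUNK_SIZE]).hexdigest()
            for i in range(0, len(volume_data), CHUNK_SIZE)]

def changed_chunks(volume_data: bytes, parent_hashes: list[str]):
    """Yield (index, chunk) for chunks that differ from the parent backup."""
    for idx, digest in enumerate(chunk_hashes(volume_data)):
        start = idx * CHUNK_SIZE
        if idx >= len(parent_hashes) or parent_hashes[idx] != digest:
            yield idx, volume_data[start:start + CHUNK_SIZE]  # only these are uploaded
```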
enriquetaso | ++ | 14:23 |
whoami-rajat | that's all for announcements | 14:25 |
whoami-rajat | we also don't have any topic today | 14:25 |
whoami-rajat | let's move to open discussion | 14:25 |
whoami-rajat | #topic open discussion | 14:25 |
whoami-rajat | reminder to take a look at the review request patches | 14:25 |
IPO_ | I have one question if you don't mind | 14:26 |
whoami-rajat | sure | 14:27 |
Tony_Saad | i would like to discuss https://review.opendev.org/c/openstack/oslo.privsep/+/884344 | 14:27 |
IPO_ | Well, my question is described in https://lists.openstack.org/pipermail/openstack-discuss/2023-June/034042.html | 14:28 |
IPO_ | I'm able to provide additional comments if needed. Main question - is it a known issue and are there any plans to work on it? | 14:30 |
enriquetaso | #link HPE STORAGE CI | 14:30 |
enriquetaso | sorry | 14:30 |
enriquetaso | https://bugs.launchpad.net/cinder/+bug/2003179/comments/7 | 14:30 |
whoami-rajat | Tony_Saad, sure, what's your question | 14:31 |
Tony_Saad | for https://review.opendev.org/c/openstack/oslo.privsep/+/884344 my patch works. I tried the way Eric described and it did not hide the password. The only way i got the password hidden is with that patch | 14:31 |
whoami-rajat | IPO_, is that related to cinder A/A or the scheduler reporting wrong stats? also, how many schedulers are there in your deployment? | 14:31 |
whoami-rajat | Tony_Saad, will that set all privsep logs to logging level ERROR? that would be problematic when debugging issues in deployments ... | 14:33 |
enriquetaso | out of curiosity: why was the logging error added to `oslo_privsep/daemon.py` and not `os_brick/privileged/__init__.py`? | 14:33 |
IPO_ | whoami-rajat, this issue is related to A/A cinder-volume and doesn't depend on the number of cinder-scheduler instances. | 14:33 |
enriquetaso | oh sorry, because it's using the logger from oslo_log | 14:35 |
Tony_Saad | whoami-rajat, no, it only sets that one log line to error, but because it is set to debug it pretty much skips that logger. I am open to discussing and testing other ways but not sure how exactly eric wanted it done | 14:35 |
whoami-rajat | IPO_, ack, the volume driver reports the capabilities at a periodic interval (i think the default is 60 seconds) to the scheduler, and the get-pools call returns info from the scheduler cache | 14:37 |
whoami-rajat | though I'm not an expert in A/A and Gorka isn't around today | 14:37 |
eharney | this bug doesn't need to be fixed in privsep, it can be fixed from brick (i put some notes about how in the lp bug) | 14:37 |
Tony_Saad | eharney, i saw your notes and tried them but the password was still getting leaked. Possible that i did something wrong or missed something? | 14:38 |
eharney | i'd have to look at what you tried and dig into it, may be able to look at that next week | 14:39 |
eharney | the patch posted to privsep i think will disable some logging that is expected to be there | 14:39 |
Tony_Saad | Sure i can push a patch with the changes that you suggested and review | 14:40 |
Tony_Saad | but from my testing https://review.opendev.org/c/openstack/oslo.privsep/+/884344 only disabled that one log line that i changed | 14:41 |
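For context on the alternative being debated here: a common approach in OpenStack code is to sanitize the message before it is logged, rather than raising the log level of the line. A minimal sketch using oslo.utils' `mask_password` (this only illustrates the masking idea; it is not necessarily the os-brick fix Eric has in mind):

```python
from oslo_utils import strutils

# Sanitize the message so the secret never reaches the debug log,
# while the rest of the log line is preserved.
msg = "running privileged command with password = 'secret123'"
print(strutils.mask_password(msg))
# -> running privileged command with password = '***'
```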
IPO_ | rajat, thanks for the comment. Yes - it leads to incorrect reporting of allocated_capacity_gb for the pool, so 1. it's hard to understand how much capacity cinder has allocated for pools, and 2. some features like capacity reservation in cinder don't work either | 14:43 |
IPO_ | And it isn't clear why the related https://bugs.launchpad.net/cinder/+bug/1927186 is marked Incomplete | 14:45 |
whoami-rajat | I see a similar comment from Gorka about the number of schedulers being > 1, which is also my observation in some deployments | 14:46 |
whoami-rajat | if that's not the case with you, we can discuss this issue further | 14:46 |
enriquetaso | It was `incomplete` because I left a question a while ago and nobody changed it after that | 14:47 |
enriquetaso | I've updated it | 14:47 |
IPO_ | No, that isn't the case, as it reproduces with one cinder-scheduler too. With multiple instances of cinder-scheduler it gets even worse :) | 14:48 |
whoami-rajat | ack, got it. I think even with multiple cinder volume services in A/A, there should only be one that gets data from the backend and reports it to the scheduler periodically (again, not an A/A expert) | 14:49 |
IPO_ | enriquetaso, thanks for the comment - so should we reopen it, or should I report a new one? | 14:49 |
whoami-rajat | can you try lowering the reporting time interval and see if the issue persists? | 14:50 |
whoami-rajat | https://github.com/openstack/cinder/blob/d7ae9610d765919660a9f7a8769478f0b6e0aadf/cinder/volume/manager.py#L135-L142 | 14:51 |
whoami-rajat | i mean setting backend_stats_polling_interval to a value lower than 60 seconds | 14:51 |
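For reference, that tuning would look like this in cinder.conf (a hedged sketch: `lvm-1` is a hypothetical backend section name, and 30 is just an example value; the default is 60 — depending on how your backends are configured the option may instead belong in `[DEFAULT]`):

```ini
[lvm-1]
# Seconds between backend stats reports to the scheduler (default: 60).
backend_stats_polling_interval = 30
```

After restarting the volume service, `cinder get-pools --detail` should then reflect fresher stats from the scheduler cache.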
enriquetaso | 1927186 is open | 14:51 |
IPO_ | rajat, each cinder-volume keeps a local allocated_capacity_gb and periodically reports it to the scheduler. Each time cinder-volume gets a new task to create a volume, it increases the local value and reports it back to the scheduler | 14:52 |
enriquetaso | I'm not sure if it's the same bug or not... open a new bug report if it's not related to 1927186 | 14:52 |
whoami-rajat | that shouldn't be the case; only the scheduler keeps the pool data in cache, cinder-volume's purpose is to get the pool data and send it to the scheduler | 14:53 |
happystacker | I need to drop guys, thanks all | 14:53 |
happystacker | and happy summit for the lucky ones | 14:53 |
whoami-rajat | also the allocated_capacity_gb is increased/decreased by the scheduler only | 14:53 |
IPO_ | I even saw a negative value of allocated capacity | 14:53 |
whoami-rajat | cinder volume shouldn't be performing any calculations on the backend stats | 14:54 |
IPO_ | Looks like it does - when it starts and when it gets a task to create or delete a volume | 14:54 |
whoami-rajat | can you show the place where you think it's performing calculations on the backend stats? | 14:55 |
IPO_ | https://github.com/openstack/cinder/blob/master/cinder/volume/manager.py#L403 | 14:57 |
whoami-rajat | IPO_, that is only done when we initialize the cinder volume host; it doesn't happen on every cinder volume create/delete operation | 14:59 |
IPO_ | Sure, that is why when we restart cinder-volume it recalculates capacity and shows the correct value for a while | 15:00 |
whoami-rajat | yes | 15:00 |
whoami-rajat | else c-vol shouldn't be interfering with those values | 15:00 |
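To make the restart behavior discussed above concrete, here is a hedged sketch of the recount idea (illustrative names only, not the actual VolumeManager code linked at manager.py#L403): on startup the service rebuilds allocated capacity by summing the sizes of existing volumes per pool, which is why values look correct for a while after a restart.

```python
from collections import defaultdict

def recount_allocated_capacity(volumes):
    """volumes: iterable of objects with .pool_name and .size (in GiB)."""
    allocated = defaultdict(int)
    for vol in volumes:
        allocated[vol.pool_name] += vol.size  # rebuild from scratch
    return dict(allocated)  # pool name -> allocated_capacity_gb
```

If the values later drift (down to the negative numbers IPO_ mentioned), the drift would have to come from the incremental +/- bookkeeping afterwards, not from this startup recount.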
whoami-rajat | anyways we are out of time | 15:00 |
whoami-rajat | would be good to discuss this again next week | 15:00 |
whoami-rajat | when we have better team bandwidth | 15:00 |
whoami-rajat | right now many team members are in the vancouver summit | 15:01 |
IPO_ | ok, thank you ! | 15:01 |
whoami-rajat | thanks for bringing this up | 15:01 |
whoami-rajat | and thanks everyone for joining | 15:01 |
whoami-rajat | #endmeeting | 15:01 |
opendevmeet | Meeting ended Wed Jun 14 15:01:29 2023 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) | 15:01 |
opendevmeet | Minutes: https://meetings.opendev.org/meetings/cinder/2023/cinder.2023-06-14-14.00.html | 15:01 |
opendevmeet | Minutes (text): https://meetings.opendev.org/meetings/cinder/2023/cinder.2023-06-14-14.00.txt | 15:01 |
opendevmeet | Log: https://meetings.opendev.org/meetings/cinder/2023/cinder.2023-06-14-14.00.log.html | 15:01 |
IPO_ | enriquetaso, thank you for reopening 1927186 | 15:06 |
enriquetaso | IPO_, it was never closed lol | 15:55 |
enriquetaso | i've just moved the status to `new` | 15:55 |