14:00:08 <whoami-rajat> #startmeeting cinder
14:00:08 <opendevmeet> Meeting started Wed Jul 12 14:00:08 2023 UTC and is due to finish in 60 minutes.  The chair is whoami-rajat. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:08 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:08 <opendevmeet> The meeting name has been set to 'cinder'
14:00:10 <whoami-rajat> #topic roll call
14:00:24 <enriquetaso> hell
14:00:27 <enriquetaso> hello*
14:00:35 <Tony_Saad> hi
14:00:37 <simondodsley> o/
14:00:45 <crohmann> o/
14:00:55 <rosmaita> o/
14:00:58 <caiqilong> o/
14:01:42 <thiagoalvoravel> o/
14:01:47 <jungleboyj> o/
14:01:48 <keerthivasansuresh> o/
14:01:48 <jbernard> o/
14:01:55 <felipe_rodrigues> o/
14:01:57 * jungleboyj has returned from Covid hell.
14:02:09 <rosmaita> welcome back!
14:02:13 <nahimsouza[m]> o/
14:02:15 <MatheusAndrade[m]> o/
14:02:17 <luizsantos[m]> o/
14:02:20 <helenadantas[m]> o/
14:02:32 <whoami-rajat> #link https://etherpad.opendev.org/p/cinder-bobcat-meetings
14:05:12 <enriquetaso> jungleboyj++
14:05:53 <whoami-rajat> jungleboyj, good to know that you are feeling better
14:05:56 <whoami-rajat> good turnout
14:05:59 <whoami-rajat> let's get started
14:06:02 <whoami-rajat> #topic announcements
14:06:11 <jungleboyj> Thanks.  :-)
14:06:25 <whoami-rajat> :)
14:06:28 <whoami-rajat> first, PTG Registration
14:06:37 <whoami-rajat> next PTG for CC cycle is going to be virtual
14:06:45 <whoami-rajat> #link https://lists.openstack.org/pipermail/openstack-discuss/2023-July/034363.html
14:07:57 <jungleboyj> Yay!
14:08:05 <whoami-rajat> Date: October 23-27, 2023
14:08:28 <whoami-rajat> Registration link
14:08:29 <whoami-rajat> #link https://ptg2023.openinfra.dev/
14:10:05 <whoami-rajat> so please register if you are planning to be there
14:10:12 <whoami-rajat> there is no fee since it's a virtual event
14:10:19 <whoami-rajat> but the registration is going to act as a head count
14:10:28 <whoami-rajat> next, Extended Driver merge Deadline
14:10:34 <whoami-rajat> #link https://lists.openstack.org/pipermail/openstack-discuss/2023-July/034360.html
14:11:04 <whoami-rajat> since the drivers lacked review, we planned to extend the deadline to this week
14:11:10 <whoami-rajat> New deadline is 14th July
14:11:17 <whoami-rajat> here is the list of drivers
14:11:19 <whoami-rajat> #link https://etherpad.opendev.org/p/cinder-2023-2-bobcat-drivers
14:11:50 <whoami-rajat> let's discuss the driver status
14:11:54 <whoami-rajat> 1. TOYOU NetStor TYDS
14:12:01 <whoami-rajat> #link https://review.opendev.org/c/openstack/cinder/+/886942
14:12:13 <whoami-rajat> It's already been reviewed by Eric, Brian, and me
14:12:25 <whoami-rajat> since I was a late reviewer, the comments from Eric and Brian have already been addressed
14:12:38 <rosmaita> (you can see that we all caught different issues, which is why it's good to have multiple reviewers!)
14:12:48 <whoami-rajat> thanks caiqilong for quickly responding to the feedback
14:12:55 <whoami-rajat> rosmaita++
14:13:24 <caiqilong> thanks for reviewing it.
14:13:26 <whoami-rajat> right now my main concern is that the client isn't tested and could have syntax errors that might fail driver operations
14:13:31 <whoami-rajat> so we should have some code coverage there
14:13:39 <whoami-rajat> I'm OK if we do that as a followup
14:13:50 <whoami-rajat> but wanted a confirmation on it
14:14:55 <caiqilong> yes, I will do it as a followup.
14:15:31 <rosmaita> a followup sounds ok to me
14:15:58 <whoami-rajat> great thanks
14:16:09 <caiqilong> I have a question about "the client isn't tested"
14:16:21 <whoami-rajat> sure, go ahead
14:16:31 <caiqilong> is it the code coverage test?
14:16:33 <jungleboyj> That is consistent with what we have done in the past.  We have allowed for test coverage to be improved after merge.
14:16:56 <rosmaita> code coverage: https://6d3a6db80c7cdddc63a0-6a3155c041608419d4d57ad6e32791fa.ssl.cf1.rackcdn.com/886942/10/check/cinder-code-coverage/5118ae5/cover/
14:17:21 <rosmaita> i guess this is the main file: https://6d3a6db80c7cdddc63a0-6a3155c041608419d4d57ad6e32791fa.ssl.cf1.rackcdn.com/886942/10/check/cinder-code-coverage/5118ae5/cover/d_36619f643b600f10_tyds_client_py.html
14:18:21 <caiqilong> thanks, I will recheck later.
14:18:37 <whoami-rajat> caiqilong, yes, once you start writing test for it, the red lines in the coverage (as shared by rosmaita ) will disappear
14:18:38 <caiqilong> what about the translation.
14:19:05 <rosmaita> caiqilong: do you know how to find the coverage results in gerrit?
14:19:57 <rosmaita> you can also run them  locally with 'tox -e cover'
14:20:01 <caiqilong> I think you open the zuul check result link
14:20:14 <rosmaita> that's right
14:20:44 <rosmaita> this was my comment about the logs: About the En/Zh logs ... we should discuss at the cinder meeting. The logs aren't translated for openstack because operators decided that it was better to have log messages in English only to make it more likely that you'd find something when searching the web. But I can see how it would be useful to have logs in a local language also for day-to-day monitoring of the system.
14:20:55 <caiqilong> what coverage rate should the driver reach?
14:21:45 <rosmaita> caiqilong: we don't have a specific rate, the idea is to add tests that make sense to make sure that the requests you are sending (for any complicated requests) look right
14:22:13 <rosmaita> the reason is so that when people fix bugs, if they make a bad change, it can get caught by the tests
14:23:09 <rosmaita> so like if you pass a pagination parameter or something to your client, make sure that it appears correctly in the request the client makes
14:24:38 <caiqilong> rosmaita: thanks, I will add proper unit tests for the client too.
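rosmaita's advice above can be sketched roughly as follows. Note this is a minimal illustration, not the actual TOYOU client API: `FakeClient`, `list_volumes`, and the `/volumes` endpoint are hypothetical names; the idea is simply to mock the HTTP layer and assert that the request the client builds looks right.

```python
from unittest import mock

# Hypothetical client standing in for a driver's REST client; the real
# class and method names in any given driver will differ.
class FakeClient:
    def __init__(self, session):
        self._session = session

    def list_volumes(self, page=1, page_size=100):
        # The pagination parameters must end up in the outgoing request.
        return self._session.get('/volumes',
                                 params={'page': page, 'page_size': page_size})

def test_list_volumes_passes_pagination():
    session = mock.Mock()
    client = FakeClient(session)
    client.list_volumes(page=2, page_size=50)
    # If a later bug fix drops or renames a parameter, this assertion
    # catches it before the change merges.
    session.get.assert_called_once_with(
        '/volumes', params={'page': 2, 'page_size': 50})

test_list_volumes_passes_pagination()
```

A test like this exercises the request-building path without needing a real backend, which is what lets the red lines in the coverage report disappear.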
14:25:12 <rosmaita> but back to the logs, if you want to have messages appear *both* in en and zh, i don't think there's a way to do that without having 2 loggers like you do in your patch
14:25:22 <rosmaita> i could be wrong about that, though
14:25:51 <whoami-rajat> rosmaita, in our driver checklist, it says that warning logs should be translated
14:26:32 <rosmaita> i thought there was a hacking rule to make sure the _() doesn't occur in logging contexts?
14:26:33 <whoami-rajat> rosmaita, also if we use i18n, doesn't the log get translated? I mean does it never get translated or it's optional?
14:27:04 <whoami-rajat> All exception messages that could be raised to users should be marked for translation with _()
14:27:37 <rosmaita> that's correct
14:28:02 <caiqilong> about the log, I think it's better to translate on the storage system side, instead of in the cinder driver.
14:28:16 <jungleboyj> whoami-rajat:  Right
14:29:03 <rosmaita> it's C312 = checks:no_translate_logs
14:30:06 <rosmaita> caiqilong: is your goal to have both en and zh messages in the logs?
14:31:17 <caiqilong> it's not a mandatory requirement
14:31:45 <rosmaita> i see how it could be useful, though
14:32:06 <caiqilong> is "C312 = checks:no_translate_logs" a checklist items from the checklist link?
14:32:46 <rosmaita> https://opendev.org/openstack/cinder/src/branch/master/tox.ini#L273
14:33:02 <rosmaita> it's a rule that is checked when you run 'tox -e pep8'
14:33:58 <rosmaita> it is slightly different from what you are doing, you are using two non-translated logs, one in en and one in zh
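The convention being discussed can be sketched minimally like this. The `_` function is defined inline as a stand-in so the sketch runs on its own; in a real driver it comes from Cinder's i18n module, and `delete_volume` is a hypothetical example, not code from the patch under review.

```python
import logging

LOG = logging.getLogger(__name__)

# Stand-in for Cinder's translation marker; a real driver imports it
# from the project's i18n module instead of defining it.
def _(msg):
    return msg

def delete_volume(volume_id, in_use):
    if in_use:
        # Exception messages that can reach users are marked with _()
        # so they can be translated.
        raise RuntimeError(_("Volume %s is in use.") % volume_id)
    # Log messages are NOT wrapped in _(): the C312 hacking check
    # (no_translate_logs) rejects _() in logging calls, and logs stay
    # in English so operators can search the web for them.
    LOG.warning("Deleting volume %s.", volume_id)
```

Running `tox -e pep8` flags any logging call that wraps its message in `_()`, which is why a translated-log approach needs something other than the standard marker.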
14:34:07 <caiqilong> I am going to log en messages to the storage system, and then when the storage system displays logs, it can choose whether to translate them to zh or not.
14:35:21 <rosmaita> if you can do that, it would be better
14:35:58 <caiqilong> yes, I will discard the zh one.
14:36:07 <whoami-rajat> thanks rosmaita and caiqilong for the discussion
14:36:24 <rosmaita> sounds good, we can always revisit this if necessary later
14:36:33 <whoami-rajat> if everything is addressed, i think we should move on with the meeting since we have more topics to discuss
14:37:03 <whoami-rajat> yes, we can always continue the discussion in #openstack-cinder
14:37:31 <caiqilong> sorry for the delay, thanks.
14:37:48 <whoami-rajat> no problem, good to see the issues are addressed
14:37:52 <whoami-rajat> let's move to the second driver
14:37:53 <whoami-rajat> 2. Yadro FC Driver
14:38:00 <whoami-rajat> #link https://review.opendev.org/c/openstack/cinder/+/876743
14:38:11 <whoami-rajat> my comments are addressed and I will revisit it
14:38:31 <whoami-rajat> it's in a pretty good state since a lot of common code already existed
14:39:27 <whoami-rajat> but would be good to get more eyes on it
14:40:21 <whoami-rajat> 3. Lustre Driver
14:40:34 <whoami-rajat> #link https://review.opendev.org/q/topic:bp%252Fadd-lustre-driver
14:40:40 <whoami-rajat> I don't see the CI reporting yet
14:40:51 <whoami-rajat> so we might postpone it to the next cycle
14:41:20 <jbernard> o/ i can look at the yadro driver today
14:42:04 <whoami-rajat> great, thanks!
14:43:02 <whoami-rajat> last announcement, EM discussion
14:43:16 <whoami-rajat> #link https://lists.openstack.org/pipermail/openstack-discuss/2023-July/034350.html
14:43:22 <whoami-rajat> TC started the discussion on the ML
14:43:30 <whoami-rajat> there is also a patch proposed to the governance project
14:43:38 <whoami-rajat> #link https://review.opendev.org/c/openstack/governance/+/887966
14:43:45 <rosmaita> and a long discussion at the TC meeting yesterday
14:44:13 <whoami-rajat> #link https://etherpad.opendev.org/p/openstack-exteneded-maintenance
14:44:45 <whoami-rajat> haven't followed it, i will take a look at the discussion
14:44:54 <rosmaita> real quickly:
14:45:15 <rosmaita> we will stop calling them "extended maintenance", probably will be "unsupported"
14:45:28 <rosmaita> they won't have 'stable' in the branch name
14:45:51 <simondodsley> yea
14:45:51 <rosmaita> there will be a separate set of cores to watch those branches
14:46:03 <rosmaita> (which could be cinder-core, but doesn't have to be)
14:46:25 <rosmaita> the projects do not have *any* responsibility for those un-stable branches
14:46:58 <rosmaita> kristi should have updates to the proposals posted today
14:47:19 <whoami-rajat> thanks Brian, that sounds like a lot of improvement from our current model
14:47:34 <rosmaita> i'm not real clear on how EOL will work, though
14:48:12 <rosmaita> (that's all)
14:48:29 <whoami-rajat> EOL should work the same right? no branch in gerrit and final release tagged?
14:48:37 <whoami-rajat> as branch-EOL
14:49:08 <rosmaita> well, there's the coordination across projects issue still
14:49:56 <rosmaita> but the key thing is it will be clear that even if the branches exist, we have no responsibility/make no guarantees about them
14:50:02 <whoami-rajat> yeah, i thought TC/release team will do the coordination instead of project teams doing it individually, maybe project teams could vote on the branch
14:50:28 <whoami-rajat> rosmaita, yes, that's the major improvement, and addresses our bandwidth concerns
14:50:42 <rosmaita> yep
14:51:21 <whoami-rajat> cool
14:51:37 <whoami-rajat> anyone who is interested, feel free to follow the discussion in above links
14:51:42 <whoami-rajat> let's move to topics
14:51:50 <whoami-rajat> that's all for announcements
14:51:55 <whoami-rajat> #topic [Telemetry] How to properly deliver cinder metrics to Ceilometer
14:52:00 <whoami-rajat> crohmann, that's you
14:52:40 <crohmann> short and sweet: could someone who knows please answer my question (on the ML?), I will then gladly push a documentation patch.
14:53:07 <crohmann> (to fix the documentation bug I opened)
14:56:13 <whoami-rajat> I'm not sure about that
14:56:17 <enriquetaso> email: https://lists.openstack.org/pipermail/openstack-discuss/2023-June/034192.html
14:56:21 <enriquetaso> #link https://lists.openstack.org/pipermail/openstack-discuss/2023-June/034192.html
14:56:26 <whoami-rajat> if anyone uses telemetry in their deployment, please take a look ^
14:56:53 <whoami-rajat> thanks enriquetaso
14:57:02 <zaitcev> Wait, anyone still uses Ceilometer?!
14:57:07 <whoami-rajat> #link https://bugs.launchpad.net/ceilometer/+bug/2024475
14:57:08 <whoami-rajat> bug report ^
14:57:26 <whoami-rajat> zaitcev, that's a good question
14:57:45 <zaitcev> Everyone I know have switched to Prometheus (sometimes behind statsd for collection).
14:57:48 <whoami-rajat> crohmann, do you want to quickly discuss your other topics or should we move them to next week?
14:58:15 <crohmann> The next two are simply about me asking for help :-)
14:58:17 <rosmaita> crohmann: from the cinder side, your point (2) makes sense to me
14:58:28 <rosmaita> (point (2) in your email)
14:58:53 <whoami-rajat> crohmann, ack
14:58:59 <whoami-rajat> so i will quickly mention the next 2 topics
14:59:09 <whoami-rajat> #link [Cinder-Backup]  Spec to introduce a backup_status field for volumes and a split-up of the backup status away from the volume_status
14:59:28 <whoami-rajat> if there is anyone interested in implementing this, please contact crohmann and he can help you out with the details
14:59:38 <whoami-rajat> #topic  [Cinder-Backup]  Spec to introduce a backup_status field for volumes and a split-up of the backup status away from the volume_status
14:59:45 <whoami-rajat> sorry my bad, not a link but it's a topic
14:59:57 <whoami-rajat> #topic [Cinder-Backup] Performance issues with chunked driver
15:00:17 <whoami-rajat> same with this one, if anyone would like to work on this, help is appreciated
15:00:33 <zaitcev> I think jbernard has staked a claim on the backup performance work.
15:00:56 <whoami-rajat> yes, correct
15:01:04 <whoami-rajat> i think jbernard is around today
15:01:05 <crohmann> I also have Enrico Bocchi from CERN, who observed the same issues.
15:03:23 <whoami-rajat> we're over time
15:03:29 <crohmann> zaitcev: Regarding Ceilometer. I don't want to add any fuel to any EoL discussions, but Ceilometer is communicated as THE telemetry solution for OpenStack. We love Prometheus and use it, but question is what a proper data source for OpenStack would be? https://github.com/openstack-exporter/openstack-exporter?
15:03:29 <whoami-rajat> thanks everyone for attending
15:03:38 <whoami-rajat> and please take a look at review request section
15:03:54 <jungleboyj> Thanks whoami-rajat !
15:03:56 <whoami-rajat> we can continue the discussion in #openstack-cinder after BS meeting
15:03:59 <whoami-rajat> #endmeeting