14:01:13 #startmeeting cinder
14:01:14 Meeting started Wed Mar 10 14:01:13 2021 UTC and is due to finish in 60 minutes. The chair is rosmaita. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:01:15 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:01:17 The meeting name has been set to 'cinder'
14:01:30 o/
14:01:37 o/
14:02:04 Hi
14:02:19 hi
14:02:30 hi
14:03:22 hi
14:03:26 hi
14:03:45 hi
14:05:03 hello everyone
14:05:06 hi!
14:05:25 #link https://etherpad.openstack.org/p/cinder-wallaby-meetings
14:05:31 #topic announcements
14:06:15 the Xena PTG has been scheduled
14:06:25 19-23 April
14:06:40 it's free to attend but you need to register
14:06:43 #link http://lists.openstack.org/pipermail/openstack-discuss/2021-March/020778.html
14:06:46 info ^^
14:06:59 we need to start planning
14:07:03 #link https://etherpad.opendev.org/p/xena-ptg-cinder-planning
14:07:24 first thing is to decide what times we will meet
14:07:27 and for how long
14:07:43 i think we should stick with what we've done for the past virtual PTGs
14:07:49 +1
14:07:56 +1
14:07:57 rosmaita: ++
14:08:00 but, i am open to ideas to accommodate people who those times don't work for
14:08:40 so if the stated times are bad for you, let me know either on the etherpad or in irc
14:08:54 the other question is about how long to meet each day
14:09:03 we've been doing 3 hour blocks, but could do 4
14:09:48 i think 3 hours has been working ok
14:10:06 3 hours sounds better to me. It's easier to dedicate a time
14:10:24 rosmaita: ++
14:10:35 3 hours sounds good
14:11:10 3 hours works for me
14:11:15 ok, guess there's a consensus, we will stick with 3 hours
14:11:30 that was easy!
14:11:35 next item
14:11:45 :-)
14:11:45 os-brick 4.3.0 has been released and the stable/wallaby branch has been cut
14:12:06 you may have seen a complaint on the ML about the aggressive update to requirements
14:12:30 not sure there's anything we can do about that now
14:12:38 or if i want to, anyway
14:12:54 key point for this meeting is that if you find a bug in os-brick 4.3.0
14:13:03 bugfixes must be proposed to master and backported to stable/wallaby
14:13:08 adding to that, we have similar bumps in all stable releases
14:13:43 so os-brick master is now xena development
14:13:46 welcome to the future
14:14:10 :-)
14:14:23 ok, this week is cinderclient release and the feature freeze
14:14:25 :)
14:14:29 also requirements freeze
14:14:52 Where does the time go?
14:14:57 no kidding
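[Editor's note: the master-first backport rule stated above (14:13:03) is normally carried out with a cherry-pick through gerrit. A rough sketch, assuming the standard git-review tooling; the commit hash and branch name here are placeholders:]

```
# sketch of the master-first bugfix flow for os-brick 4.3.0 bugs
# (fix lands on master via a normal gerrit review first)
git checkout -t origin/stable/wallaby
git cherry-pick -x <master-commit-sha>   # -x records the original commit id
git review stable/wallaby                # propose the backport for review
```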
14:15:04 ok, as far as feature freeze goes
14:15:16 we are working from the blueprints in launchpad for wallaby
14:15:46 #link https://blueprints.launchpad.net/cinder/wallaby
14:16:24 so, if you have a feature listed in there, it will be really helpful if you make sure links to your patches are in the "Whiteboard" section of your BP
14:17:55 except for the cinderclient patches we are about to discuss, those are the review top priority for the next few days
14:18:04 #topic cinderclient release
14:18:23 i have made a handy etherpad
14:18:26 #link https://etherpad.opendev.org/p/wallaby-cinderclient
14:18:56 the brick extension just has one open easy maintenance patch
14:19:28 cinderclient has a bunch of maintenance patches
14:19:34 should be quick reviews
14:20:51 important patch is the version bump
14:21:12 i will reach out to cores personally once the release notes and requirements patches are ready
14:21:32 looking at the open reviews
14:21:35 #link https://review.opendev.org/q/project:openstack/python-cinderclient+status:open
14:21:56 i don't see anything else other than maybe the client cert patch
14:22:04 but i haven't looked at/thought about it
14:22:28 any opinions?
14:22:37 it seems kind of risky to slip in at the last minute to me
14:23:57 that is my thought
14:24:15 we will push that one to xena
14:24:18 presumably work would be happening in other clients/services for mtls too if this is of interest, we should see what
14:24:29 rosmaita: ++ Good call.
14:24:35 if it turns out to be important, we can do a very early xena release that will be compatible with wallaby
14:24:41 I started looking at it yesterday and had the same thought.
14:25:06 ok, thanks
14:25:27 you will notice the absence of a v2-support-removal patch
14:25:52 i am in the middle of one, but i am not going to try to rush it in
14:26:16 i don't think we'll be removing v2 from cinder this cycle either
14:26:26 but, i think it should be top priority for xena
14:26:39 both client and cinder
14:26:49 we will organize it at the PTG
14:27:17 last item
14:27:42 looks like i need to put together an etherpad of the cinder feature reviews
14:27:47 i will do that after the meeting
14:28:00 asking people to go to launchpad and then find the reviews doesn't seem to be working
14:28:22 i will send the link to the ML and #openstack-cinder when it's ready
14:28:39 only items on that list will be eligible for feature freeze
14:28:55 so watch for it and check to make sure your reviews are on it
14:29:12 rosmaita: one question, are driver features also included in the BPs?
14:29:23 whoami-rajat: yes, they are supposed to be
14:29:39 that's the "driver features declaration" deadline on the release schedule
14:29:40 then i think zadara has one feature that they haven't created a bp for
14:29:42 https://review.opendev.org/c/openstack/cinder/+/774463
14:30:15 i will have to check, i thought there was a bp for that
14:30:32 then i might've missed it, my bad
14:30:38 or i did
14:31:13 but we've discussed that patch at a previous meeting, so i think it should be included as a wallaby feature
14:31:30 i don't find anything with zadara here https://blueprints.launchpad.net/cinder/wallaby
14:32:59 ok, i will fix that
14:33:20 thanks
14:33:31 one note to people with driver patches
14:33:38 some of the CIs are not responding automatically
14:34:04 so please check (a) your CI, and (b) run it on your patch if you have to
14:34:25 a lot of people prioritize reviews by whether the patch is passing its own CI
14:34:41 so make sure the info is on the review for everyone to see
14:34:43 also
14:35:02 please follow the release note format so we don't waste cycles re-submitting
14:35:28 #link https://docs.openstack.org/cinder/latest/contributor/releasenotes.html
14:35:29 ack
14:35:33 thanks!
14:35:54 ok, that's all from me
14:36:10 #topic Cinder backup - upload chunk in parallel (ThreadPoolExecutor)?
14:36:20 heh
14:36:22 i think this was left over from last week, not sure whose item it is
14:36:38 is there a review for it?
14:37:10 not sure, but i have one for you hemna: https://review.opendev.org/c/openstack/rbd-iscsi-client/+/774748
14:37:25 ok, we will postpone this topic
14:37:30 ok cool thanks.
14:37:40 https://review.opendev.org/c/openstack/cinder/+/779233 i think
14:37:52 we have been seeing some perf issues with chunked backup driver and are looking into parallelism as well
14:38:09 2TB volume backups take like 20+ hours :(
14:38:31 :-(
14:38:56 does parallelism fix that?
14:39:13 yah, I think we are cpu bound
14:39:16 the patch looks good but we need to be careful, it could affect several drivers
14:39:46 ok, then i don't want to rush it in before FF
14:39:52 if nothing else that patch needs an option added for # of threads IMO, also needs careful review
14:39:53 let's push it to Xena
14:40:02 nah, better be safe
14:40:11 eharney: ++
14:40:28 ++
14:40:43 iirc, swift will throttle writes to a single container, so it can be counterproductive if you're not careful
14:40:47 ok, moving along
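[Editor's note: the patch under discussion (https://review.opendev.org/c/openstack/cinder/+/779233) parallelizes chunk uploads in the chunked backup driver. A minimal sketch of the general technique only, not the actual patch; upload_chunk() and the chunk list are hypothetical stand-ins, and per the review feedback above, the worker count should be an operator-configurable option:]

```python
# Minimal sketch of uploading backup chunks in parallel; not the code
# from the review above.  upload_chunk() and chunks are hypothetical.
from concurrent.futures import ThreadPoolExecutor, as_completed

def upload_chunks(chunks, upload_chunk, max_workers=4):
    # max_workers should come from a config option; note also that too
    # many concurrent writers can trip per-container throttling on the
    # swift side, as pointed out in the meeting.
    results = [None] * len(chunks)
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        futures = {pool.submit(upload_chunk, c): i
                   for i, c in enumerate(chunks)}
        for fut in as_completed(futures):
            # .result() re-raises the first failure; a real driver
            # would also need to clean up already-uploaded chunks.
            results[futures[fut]] = fut.result()
    return results
```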
14:40:50 #topic random issues with tgt, breakages in some jobs, proposal to switch the default backend to lioadm
14:40:55 tosky: that's you
14:41:21 * lyarwood is also here \o
14:41:21 I'm merely the messenger; lyarwood couldn't attend, but you can see the report in the etherpad
14:41:24 oh
14:41:26 * tosky hides
14:41:32 ok, thanks
14:41:38 hey sorry childcare means I wasn't sure if I'd make it
14:41:39 everyone look at the etherpad
14:41:44 oh, wait
14:41:48 ok lyarwood you have the floor
14:41:52 I was going to say "the proposal is to switch the default to lioadm in ubuntu too"
14:42:01 thanks, so yeah lots of detail in the pad
14:42:20 but tosky is correct that's my current proposal given the failures I've seen with encrypted volume tests
14:42:42 I'm still unable to reproduce issues we've seen in the nova-live-migration job when detaching volumes that I think is related to this
14:43:48 using lioadm on Bionic doesn't appear possible so we will need to pin the grenade jobs to tgtadm
14:44:09 tosky: to reply to your point in the pad AFAICT the grenade job that failed was using Bionic
14:45:17 i think lio should work on bionic
14:45:38 at least, not sure why it wouldn't
14:45:49 I assumed so but python3-rtslib-fb fails to install with `Failed to start rtslib-fb-targetctl.service: Unit rtslib-fb-targetctl.service is not loaded`
14:46:45 huh, i'll leave figuring that out for later
14:47:02 anyway appreciate there's lots of context to load here so I'm just looking for any reason not to do this after M3
14:47:39 lyarwood: uhm, that's strange, as victoria is focal already (it was one of the goals), so the starting point for victoria should be focal
14:47:43 lyarwood: I will ask gmann
14:48:20 tosky: Yeah, I thought we had moved to focal.
14:48:22 well, lio should in general be a better solution, but i am concerned that our lio-barbican gate job fails a lot, possibly due to a bug in the target code
14:48:24 i am worried about doing this as we approach RC time
14:48:40 rosmaita: +1
14:48:43 so if we start running it more places we may have to get a lot more engaged on figuring out what is going on there than we have been
14:48:56 so the flip side is that we have a huge uptick in volume detach failures across the gate
14:48:58 yes, if our lio-barbican job is a guide, lio is not going to increase stability
14:49:22 kk I wasn't aware of the lioadm job being unstable
14:49:30 but to finish my point
14:49:56 if I can reproduce the detach issue and show it's related to the use of tgtadm then I think the move to lioadm would still be valid ahead of rc
14:50:34 the encrypted volume failure I've reproduced thus far hasn't caused anywhere near as many failures
14:51:12 and regardless of the decision, https://review.opendev.org/c/openstack/cinder-tempest-plugin/+/779697 makes sense anyway on its own
14:51:35 yeah correct
14:52:01 ok, well looking at the build history, maybe that job isn't as awful as experience indicates
14:52:07 https://zuul.opendev.org/t/openstack/builds?job_name=cinder-tempest-plugin-lvm-lio-barbican&project=openstack%2Fcinder
14:52:40 ok, let's revisit this next week
14:52:42 thanks lyarwood
14:52:52 That still looks like quite a few failures.
14:52:54 ack thanks rosmaita
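[Editor's note: the proposal boils down to flipping the iSCSI target helper used by the LVM driver in CI jobs. A sketch of what the switch looks like, assuming the standard cinder.conf option (target_helper, formerly iscsi_helper) and the devstack variable in use at the time; verify both names against the release in question:]

```
# cinder.conf -- sketch of switching the LVM backend's target driver
[lvmdriver-1]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
target_helper = lioadm    # tgtadm has been the Ubuntu default

# devstack local.conf equivalent (variable name is an assumption):
CINDER_ISCSI_HELPER=lioadm
```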
14:53:00 #topic bug report future
14:53:03 enriquetaso: that's you
14:53:07 o/
14:53:15 In the last weekly meetings we were not able to cover all the topics proposed on the agenda. For this reason and considering the productivity of discussing the bugs of the week with the team, I would like to propose an additional mini meeting to this one. The new meeting would begin exactly after this one, it would last half an hour and it would be on the specific channel of cinder.
14:53:27 enriquetaso: ++
14:53:32 I have not sent an email to the mailing list yet, but if it's okay with you, I'll do it this week.
14:53:39 \o/
14:53:46 do we want to start today?
14:53:50 yes
14:53:56 works for me
14:54:06 any objections?
14:54:10 Makes sense.
14:54:10 nope
14:54:16 Since we often use this whole time.
14:54:25 sounds good
14:54:32 great, let's move along then. thanks, sofia
14:54:40 #topic stable branch updates
14:54:47 whoami-rajat: you have the floor
14:54:51 Hi
14:55:02 as mentioned in the etherpad, victoria is released \o/
14:55:11 ussuri release patch is up
14:55:30 \o/
14:55:31 but we still have unmerged patches in train, specifically cinder (i guess 7 patches?) and 1 in os-brick
14:56:12 ok, i think we will have to postpone train release until later next week
14:56:21 i will update the tracker with all unmerged patches but for the time being please review https://review.opendev.org/#/q/project:openstack/cinder+status:open+branch:stable/train
14:56:30 need all reviewers focused on cinderclient today and features for the rest of the week
14:56:42 rosmaita: yep, makes sense
14:56:53 ok, thanks for organizing this
14:57:04 ++
14:57:04 np
14:57:15 #topic open discussion
14:57:25 reminder: the cinder meeting is in UTC
14:57:44 so for some people in north america, the local time will be different next week
14:57:59 and europe goes to summer time at the end of the month
14:58:16 so if you show up next week and no one is there, check the time
14:58:21 https://review.opendev.org/c/openstack/cinder/+/777470 could use another +2, have had folks waiting on this fix for a bit
14:58:22 :)
14:58:33 Oh, does DST happen this weekend?
14:58:48 yes afaik
14:59:13 does anyone understand the policies code?
14:59:25 rosmaita: Thanks for letting me know.
14:59:27 depends on what you mean by "understand"
14:59:29 i think some people have been doing some work on policies lately
14:59:41 heh
14:59:53 so, I'm looking into adding https://review.opendev.org/c/openstack/cinder/+/779541
15:00:02 which allows admins to see all the volume_admin_metadata
15:01:03 just noticed we are out of time
15:01:12 bug meeting in cinder channel now
15:01:13 is there a way to add that policy but make it optionally work, so that the current default of not showing all admin_metadata can stay in place
15:01:18 #endmeeting
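[Editor's note: on the closing policy question: oslo.policy supports exactly this pattern: register a rule whose default preserves today's behaviour, and let deployers opt in by overriding it. A hypothetical sketch; the rule name and wiring are illustrative, not the actual proposal in review 779541:]

```python
# Hypothetical oslo.policy rule gating visibility of volume admin
# metadata; defaulting to "!" (never allowed) preserves the current
# behaviour unless a deployer overrides the rule in policy.yaml.
from oslo_policy import policy

SHOW_ADMIN_METADATA = 'volume:show_admin_metadata'

volume_admin_metadata_policies = [
    policy.DocumentedRuleDefault(
        name=SHOW_ADMIN_METADATA,
        check_str='!',  # deployers can override to, e.g., rule:admin_api
        description='Show volume admin metadata in volume detail views.',
        operations=[{'method': 'GET', 'path': '/volumes/{volume_id}'}],
    ),
]

# In the API view code, something like the following (method name per
# cinder's RequestContext; verify) would decide whether to add the field:
#     if context.authorize(SHOW_ADMIN_METADATA, fatal=False):
#         volume_ref['admin_metadata'] = ...
```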