rosmaita | #startmeeting cinder | 14:01 |
---|---|---|
opendevmeet | Meeting started Wed Aug 11 14:01:27 2021 UTC and is due to finish in 60 minutes. The chair is rosmaita. Information about MeetBot at http://wiki.debian.org/MeetBot. | 14:01 |
opendevmeet | Useful Commands: #action #agreed #help #info #idea #link #topic #startvote. | 14:01 |
opendevmeet | The meeting name has been set to 'cinder' | 14:01 |
rosmaita | #topic roll call | 14:01 |
simondodsley | hi | 14:01 |
sfernand | hi | 14:01 |
walshh_ | hi | 14:01 |
fabiooliveira | hi | 14:01 |
eharney | hi | 14:01 |
TusharTgite | hi | 14:01 |
shoffmann | Hi | 14:01 |
enriquetaso | hi | 14:02 |
tosky | hi | 14:02 |
rosmaita | good turnout | 14:03 |
rosmaita | #link https://etherpad.opendev.org/p/cinder-xena-meetings | 14:03 |
rosmaita | ok, let's get started | 14:03 |
rosmaita | #topic announcements | 14:03 |
rosmaita | os-brick Xena release next week | 14:04 |
rosmaita | 2 things happening in os-brick, both around the nvmeof agent | 14:04 |
rosmaita | #link https://review.opendev.org/c/openstack/os-brick/+/802691 | 14:04 |
rosmaita | nvmeof multiple volume support | 14:04 |
rosmaita | #link https://review.opendev.org/c/openstack/os-brick/+/800014 | 14:04 |
rosmaita | those are the highest priority reviews right now | 14:05 |
rosmaita | so, please review accordingly | 14:05 |
rosmaita | also, would be good if each of the teams working on the two patches mentioned reviewed each other's work | 14:05 |
rosmaita | and, of course, it would be good if cinder people in general review the two mentioned patches | 14:06 |
rosmaita | i think that's it for announcements | 14:07 |
eharney | 800014 shows merge conflict, fyi | 14:07 |
rosmaita | yeah, i noticed that | 14:07 |
rosmaita | pro tip for driver developers: merge conflicts are a real turn-off | 14:07 |
e0ne | :) | 14:07 |
rosmaita | important to get those resolved quickly or people won't review | 14:08 |
rosmaita | ok, that's all for announcements, i think | 14:08 |
rosmaita | #topic How to solve retype error with nas_secure_file_operations | 14:09 |
rosmaita | shoffmann: that's you | 14:09 |
shoffmann | Hi, thanks. | 14:09 |
shoffmann | We saw the issue that retype of available volumes fails when using an nfs share, if nas_secure options are used. | 14:09 |
shoffmann | meaning root is not allowed to write to that share. | 14:10 |
shoffmann | #link https://bugs.launchpad.net/cinder/+bug/1938196 | 14:10 |
shoffmann | My question is, how we can solve this in a nice way. | 14:10 |
eharney | my -2 was against adding a new config option, so i think that no longer applies | 14:11 |
shoffmann | use the existing nas_secure_file_operations option, but also in volume/manager.py (where it is currently not defined for all backends), or introduce a new option. | 14:11 |
shoffmann | Maybe there are also other ideas. | 14:11 |
shoffmann | #link https://review.opendev.org/c/openstack/cinder/+/802882 | 14:11 |
shoffmann | Right now I check if the option is defined, but maybe this is a bit ugly, because the nas_secure option is related to some backend types only. | 14:12 |
eharney | i'm not sure that the current patch will work in all scenarios | 14:12 |
eharney | IIRC root access for _qemu_img_info is needed when volumes are attached, regardless of the nas_secure options | 14:13 |
eharney | (because they may be owned by the qemu user via nova) | 14:13 |
shoffmann | In our setup, we configured nova to not change the owner of the file. Not sure if it is possible to enable nas_secure on an NFS share and let nova change the owner. | 14:14 |
shoffmann | Because at NetApp only one user and one group can be allowed. | 14:14 |
shoffmann | So we use cinder:nova | 14:14 |
eharney | this code is in the generic nfs driver | 14:14 |
eharney | years ago i think we generally felt that the nas_secure options weren't fully implemented correctly. do we know if there are other fixes needed like this to successfully use nas_secure? | 14:15 |
shoffmann | Till now we didn't see other issues. But maybe others know more. | 14:16 |
shoffmann | Is it possible for generic/other nfs drivers to define nas_secure with many allowed users? | 14:17 |
enriquetaso | I haven't seen any other bug related to nas_secure, but I'll check | 14:17 |
eharney | enriquetaso: there have been many | 14:17 |
eharney | yes, i think generic nfs configurations commonly do that | 14:18 |
eharney | these changes are also risky because there's a matrix of configuration options (two nas_secure* options), the generic nfs driver, netapp nfs drivers, whether you configure root_squash, etc.... it's hard to know that a change like this won't break something (which i think it does) | 14:18 |
rosmaita | what was the new option (that eric doesn't like)? | 14:19 |
shoffmann | Something like "secure_file_operation" or so. | 14:20 |
shoffmann | Adding a NetApp specific option also sounds bad. | 14:20 |
rosmaita | what was it going to do that you couldn't do with the current options? | 14:20 |
eharney | the purpose of the new option overlapped with the existing option | 14:20 |
rosmaita | and yea, we want to avoid vendor-specific options | 14:21 |
enriquetaso | On the reported bug side: i think i can start tagging all these bugs with 'Tags: nas_secure' or something like that to have them all together | 14:21 |
rosmaita | enriquetaso: ++ | 14:22 |
eharney | i wonder if the easier fix for the qemu_img_info case wouldn't be to just attempt running it both as root and non-root, or maybe look at the files being inspected to decide which to do | 14:22 |
eharney | rather than trying to tie it to config options | 14:22 |
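eharney's fallback idea above could be sketched roughly like this. This is a hypothetical helper, not cinder's actual code: it inspects the ownership and mode of the volume file to decide whether `qemu-img info` needs to run as root, instead of consulting the nas_secure_file_operations config option. The `service_user` name and the permission heuristic are assumptions for illustration.

```python
# Hedged sketch (not the real cinder implementation): decide whether
# qemu-img info must be run as root by looking at the file itself,
# rather than at config options.
import os
import pwd


def needs_root(path, service_user="cinder"):
    """Return True if `path` likely isn't readable as the service user,
    so the caller should invoke qemu-img info with run_as_root=True."""
    st = os.stat(path)
    owner = pwd.getpwuid(st.st_uid).pw_name
    # If the file is owned by someone else (e.g. the qemu user while the
    # volume is attached via nova) and is not world-readable, escalate.
    if owner != service_user and not st.st_mode & 0o004:
        return True
    return False
```

A caller would then try the unprivileged path first and only escalate when `needs_root()` says so, which sidesteps the config-option matrix entirely.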
shoffmann | Are there use cases using the nas_secure_* options but allowing root to write to the nfs share? | 14:23 |
shoffmann | eharney I also can implement this | 14:23 |
shoffmann | same for _connect_device in volume/manager.py? | 14:23 |
eharney | we may need a list of use cases for all 4 combinations of the existing nas_secure options, i'm not sure if we even know that they are all valid combinations | 14:25 |
rosmaita | and what's the deal with 'auto'? seems like we would always be using True for both on fresh installs | 14:27 |
eharney | whether auto ends up as true or false depends mostly on how the permissions on the share are configured | 14:28 |
rosmaita | oh, ok | 14:28 |
eharney | we basically let deployers decide if they want to have NFS volume files owned as root or as a different user | 14:28 |
rosmaita | so that's what you were saying about inspect the files to decide what to do | 14:29 |
eharney | which is the decision that the nas_secure_file_permissions option is about | 14:29 |
eharney | the nas_secure_file_operations option is about whether to run commands as root via sudo or just as the cinder user | 14:29 |
eharney | yes | 14:29 |
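eharney's description of how 'auto' resolves can be sketched as follows. This is an illustrative approximation, not cinder's exact helper: the marker-file name and the fresh-install heuristic are assumptions, though they follow the behavior described above (fresh shares go secure, pre-existing data falls back to insecure for compatibility).

```python
# Hedged sketch of resolving a nas_secure option set to 'auto':
# the outcome depends on the state of the share, not on a fixed default.
import os


def resolve_nas_secure_auto(mount_point, marker=".cinderSecureEnvIndicator"):
    """Resolve 'auto' to True/False for a mounted NFS share.

    - If a marker file from a previous secure setup exists, stay secure.
    - On an empty (fresh) share, enable secure mode and drop the marker.
    - Otherwise (pre-existing data, no marker), fall back to insecure.
    """
    marker_path = os.path.join(mount_point, marker)
    if os.path.exists(marker_path):
        return True
    if not os.listdir(mount_point):  # nothing on the share yet
        open(marker_path, "w").close()
        return True
    return False
```

This is why rosmaita's "always True on fresh installs" intuition holds for new deployments, while upgraded deployments with existing volume files resolve to False.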
rosmaita | slightly off topic, we have a devstack-plugin-nfs-tempest-full zuul job ... i wonder how hard it would be to reconfigure it to test this change | 14:31 |
eharney | we could do that, but we'd also need to examine what coverage we get from tempest tests | 14:31 |
eharney | should not be hard to do, though | 14:31 |
rosmaita | good point about what exactly tempest is checking | 14:32 |
rosmaita | but it looks like we will really need a "live" test of this change to see what happens | 14:33 |
eharney | yeah, there are a lot of details about file permissions with volumes that are attached, migrating, etc | 14:33 |
shoffmann | then I will try to implement checking file permissions. If I should add/change tests in tempest, please tell me (e.g. at the change or bug) | 14:33 |
shoffmann | Or should I wait for what we will do with all the nfs share issues? | 14:34 |
rosmaita | i think the file checking permissions is a good start | 14:35 |
rosmaita | but it would be helpful if you think about the nfs share issues in general, especially because you work a lot with nfs | 14:35 |
rosmaita | anybody want to volunteer to see what tempest and/or cinder-tempest-plugin tests will be relevant for this change? | 14:36 |
rosmaita | shoffmann: could be helpful if you could list some scenarios where this bug surfaces as a start | 14:37 |
shoffmann | sure, I can help | 14:37 |
shoffmann | What is a good place for this? mailing list or next meeting? | 14:37 |
rosmaita | shoffmann: i think put some scenarios into an etherpad and then send a note to the mailing list | 14:38 |
enriquetaso | i need to get familiar with this, i'd like to group all the related bugs first and list the scenarios we are currently having, but I can help with the tempest coverage as well :) | 14:38 |
enriquetaso | rosmaita++ | 14:38 |
enriquetaso | sounds better | 14:38 |
rosmaita | but shoffmann we should continue to follow this at the cinder meetings too | 14:38 |
rosmaita | (it's not an either/or is what i mean) | 14:39 |
rosmaita | ok, thanks, let's move on | 14:39 |
sfernand | enriquetaso yes me too, but I can check the nas_secure option with the netapp nfs driver | 14:39 |
eharney | fyi all of this interacts w/ cinder-backup too | 14:39 |
rosmaita | eharney: good point! we don't want to break backups | 14:39 |
rosmaita | shoffmann: ^^ | 14:39 |
rosmaita | #topic Cinder matrix - Clarification on Snapshot Attachment | 14:40 |
rosmaita | walshh_: that's you | 14:40 |
walshh_ | Apologies for bringing this one up again. Differing opinions | 14:40 |
rosmaita | #link https://review.opendev.org/c/openstack/cinder/+/800015 | 14:40 |
walshh_ | This functionality is exposed via rpcapi (as a cinder backup enhancement) and not cinder API so maybe it should not be in the matrix at all. | 14:40 |
walshh_ | Only a couple of drivers can attach snapshots, most cannot. | 14:41 |
walshh_ | Just wondering if removing it might be the best option as it is certainly causing confusion with customers | 14:41 |
rosmaita | there was some confusion about this on the mailing list, too | 14:41 |
walshh_ | and confusing us also :-) | 14:42 |
rosmaita | i think you may be right about removing it from the matrix | 14:42 |
eharney | i think that makes sense, since it isn't actually a feature that is exposed to users | 14:42 |
simondodsley | If this is only for cinder-backup use then I will remove my +1 and agree that it be removed completely from the matrix | 14:42 |
walshh_ | And customers do not like to see x's against features | 14:42 |
rosmaita | maybe move it over somewhere in the driver development docs? so people at least know to think about it | 14:42 |
walshh_ | even if it is not possible to implement in a driver | 14:42 |
rosmaita | where by 'people' i mean driver developers | 14:42 |
walshh_ | sure I can do that if I am pointed in the correct direction | 14:43 |
rosmaita | ok, why don't you go ahead and revise your patch to just remove it, and if someone finds a good place to add a statement, they can tell you on the review | 14:44 |
walshh_ | so if there are no other strong opinions for it to remain, I will remove it | 14:44 |
rosmaita | sounds good ... if anyone has 2nd thoughts, leave comments on the review | 14:44 |
rosmaita | thanks helen | 14:44 |
walshh_ | thanks all | 14:44 |
rosmaita | #topic RBD fixes that we really need | 14:44 |
rosmaita | enriquetaso: you're up | 14:44 |
enriquetaso | I'd like to point out these two patches here that I think are ready for review. (1) Currently the Ceph backend fails when restoring non-Ceph volumes. | 14:45 |
enriquetaso | #link https://review.opendev.org/c/openstack/cinder/+/750782 | 14:45 |
enriquetaso | and.. | 14:45 |
enriquetaso | (2) When creating an encrypted volume from an encrypted image the volume gets truncated and it's inaccessible after the operation. | 14:45 |
rosmaita | that is a bad one | 14:45 |
enriquetaso | #link https://review.opendev.org/c/openstack/cinder/+/801522 | 14:45 |
enriquetaso | yes, i think rbd really needs both asap | 14:46 |
rosmaita | ok, reminder that RBD is a community driver, so it deserves extra love | 14:46 |
enriquetaso | :) | 14:46 |
enriquetaso | thanks! | 14:46 |
rosmaita | thanks sofia | 14:46 |
rosmaita | #topic Fix for PowerVC | 14:46 |
rosmaita | simondodsley: you have the floor | 14:46 |
simondodsley | Thanks - This is an XS patch that we need to merge so we can create an urgent backport to Wallaby. PowerVC will use Wallaby for its next release. PowerVC uses `provider_id` heavily and so we need it to be correctly associated with all volumes in consistency groups. | 14:46 |
simondodsley | https://review.opendev.org/c/openstack/cinder/+/803046 | 14:47 |
rosmaita | ok, reminder to all driver maintainers -- it is helpful to review other driver patches | 14:47 |
simondodsley | I try to :) | 14:48 |
rosmaita | (hint to anyone asking in #openstack-cinder for reviews) | 14:48 |
rosmaita | simondodsley: that wasn't aimed at you, i have noticed your name showing up on all sorts of reviews | 14:48 |
rosmaita | thanks simone | 14:48 |
rosmaita | sorry about the type | 14:49 |
rosmaita | typo | 14:49 |
simondodsley | lol | 14:49 |
rosmaita | #topic mypy inline comments | 14:49 |
rosmaita | i just saw these on https://review.opendev.org/c/openstack/cinder/+/802882/7/cinder/volume/manager.py | 14:49 |
rosmaita | iirc, pylint did this too for a while but we turned it off somehow | 14:49 |
rosmaita | just want to see what people's opinions are about what we as a team want here | 14:50 |
rosmaita | inline comments are certainly in your face and motivate fixing things | 14:50 |
rosmaita | so maybe leave them? | 14:51 |
rosmaita | ok, not hearing an outburst of opinions here, let's leave them for now | 14:51 |
eharney | fwiw, the comments on https://review.opendev.org/c/openstack/cinder/+/802882/7/cinder/volume/manager.py look kind of odd | 14:52 |
rosmaita | although, let's review and merge this patch that fixes the particular problem | 14:52 |
eharney | are they on the right lines? | 14:52 |
rosmaita | #link https://review.opendev.org/c/openstack/cinder/+/784453 | 14:52 |
rosmaita | i don't know, looking now | 14:52 |
Roamer` | personal opinion from somebody whose colleagues can sometimes get tired of "please also run this check before pushing to Gerrit": I like advice about improvements, and I like to keep my code as clean as possible, but then I suppose that's common among people gathered here :) | 14:52 |
eharney | the comments also seem to be reported twice | 14:53 |
rosmaita | eharney: line 980 doesn't seem to make sense, and the duplication is not nice | 14:54 |
eharney | i guess they appeared twice because there was a recheck and the job ran twice | 14:54 |
Roamer` | but yeah, in this particular case the line numbers in https://zuul.opendev.org/t/openstack/build/8b0af55849674691ac9b5c032ac2ec81 and the quoted "return True" statements do not seem to correspond to what is shown in Gerrit | 14:55 |
rosmaita | just as a side note, there's a patch listed in https://tiny.cc/CinderPriorities where mypy would have helped avert a bug | 14:55 |
rosmaita | so i encourage everyone to continue to review eharney's patches | 14:55 |
Roamer` | is there any chance mypy was running on the wrong files?! | 14:55 |
eharney | Roamer`: this could be an artifact of zuul doing a merge before the mypy run, need to check the logs for that | 14:55 |
Roamer` | eharney, oh, of course | 14:56 |
Roamer` | eharney, yeah, don't know why I didn't realize that when I fight with Zuul merges in our CI weekly :) | 14:56 |
Roamer` | hm, so this means that the line numbers will *almost certainly* be wrong | 14:56 |
rosmaita | that would seem to indicate that inline comments won't be helpful | 14:57 |
Roamer` | right | 14:57 |
eharney | i can't remember if zuul does that in the check queue or just the gate queue | 14:57 |
Roamer` | eharney, I think it has to do it in the check queue, too... I don't see how things would work without that (it is quite often that the Gerrit change is not made against the tip of the branch) | 14:58 |
Roamer` | (at least it does in our CI) | 14:58 |
eharney | i think it does: https://zuul.opendev.org/t/openstack/build/8b0af55849674691ac9b5c032ac2ec81/log/job-output.txt#334 | 14:58 |
eharney | because it's running on a merge commit of 802882 and not the commit itself... so yeah i'm not sure how the line number thing would actually work | 14:59 |
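The mismatch described here can be shown in miniature: when mypy reports line N against zuul's merge commit, N only lines up with the Gerrit patchset if the merge pulled in nothing above it. A small illustrative helper (not a real zuul or gerrit tool) that maps a line number from the merged file back to the patchset file:

```python
# Hedged sketch: map a line number reported against a merge commit back
# to the corresponding line in the original patchset file, using a
# textual diff. Only works when the merge was textually simple.
import difflib


def map_line(merged_lines, patchset_lines, merged_lineno):
    """Map a 1-based line number in the merged file to the patchset file,
    or return None if that line exists only in the merge result."""
    matcher = difflib.SequenceMatcher(None, merged_lines, patchset_lines)
    for a, b, size in matcher.get_matching_blocks():
        if a <= merged_lineno - 1 < a + size:
            return merged_lineno - a + b  # stays 1-based
    return None
```

Lines introduced by the merge itself map to None, which is exactly the case where an inline comment ends up attached to the wrong line in Gerrit.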
Roamer` | yeah, if the merge is kinda-sorta-trivial, then one could think of writing some tool that parses the Python AST and figures out the line number deltas, but even that could go wrong | 15:00 |
rosmaita | ok, looks like we should turn off the inline comments | 15:00 |
rosmaita | i will follow up on that | 15:00 |
rosmaita | all right, we are out of time | 15:01 |
rosmaita | everybody: os-brick reviews!!! | 15:01 |
Roamer` | BTW something that I didn't add to the etherpad (I thought it was too trivial, maybe I should have)..... what are the chances for a StorPool trivial performance improvement - implement revert-to-snapshot natively? https://review.opendev.org/c/openstack/cinder/+/680889 | 15:01 |
Roamer` | (and yes, I realize that I should do more reviews myself, sorry about that) | 15:01 |
rosmaita | :) | 15:01 |
rosmaita | seems ok ... file a launchpad BP because we are coming up on M-3 feature freeze soon | 15:02 |
rosmaita | that will help us prioritize | 15:02 |
rosmaita | thanks everyone! | 15:02 |
Roamer` | I did, it's at https://blueprints.launchpad.net/openstack/?searchtext=storpool-revert-to-snapshot | 15:02 |
Roamer` | but I only did it the day before :) | 15:02 |
Roamer` | and thanks | 15:02 |
rosmaita | ok, thanks i will make sure it's in the xena mix | 15:03 |
rosmaita | #endmeeting | 15:03 |
opendevmeet | Meeting ended Wed Aug 11 15:03:08 2021 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) | 15:03 |
opendevmeet | Minutes: https://meetings.opendev.org/meetings/cinder/2021/cinder.2021-08-11-14.01.html | 15:03 |
opendevmeet | Minutes (text): https://meetings.opendev.org/meetings/cinder/2021/cinder.2021-08-11-14.01.txt | 15:03 |
opendevmeet | Log: https://meetings.opendev.org/meetings/cinder/2021/cinder.2021-08-11-14.01.log.html | 15:03 |