opendevreview | Takashi Kajinami proposed openstack/nova master: doc: Use dnf instead of yum https://review.opendev.org/c/openstack/nova/+/938496 | 01:39 |
opendevreview | huanhongda proposed openstack/nova master: Rollback live migration when virt driver failed https://review.opendev.org/c/openstack/nova/+/938154 | 11:21 |
dansmith | artom: gibi sean-k-mooney melwitt do you want to jump on a high-bandwidth chat today about the tpm stuff? | 14:43 |
artom | Was just about to say... | 14:44 |
sean-k-mooney | im sure i can whenever | 14:44 |
sean-k-mooney | im almost done my first pass on the vfio-variant driver spec | 14:44 |
sean-k-mooney | (the first one) | 14:44 |
dansmith | I'm pretty open today | 14:44 |
artom | The ACL vs make the Barbican secret owned by the Nova user thing is kind of a red herring, in the sense that it's an implementation detail that achieves the exact same thing (mostly): the Nova service user can read the Barbican secret. What we're really debating is - do we give access to the Nova user unconditionally, or do we make it opt-in for the user. | 14:45 |
artom | For the human user. | 14:46 |
sean-k-mooney | well there is that topic | 14:46 |
sean-k-mooney | but also should we change how the vtpm secret is stored in libvirt so we can read it back | 14:46 |
dansmith | well, or if we use libvirt for it | 14:47 |
dansmith | yeah | 14:47 |
artom | That would make it a non-starter for rolling upgrades, so I thought we'd dropped that idea | 14:47 |
sean-k-mooney | and i think we should check if libvirt can already copy it if it's not marked as private, for what it's worth | 14:47 |
dansmith | and one thing we could discuss is how much of the implementation is the same (i.e. the copying of a secret from source, vs. the dest fetching it) on each thing | 14:47 |
dansmith | sean-k-mooney: I thought the spec said that wasn't implemented yet? | 14:47 |
sean-k-mooney | artom: im not convinced we should be upgrading new vms to support this without requiring them to be cold migrated or hard rebooted or something like that | 14:47 |
sean-k-mooney | dansmith: it's definitely not supported for private secrets | 14:48 |
sean-k-mooney | but the docs for this are slightly odd so i would like to check what happens if we dont make them private | 14:48 |
artom | sean-k-mooney, OK, we definitely need a call then :P | 14:48 |
dansmith | artom: no, I'm not saying we should require the upgrade path, remember, I just said your reason didn't sound good enough to exclude it.. but if we exclude because of a better reason it's fine | 14:48 |
artom | Oh right. | 14:49 |
sean-k-mooney | fyi "The value of the secret must not be revealed to any caller of libvirt, nor to any other node." the bit after the comma makes me wonder what happens for non private secrets | 14:49 |
dansmith | sean-k-mooney: that sounds like it never tells anyone | 14:50 |
sean-k-mooney | correct if it's private | 14:50 |
sean-k-mooney | but the default is private=no | 14:50 |
sean-k-mooney | we explicitly mark only the vtpm secret as private=yes | 14:50 |
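For context, the private flag under discussion lives in the secret XML that gets passed to libvirt's virSecretDefineXML; with private='yes', libvirt refuses to hand the value back, which is what blocks reading the vTPM secret for migration. A minimal sketch of how that XML differs (illustrative helper, not nova's actual code):

```python
# Illustrative sketch only, not nova's real implementation: build the
# <secret> XML a driver would pass to virSecretDefineXML. The
# 'private' attribute is the knob under discussion; private='yes'
# means libvirt will never reveal the value to any caller.
from xml.sax.saxutils import escape

def secret_xml(uuid, vm_name, private=True):
    flag = "yes" if private else "no"
    return (
        f"<secret ephemeral='no' private='{flag}'>\n"
        f"  <uuid>{escape(uuid)}</uuid>\n"
        f"  <description>vTPM secret for {escape(vm_name)}</description>\n"
        f"  <usage type='vtpm'>\n"
        f"    <name>{escape(vm_name)}_vtpm</name>\n"
        f"  </usage>\n"
        f"</secret>"
    )
```

The experiment being proposed is essentially: define the secret with private='no' and see whether libvirt can then carry it across on live migration.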
sean-k-mooney | i asked internally on the virt channel late last night but no one responded | 14:51 |
sean-k-mooney | artom: how much work would it be for you to push up a DNM patch to disable the api block and test live migration with vtpm and just remove setting private=yes? | 14:52 |
sean-k-mooney | do you have a similar patch already? | 14:53 |
artom | Not too much, I think? | 14:53 |
artom | I can piggy back off https://review.opendev.org/c/openstack/nova/+/925771 | 14:53 |
artom | Wait, no, I explicitly _don't_ want to piggy off that, because it has all the fetch-from-barbican-define-in-libvirt-on-the-dest mechanics | 14:54 |
sean-k-mooney | well you could just comment out https://review.opendev.org/c/openstack/nova/+/925771/6/nova/virt/libvirt/driver.py#11512 | 14:54 |
artom | And we want to test do-nothing-except-make-the-secret-public-and-see-if-it-magically-works | 14:54 |
sean-k-mooney | where you are adding the download | 14:54 |
sean-k-mooney | yes exactly | 14:54 |
sean-k-mooney | oh actually no there is one other change | 14:55 |
sean-k-mooney | you need to also stop nova deleting the vtpm secret after the vm boots | 14:55 |
gibi | dansmith: artom: sure I can jump on a call | 14:55 |
sean-k-mooney | so make it public and dont delete it | 14:55 |
gibi | I just finished with the SRIOV vGPU call | 14:55 |
dansmith | vkw-ozjo-pap | 14:56 |
dansmith | meet.google.com/vkw-ozjo-pap | 14:56 |
sean-k-mooney | im going to grab a drink but ill join shortly | 14:57 |
artom | Coming, just on a call to the mechanic where my car is | 14:58 |
artom | 1.5 hours away from Montreal :( | 14:58 |
bauzas | #startmeeting nova | 16:01 |
opendevmeet | Meeting started Tue Jan 7 16:01:18 2025 UTC and is due to finish in 60 minutes. The chair is bauzas. Information about MeetBot at http://wiki.debian.org/MeetBot. | 16:01 |
opendevmeet | Useful Commands: #action #agreed #help #info #idea #link #topic #startvote. | 16:01 |
opendevmeet | The meeting name has been set to 'nova' | 16:01 |
bauzas | hi folks | 16:01 |
bauzas | who's around ? | 16:01 |
sean-k-mooney | o/ | 16:01 |
gibi | /o (partially) | 16:01 |
elodilles | o/ | 16:01 |
bauzas | (I'm partially here too but I need to run the meeting :) ) | 16:02 |
* bauzas likes doing three things at same time | 16:02 | |
bauzas | okay let's start and hopefully it will be quick | 16:03 |
bauzas | #topic Bugs (stuck/critical) | 16:03 |
bauzas | #info No Critical bug | 16:04 |
bauzas | #info Add yourself in the team bug roster if you want to help https://etherpad.opendev.org/p/nova-bug-triage-roster | 16:04 |
bauzas | any bugs people wanna raise ? | 16:04 |
Uggla | o/ | 16:04 |
fwiesel | o/ | 16:04 |
bauzas | looks not, moving on | 16:05 |
bauzas | #topic Gate status | 16:05 |
bauzas | #link https://bugs.launchpad.net/nova/+bugs?field.tag=gate-failure Nova gate bugs | 16:05 |
bauzas | #link https://etherpad.opendev.org/p/nova-ci-failures-minimal | 16:05 |
bauzas | #link https://zuul.openstack.org/builds?project=openstack%2Fnova&project=openstack%2Fplacement&branch=stable%2F*&branch=master&pipeline=periodic-weekly&skip=0 Nova&Placement periodic jobs status | 16:05 |
bauzas | nova-emulation is in the weeds | 16:05 |
bauzas | #info Please look at the gate failures and file a bug report with the gate-failure tag. | 16:06 |
bauzas | #info Please try to provide meaningful comment when you recheck | 16:06 |
bauzas | #topic Release Planning | 16:07 |
bauzas | a few things to mention here | 16:07 |
bauzas | #link https://releases.openstack.org/epoxy/schedule.html | 16:08 |
bauzas | #info Nova deadlines are set in the above schedule | 16:08 |
bauzas | #info Implementation review day is planned tomorrow | 16:08 |
bauzas | I'll send an email about it ^ | 16:08 |
bauzas | #action bauzas to notify about review day thru email | 16:08 |
bauzas | the other, also important : | 16:09 |
bauzas | #info Specs approval freeze planned for Thursday EOB | 16:09 |
bauzas | you are warned | 16:09 |
bauzas | I'm mostly burned today by spec reviews and I'll continue tomorrow | 16:09 |
bauzas | anything worth mentioning now ? | 16:09 |
s3rj1k | late hey to all, connection issues | 16:10 |
bauzas | moving on | 16:10 |
bauzas | #topic Review priorities | 16:11 |
bauzas | #link https://etherpad.opendev.org/p/nova-2025.1-status | 16:12 |
bauzas | nothing to mention, continuing | 16:12 |
bauzas | #topic Stable Branches | 16:12 |
bauzas | elodilles: happy new year | 16:12 |
elodilles | :) | 16:12 |
elodilles | happy new year too o/ | 16:12 |
elodilles | :) | 16:12 |
elodilles | speaking of that | 16:12 |
elodilles | i see not much activity on stable branches in the past weeks | 16:13 |
elodilles | but i'm not aware of any stable gate issue | 16:13 |
elodilles | (maybe because of the above reason o:)) | 16:14 |
elodilles | #info stable branch status / gate failures tracking etherpad: https://etherpad.opendev.org/p/nova-stable-branch-ci | 16:14 |
elodilles | please if you see any issue, add there ^^^ | 16:14 |
elodilles | that's all about stable branches from me | 16:15 |
bauzas | cool | 16:15 |
bauzas | #topic vmwareapi 3rd-party CI efforts Highlights | 16:15 |
bauzas | fwiesel: heya, happy new year too | 16:16 |
fwiesel | Hi, happy new year. No updates from my side. | 16:16 |
bauzas | ++ | 16:16 |
bauzas | #topic Open discussion | 16:16 |
bauzas | one item in the agenda, quite on time given thursday's deadline | 16:16 |
bauzas | (bauzas) Specless approval for https://blueprints.launchpad.net/nova/+spec/image-metadata-props-weigher ? | 16:16 |
bauzas | tl;dr: I'm hereby asking for an exception to write a spec | 16:17 |
bauzas | this is just a weigher, we have everything we already need | 16:17 |
bauzas | in the past, some filters and weighers required a spec, some others not | 16:17 |
bauzas | so I'm leaning towards asking you to grant this blueprint as it is | 16:18 |
gibi | bauzas: fine by me to have it specless | 16:18 |
bauzas | tbc, we have Instance objects in the HostState | 16:22 |
sean-k-mooney | we do | 16:23 |
sean-k-mooney | for affinity/anti affintiy filters | 16:23 |
sean-k-mooney | and as a result the weighers have them too for the same reason | 16:23 |
bauzas | the weigher design will be 'lookup at each of the instances from the host, see their imagemeta from the instance and compare with the request' | 16:24 |
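That design can be sketched roughly as follows (a hypothetical standalone function for illustration, not nova's actual weigher classes): per host, count how many image properties of the already-running instances match the request's image metadata, and use that count as the raw weight.

```python
# Hypothetical sketch of the proposed weigher logic; names and the
# flat-dict representation of image metadata are illustrative, not
# nova's actual objects.

def image_props_weight(host_instances_meta, requested_props):
    """host_instances_meta: list of image-property dicts, one per
    instance already on the host (from each instance's cached image
    metadata); requested_props: image properties of the request.
    Returns a raw match count that a weigher framework would then
    normalize across hosts."""
    score = 0
    for props in host_instances_meta:
        # one point per property whose value matches the request
        score += sum(
            1 for k, v in requested_props.items() if props.get(k) == v
        )
    return score
```

For example, a host already running two instances with `os_type=windows` would outscore an empty host for a request whose image also sets `os_type=windows`.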
sean-k-mooney | and the instance objects have the image metadata available | 16:24 |
sean-k-mooney | the only concern is: is that lazy loaded today or not | 16:24 |
bauzas | the instances list ? | 16:24 |
sean-k-mooney | not the instance list | 16:24 |
sean-k-mooney | the image metadata, which is constructed from the cached copy in the instance_system_metadata table | 16:25 |
sean-k-mooney | i dont know if we load that in the host manager | 16:25 |
sean-k-mooney | we probably do, just wanted to call that out | 16:25 |
bauzas | ah good point | 16:25 |
bauzas | we can't load that later | 16:26 |
bauzas | as we're on the scheduler | 16:26 |
bauzas | or we need to target the cell db | 16:26 |
sean-k-mooney | https://github.com/openstack/nova/blob/master/nova/objects/instance.py#L70-L71 | 16:26 |
sean-k-mooney | we are good, the instance system metadata is in the default fields | 16:26 |
sean-k-mooney | so we have that already | 16:26 |
bauzas | excellent | 16:27 |
bauzas | thanks for checking | 16:27 |
bauzas | are we then ok with the design ? | 16:27 |
sean-k-mooney | i think so. i captured most of my feedback in the blueprint already | 16:28 |
sean-k-mooney | so im more or less ok with proceeding to the implementation review based on that design | 16:28 |
bauzas | ++ | 16:29 |
bauzas | okay, then let's approve it as specless | 16:29 |
bauzas | #approved https://blueprints.launchpad.net/nova/+spec/image-metadata-props-weigher approved as specless | 16:30 |
bauzas | doh | 16:30 |
bauzas | #agreed https://blueprints.launchpad.net/nova/+spec/image-metadata-props-weigher approved as specless | 16:30 |
bauzas | that was it for me | 16:30 |
bauzas | anything else to mention ? | 16:30 |
sean-k-mooney | if there is time/energy i have one topic | 16:30 |
bauzas | I don't have energy but we have time | 16:31 |
sean-k-mooney | https://blueprints.launchpad.net/nova/+spec/host-discovery-distributed-lock | 16:31 |
sean-k-mooney | we talked about this at the ptg | 16:31 |
sean-k-mooney | and there is a related spec https://review.opendev.org/c/openstack/nova-specs/+/936389 | 16:32 |
sean-k-mooney | tldr they would like to be able to enable the discover host periodic on multiple schedulers at a time | 16:32 |
sean-k-mooney | so i dont actually like the proposal as written | 16:32 |
sean-k-mooney | but i did a short poc of one of the alternatives | 16:32 |
sean-k-mooney | https://review.opendev.org/c/openstack/nova/+/938523 | 16:32 |
sean-k-mooney | i do think this could be a reasonable minor improvement | 16:33 |
sean-k-mooney | the change is basically just this https://review.opendev.org/c/openstack/nova/+/938523/2/nova/scheduler/manager.py#113 | 16:33 |
sean-k-mooney | what i wanted to know is, if we took this super simple approach of doing leader election in the periodic and returning if not the leader | 16:34 |
sean-k-mooney | would this be a spec/blueprint or bugfix? | 16:34 |
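The approach in the PoC can be reduced to something like this (a simplified sketch; the real change hooks into the scheduler manager's periodic task and gets liveness from the servicegroup API):

```python
# Best-effort leader election sketch: every scheduler sorts the
# currently-"up" nova-scheduler services by host name and only runs
# the discover_hosts periodic if its own host sorts first. Sorting is
# just for determinism; every scheduler computes the same list, so
# (barring liveness flapping) only one of them does the work.

def should_run_discovery(my_host, services):
    """services: iterable of (host, is_up) pairs for all
    nova-scheduler services."""
    up_hosts = sorted(host for host, is_up in services if is_up)
    return bool(up_hosts) and up_hosts[0] == my_host
```

Worst case, two schedulers briefly both think they sort first and run discovery concurrently, which is harmless apart from noisy logs, exactly the situation operators already tolerate today.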
bauzas | hmmmm | 16:35 |
sean-k-mooney | im asking because of the time constraint for the first two options | 16:35 |
gibi | it has no API impact so I lean towards specless bp | 16:35 |
s3rj1k | specless from me if that counts :) | 16:35 |
sean-k-mooney | s3rj1k: input is always welcome :) | 16:36 |
gibi | sean-k-mooney: but you can formulate it as a bug as nova allows enabling the periodic in multiple schedulers today | 16:36 |
s3rj1k | (plus it somewhat covered in alternatives section of mentioned above spec) | 16:36 |
gibi | sean-k-mooney: without a safety net | 16:36 |
sean-k-mooney | ya so my perspective was: if we are taking the heavyweight approach of adding a distributed lock manager, that needed a spec because it's complex and a large change | 16:37 |
bauzas | can't we just add a config option ? | 16:37 |
sean-k-mooney | if we do a very very small change to just do best-effort leader election it might be a bug | 16:37 |
bauzas | I understand the reasoning, magically nova will elect a new leader | 16:37 |
sean-k-mooney | bauzas: we could also add a config option yes | 16:38 |
bauzas | but I'm afraid this periodic could silently run somewhere else without the op noticing it | 16:38 |
dansmith | omg, DLM? | 16:38 |
* dansmith reads up | 16:38 | |
sean-k-mooney | bauzas: so you have to opt into the periodic | 16:38 |
bauzas | and you know, brainsplits and all the likes happen | 16:38 |
sean-k-mooney | by setting a config option to specify the interval today | 16:38 |
bauzas | that's my point | 16:38 |
sean-k-mooney | so this would just be a change to the behaviour when you opt in | 16:39 |
bauzas | -1 disables the periodic IIRC | 16:39 |
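For reference, the opt-in being discussed is the existing interval option on the scheduler (the value shown is illustrative):

```ini
[scheduler]
# Default is -1, which disables the discover_hosts periodic entirely;
# any positive value enables it, running every N seconds.
discover_hosts_in_cells_interval = 300
```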
sean-k-mooney | yep and i dont want to change that | 16:39 |
bauzas | but I wouldn't trust python for electing my leader | 16:39 |
bauzas | particularly the sorted command | 16:39 |
sean-k-mooney | so today the db enforces that bad things dont happen | 16:39 |
sean-k-mooney | you just get exceptions in the log | 16:39 |
sean-k-mooney | if multiple race that is | 16:40 |
sean-k-mooney | which is annoying for operators | 16:40 |
dansmith | if you want to automate host discovery active-active, you can do that yourself with nova-manage and your own DLM right? | 16:40 |
sean-k-mooney | yes | 16:40 |
bauzas | yes, that's why I don't like that approach | 16:41 |
sean-k-mooney | that's a pain in k8s however, which s3rj1k cares about | 16:41 |
bauzas | you're considering that DLM is just not needed because sorted exists | 16:41 |
gibi | definitely having a full DLM in nova just for this is way overkill | 16:41 |
sean-k-mooney | yep which is why i did not like the dlm/tooz approach | 16:41 |
bauzas | the SG API also isn't good at doing live healthchecks | 16:41 |
bauzas | there could be a reasonable amount of time where nova wouldn't see a node done | 16:42 |
bauzas | gone* | 16:42 |
sean-k-mooney | which is fine | 16:42 |
gibi | today if you run the periodic in multiple schedulers you burn power unnecessarily but you don't break nova DB just get exceptions. | 16:42 |
sean-k-mooney | we can miss running the periodic for a protracted period of time without bad impacts | 16:42 |
bauzas | we could just let CONF.scheduler.discover_hosts_in_cells_interval be mutable and leave DLMs to manage nova-scheduler A/Ps instead of us | 16:43 |
sean-k-mooney | nope | 16:44 |
dansmith | There are also things we could do in nova that don't require a DLM if we really care | 16:44 |
sean-k-mooney | that does not work for k8s deployments which is the motivating usecase | 16:44 |
dansmith | like the periodic could say "am I the oldest-started nova-scheduler service that is currently up? If so, then run the discovery, if not, then don't" | 16:44 |
sean-k-mooney | dansmith: right im proposing not using a DLM | 16:44 |
dansmith | then only one of them would run it until you shut down the oldest and then the next one would do it.. sort of lazy-consensus election | 16:44 |
sean-k-mooney | dansmith: that's basically what my patch does | 16:44 |
gibi | dansmith: exactly, that is very close to what sean-k-mooney proposes | 16:44 |
bauzas | dansmith: I don't like to trust the service group API for electing my leader | 16:44 |
dansmith | oh, then.. yeah :D | 16:45 |
dansmith | bauzas: it's not really trust, it's just optimization | 16:45 |
bauzas | sorted was one option, the oldest is another alternative | 16:45 |
gibi | bauzas: don't think about as leader election, it is more like limiting the number of discover host runs based on an input | 16:45 |
dansmith | I see sean-k-mooney's patch now, sorry I'm catching up | 16:45 |
sean-k-mooney | sorted is just for determinism | 16:45 |
sean-k-mooney | dansmith: no worries | 16:45 |
dansmith | determinism .. yeah, let's do something like that if we care, for sure | 16:46 |
gibi | bauzas: it could be a last run timestamp if we don't like age | 16:46 |
gibi | bauzas: but as we have age we have a good set of values to pick a single scheduler, the min / max in that value, to be the only one to do the work | 16:46 |
bauzas | well, | 16:46 |
sean-k-mooney | so for today what i really want to know is: spec, bug or specless blueprint, so i know if we have till thursday or m3 to finalise the design and implementation | 16:47 |
bauzas | there are possibilities where a nova-scheduler could see itself being the oldest while another one, running 2 secs later, could also see it as the oldest | 16:47 |
sean-k-mooney | that's why im not using time | 16:47 |
sean-k-mooney | im sorting on the value of host | 16:48 |
bauzas | I know | 16:48 |
bauzas | but that's still the same | 16:48 |
dansmith | bauzas: sort by id, filter by up, that should be stable | 16:48 |
bauzas | some host could see itself as up while some other could too, 1 sec later | 16:48 |
dansmith | if it does, then it's flapping from the view of the operator, which I think is not likely to be unnoticed | 16:48 |
sean-k-mooney | also people run with this in production without this protection or any kind of arbitration today. | 16:49 |
bauzas | dansmith: if you are okay with operators unnoticing which nova-scheduler runs where, then ok | 16:49 |
sean-k-mooney | i.e. they just ignore the logs when there is a collision | 16:49 |
dansmith | bauzas: which nova scheduler runs...host discovery? | 16:49 |
bauzas | yes | 16:49 |
sean-k-mooney | why would they care? | 16:50 |
dansmith | isn't that the point of this? instead of run-everywhere it's run-one-place, automatically managed? | 16:50 |
dansmith | right now, they can all run it in parallel all the time, if you care, but that's expensive | 16:50 |
gibi | ^^ yepp | 16:50 |
dansmith | I thought the point was to make it less expensive and automatically decide that (hopefully) only one does it each time.. seems fine to me | 16:50 |
bauzas | OK, then let's go for the proposal | 16:50 |
dansmith | worst case, two do it, no problem | 16:50 |
sean-k-mooney | yep ^ | 16:51 |
bauzas | well, you're right | 16:51 |
bauzas | two running at the same time don't cause split brain | 16:51 |
bauzas | because of the periodic itself | 16:51 |
sean-k-mooney | they dont even cause an error if you have not added any hosts | 16:51 |
dansmith | yeah | 16:51 |
bauzas | okay, then to answer sean-k-mooney's question, I'm fine with it being specless | 16:51 |
bauzas | I actually would prefer it to be specless please | 16:52 |
bauzas | as I wouldn't want to step into DLM muds and leader elections | 16:52 |
sean-k-mooney | ok i think ill file a new blueprint for it and update the description | 16:52 |
sean-k-mooney | if we are ok with that, are we ok to approve that async? | 16:52 |
bauzas | sure | 16:52 |
sean-k-mooney | ill do it now and ping outside of the meeting | 16:53 |
bauzas | #agreed https://review.opendev.org/c/openstack/nova/+/938523 can be filed as a specless blueprint | 16:53 |
s3rj1k | sean-k-mooney: thanks for spending time on this | 16:53 |
bauzas | cool thanks | 16:53 |
sean-k-mooney | s3rj1k: no worries, i meant to do it before going on pto, i just didnt finish hacking on it until i got back on monday | 16:54 |
sean-k-mooney | ok thats it from me | 16:54 |
bauzas | cool | 16:55 |
bauzas | thanks all then | 16:55 |
bauzas | #endmeeting | 16:55 |
opendevmeet | Meeting ended Tue Jan 7 16:55:58 2025 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) | 16:55 |
opendevmeet | Minutes: https://meetings.opendev.org/meetings/nova/2025/nova.2025-01-07-16.01.html | 16:55 |
opendevmeet | Minutes (text): https://meetings.opendev.org/meetings/nova/2025/nova.2025-01-07-16.01.txt | 16:55 |
opendevmeet | Log: https://meetings.opendev.org/meetings/nova/2025/nova.2025-01-07-16.01.log.html | 16:55 |
elodilles | thanks o/ | 16:56 |
s3rj1k | thanks | 16:56 |
gibi | thanks folks | 16:58 |
sean-k-mooney | s3rj1k: bauzas: i think https://blueprints.launchpad.net/nova/+spec/distributed-host-discovery covers it | 17:22 |
s3rj1k | yea, looks good | 17:26 |
luvn | hey guys! in openstack, is it possible — through some workaround — to detach a boot volume to extend and/or retype it? | 17:40 |
luvn | i found an old spec related to this but as far as i know it never moved forward — https://specs.openstack.org/openstack/nova-specs/specs/stein/approved/detach-boot-volume.html | 17:41 |
luvn | so i was just wondering if we have a workaround for that | 17:42 |
dansmith | I want to say you might be able to shelve/offload a BFV instance and take action on the volume, but don't quote me | 17:45 |
sean-k-mooney | dansmith: i think when shelved it's in the reserved state, im not sure if cinder will allow extend then but perhaps. | 17:51 |
sean-k-mooney | luvn: but no, we dont allow detach of the root volume for bfv guests | 17:51 |
dansmith | yeah idk, but I think it's detached from the host which definitely keeps it pinned | 17:51 |
sean-k-mooney | i think you can do retype without special workarounds | 17:51 |
sean-k-mooney | im also not sure if you can just do live extend | 17:52 |
sean-k-mooney | you can for non root volumes | 17:52 |
sean-k-mooney | luvn: i assume you tried just extending via cinder and it was unhappy | 17:52 |
dansmith | retyping often means a different connection_info right? | 17:54 |
opendevreview | Douglas Viroel proposed openstack/nova master: WIP - Add support for showing scheduler_hints in server details https://review.opendev.org/c/openstack/nova/+/938604 | 17:55 |
sean-k-mooney | ya but that calls the swap volume api right | 17:55 |
luvn | that's right, i did a quick experiment on devstack and was unsuccessful :( | 17:55 |
luvn | you can't really retype/resize the volume when it's reserved | 17:55 |
dansmith | sean-k-mooney: idk, I didn't think so | 17:55 |
luvn | (which is the state shelving brings it to) | 17:55 |
sean-k-mooney | dansmith: i dont really know of a reason that the root volume should be different from a data volume with respect to resize/retype | 17:56 |
sean-k-mooney | im not saying it's not different, just unsure what the reason would be for them to behave differently | 17:56 |
dansmith | sean-k-mooney: well, I guess I don't really know the story here.. I thought swap_volume was mostly for snapshots and things but maybe it should/would work for retype | 17:56 |
dansmith | I guess the detach-and-reattach-elsewhere is the thing that is most special about BFVs | 17:57 |
sean-k-mooney | right, because we dont want to allow an instance to not have an associated root disk | 17:59 |
sean-k-mooney | but retype or extend should not impact that, so im not sure either if it can work or not | 17:59 |
sean-k-mooney | shelve is worth a shot but i have never tried, if im being honest | 18:00 |
dansmith | sounds like luvn has | 18:00 |
sean-k-mooney | oh, because its not allowed in reserved state | 18:01 |
sean-k-mooney | perhaps its worth asking cinder about why that is. | 18:01 |
luvn | yup, that's right | 18:01 |
dansmith | luvn: and to be clear you've tried with it attached/running as well right? | 18:01 |
luvn | dansmith: i did! i don't really remember the output rn, but it complained about the volume status as well | 18:02 |
dansmith | ack | 18:03 |
dansmith | that's consistent with how I thought it worked | 18:03 |
dansmith | (or didn't) | 18:03 |
luvn | okay, so with the instance running and the volume attached: | 18:04 |
sean-k-mooney | that reminds me that we still have not completed https://review.opendev.org/c/openstack/nova/+/873560 | 18:04 |
sean-k-mooney | https://blueprints.launchpad.net/nova/+spec/assisted-volume-extend | 18:04 |
luvn | resize: Failed to set volume size: Volume is in in-use state, it must be available before size can be extended | 18:04 |
luvn | retype: Failed to set volume type: Invalid volume: Retype needs volume to be in available or in-use state, not be part of an active migration or a consistency group, requested type has to be different that the one from the volume, and for in-use volumes front-end qos specs cannot change. | 18:04 |
luvn | retyping appears to be more "loose" than resize, but still, can't really perform it | 18:05 |
sean-k-mooney | what storage backend are you using out of interest | 18:05 |
luvn | i'm using ceph | 18:06 |
sean-k-mooney | ok ya, that should be the most flexible | 18:06 |
sean-k-mooney | im kind of torn, i guess this would be a new feature either on the cinder or nova side | 18:08 |
sean-k-mooney | for nova to enable it we would need a new instance action | 18:08 |
sean-k-mooney | im not sure what would be required on the cinder side to make it work if we did it from cinder instead, or if that is even possible | 18:08 |
sean-k-mooney | there is a complex back and forth between nova and cinder and the storage backend to orchestrate, depending on how the volume is attached and if the instance is running, so it wouldnt be a 1 liner to enable unfortunately | 18:11 |
luvn | oh, i see | 18:11 |
sean-k-mooney | for example for nfs, if the vm is running nova needs to grow the backing file on the nfs share (it cant be done by cinder) | 18:11 |
sean-k-mooney | for ceph that's pretty easy, you just need to ask ceph to grow it | 18:12 |
sean-k-mooney | for iscsi/fibre channel, cinder would have to ask the storage backend to extend it, and that may or may not be visible to the guest before the iscsi export is remounted | 18:12 |
sean-k-mooney | the thing is we do support extending cinder volumes when they are not the root volume (i dont recall in which cases it's offline vs online) | 18:14 |
luvn | now i see why ceph is by far the most flexible heh | 18:14 |
luvn | in any case, i sent a message on cinder's channel; let's see if we can get a better insight into why these limitations exist | 18:14 |
luvn | i'm legit curious | 18:14 |
dansmith | likely because three connection mechanisms require three different paths, among other things :) | 18:15 |
dansmith | although in reserved state that should be not a problem I think | 18:15 |
sean-k-mooney | right, reserved means for one that the vm is not running, so cinder can always do the operation itself | 18:16 |
sean-k-mooney | the only reason nova has to grow it for nfs when the vm is running is qemu has locked the file on the share | 18:16 |
sean-k-mooney | https://github.com/openstack/nova/blob/master/releasenotes/notes/nova-support-attached-volume-extend-88ce16ce41aa6d41.yaml | 18:18 |
luvn | so i guess there's a world where it's a matter of cinder allowing these operations to be performed when the volume is reserved | 18:19 |
luvn | (of course, if there are no impediments, etc) | 18:19 |
sean-k-mooney | that release note might be out of date and does not mention BFV, but it says "only the libvirt compute driver with iSCSI and FC volumes | 18:19 |
sean-k-mooney | supports the online volume size change." | 18:19 |
sean-k-mooney | we added support for extending in-use rbd volumes later | 18:21 |
sean-k-mooney | https://github.com/openstack/nova/commit/eab58069ea6afba875fb0a7d7ac68c7e83ebf14a | 18:21 |
sean-k-mooney | that was stein | 18:21 |
sean-k-mooney | so everything is in place for ceph for additional cinder data volumes when the vm is running | 18:27 |
sean-k-mooney | there must just be something missing for BFV but im not sure what that could be | 18:28 |
opendevreview | Artom Lifshitz proposed openstack/nova master: DNM: Try vTPM live migration with non-private, non-eph secret https://review.opendev.org/c/openstack/nova/+/938613 | 19:56 |
opendevreview | Artom Lifshitz proposed openstack/nova master: DNM: Try vTPM live migration with non-private, non-eph secret https://review.opendev.org/c/openstack/nova/+/938613 | 20:34 |
Generated by irclog2html.py 2.17.3 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!