opendevreview | David Hill proposed openstack/nova master: Inject passwords in a rescue call https://review.opendev.org/c/openstack/nova/+/884647 | 00:41 |
opendevreview | David Hill proposed openstack/nova master: Inject passwords in a rescue call https://review.opendev.org/c/openstack/nova/+/884647 | 00:52 |
opendevreview | David Hill proposed openstack/nova master: Inject passwords in an instance rescue call https://review.opendev.org/c/openstack/nova/+/884647 | 00:55 |
opendevreview | Kiran Pawar proposed openstack/nova-specs master: SR-IOV NIC device tracking in Placement https://review.opendev.org/c/openstack/nova-specs/+/884569 | 04:55 |
*** auniyal8 is now known as auniyal | 04:57 |
bauzas | good morning folks | 07:08 |
sahid | o/ | 08:44 |
sahid | sean-k-mooney[m]: ++ thanks a lot for your review and points regarding the neutronclient replacement, I appreciate it :-) | 08:46 |
dvo-plv | gibi, bauzas : Hello, are you around today? | 09:22 |
bauzas | dvo-plv: not really, but shoot | 09:23 |
dvo-plv | I would like to ask about a review, but if you are still on vacation, I believe it is unacceptable to ask you for this today | 09:24 |
bauzas | dvo-plv: I'll try to do a few reviews today | 09:29 |
bauzas | sahid: I'll also look at your change today | 09:29 |
dvo-plv | Great, thanks, this is a link https://review.opendev.org/c/openstack/nova/+/876075 | 09:34 |
sahid | bauzas: ack thank you ! | 11:59 |
opendevreview | Danylo Vodopianov proposed openstack/nova master: Packed virtqueue support was added. https://review.opendev.org/c/openstack/nova/+/876075 | 12:22 |
opendevreview | Danylo Vodopianov proposed openstack/nova master: Packed virtqueue support was added. https://review.opendev.org/c/openstack/nova/+/876075 | 12:24 |
sahid | sean-k-mooney[m], gibi, a quick question regarding https://review.opendev.org/c/openstack/nova-specs/+/868377 | 12:38 |
sahid | we usually want a way for operators to restrict the usage of a feature enabled from image properties, e.g. by using flavor extra_specs, no? | 12:39 |
sahid | the current state of the spec does not give a chance for operator to define a certain group of flavors allowed to use the feature | 12:40 |
sahid | (or I may have missed some points ;D) | 12:40 |
sahid | (or You already have discussed it ;D) | 12:40 |
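sahid's point above — letting operators gate a feature that instances request via image properties — might look like the following sketch. This is not the spec's actual design; all key names and semantics here are hypothetical, purely to illustrate "image requests, flavor (operator) decides":

```python
# Hypothetical sketch of an operator gate for an image-property feature.
# The property/extra_spec names are illustrative, not real nova keys.

def feature_allowed(image_props, flavor_extra_specs,
                    image_key="hw_some_feature",
                    flavor_key="hw:some_feature"):
    """Allow the feature only when the image requests it, and the
    operator-controlled flavor extra_spec does not forbid it."""
    requested = image_props.get(image_key, "false").lower() == "true"
    if not requested:
        return False
    # Operators own flavors, so an explicit flavor value wins.
    gate = flavor_extra_specs.get(flavor_key)
    if gate is not None:
        return gate.lower() == "true"
    return True
```

Under this sketch an operator could carve out the "group of flavors allowed to use the feature" simply by setting the extra_spec to false on everything else.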
LarsErikP | Hi! saw that this failed the tests ~2 weeks ago. any progress? https://review.opendev.org/c/openstack/nova/+/882921 | 12:48 |
Uggla_ | Hello, just to let you know that last week I discovered that read-only is not supported by virtiofs. So at the moment we cannot mount the scaphandre fs read-only. I have discussed that with the virt team, they already have an open ticket about it. | 12:50 |
Uggla_ | So I guess, this is not a blocker. Also, a bind mount can be used to workaround that issue for the moment. | 12:52 |
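The bind-mount workaround Uggla_ mentions is the classic two-step read-only bind: bind-mount the directory, then remount the bind read-only (a single `mount --bind -o ro` is not reliably read-only on older kernels). The paths below are illustrative, and the helper only builds the commands, since actually running them requires root:

```python
# Sketch of the read-only bind-mount workaround: build the two mount
# commands needed to expose a directory read-only on the host before
# sharing it via virtiofs. Paths are examples only; this does not run
# anything (executing the commands needs root privileges).

def ro_bind_mount_cmds(src, dst):
    return [
        ["mount", "--bind", src, dst],
        # Remount the bind itself read-only; the original stays writable.
        ["mount", "-o", "remount,ro,bind", dst],
    ]
```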
frickler | LarsErikP: zuul voted verified+1, it's just some broken 3rd party CI that failed, so that patch is in the normal process for waiting for reviews | 13:04 |
bauzas | Uggla_: hmmm, ok, we should discuss this later then | 13:34 |
opendevreview | Michel Nederlof proposed openstack/nova master: Add ability to flatten RBD disks upon clone https://review.opendev.org/c/openstack/nova/+/884595 | 13:54 |
opendevreview | Michel Nederlof proposed openstack/nova master: Add ability to flatten RBD disks upon clone https://review.opendev.org/c/openstack/nova/+/884595 | 14:01 |
gibi | bauzas: I might need to skip today's meeting or disappear in the middle | 15:22 |
bauzas | ack | 15:22 |
bauzas | reminder : nova meeting in 5 mins here | 15:55 |
bauzas | #startmeeting nova | 16:00 |
opendevmeet | Meeting started Tue May 30 16:00:08 2023 UTC and is due to finish in 60 minutes. The chair is bauzas. Information about MeetBot at http://wiki.debian.org/MeetBot. | 16:00 |
opendevmeet | Useful Commands: #action #agreed #help #info #idea #link #topic #startvote. | 16:00 |
opendevmeet | The meeting name has been set to 'nova' | 16:00 |
bauzas | #link https://wiki.openstack.org/wiki/Meetings/Nova#Agenda_for_next_meeting | 16:00 |
bauzas | welcome folks | 16:00 |
elodilles | o/ | 16:00 |
dansmith | o/ | 16:02 |
bauzas | mmm | 16:02 |
bauzas | doesn't sound like a lot of people are around | 16:02 |
bauzas | but we can make it a slow start and hopefully people will join | 16:03 |
bauzas | #topic Bugs (stuck/critical) | 16:04 |
bauzas | #info No Critical bug | 16:04 |
bauzas | #link https://bugs.launchpad.net/nova/+bugs?search=Search&field.status=New 15 new untriaged bugs (+0 since the last meeting) | 16:04 |
bauzas | auniyal: any bug you wanted to discuss ? | 16:04 |
bauzas | looks like he's not around | 16:05 |
bauzas | moving on | 16:05 |
bauzas | #info Add yourself in the team bug roster if you want to help https://etherpad.opendev.org/p/nova-bug-triage-roster | 16:05 |
bauzas | #info bug baton is being passed to bauzas | 16:05 |
bauzas | ok, next topic then | 16:05 |
bauzas | #topic Gate status | 16:05 |
bauzas | #link https://bugs.launchpad.net/nova/+bugs?field.tag=gate-failure Nova gate bugs | 16:05 |
bauzas | #link https://etherpad.opendev.org/p/nova-ci-failures | 16:05 |
bauzas | I'll be honest, gerrit was a bit off my radar since the last week | 16:06 |
bauzas | I plead guilty but I have an OpenInfra prezo to prepare | 16:06 |
bauzas | any CI failures people wanna discuss ? | 16:06 |
dansmith | gate has been not too bad I think | 16:06 |
bauzas | nice | 16:07 |
* gibi lurks | 16:07 |
bauzas | the very few gerrit emails I glanced at were indeed positive | 16:07 |
bauzas | let's assume we're living in a quiet world and move on | 16:07 |
Uggla_ | o/ | 16:07 |
bauzas | #link https://zuul.openstack.org/builds?project=openstack%2Fnova&project=openstack%2Fplacement&pipeline=periodic-weekly Nova&Placement periodic jobs status | 16:07 |
bauzas | the recent failures we've seen on the periodics seem to be resolved ^ | 16:08 |
bauzas | all of them are green | 16:08 |
bauzas | so I guess it's fixed | 16:08 |
bauzas | #info Please look at the gate failures and file a bug report with the gate-failure tag. | 16:08 |
bauzas | #info STOP DOING BLIND RECHECKS aka. 'recheck' https://docs.openstack.org/project-team-guide/testing.html#how-to-handle-test-failures | 16:08 |
bauzas | voila for this topic, anything else to address on that ? | 16:09 |
bauzas | - | 16:09 |
bauzas | #topic Release Planning | 16:09 |
bauzas | #link https://releases.openstack.org/bobcat/schedule.html | 16:09 |
bauzas | #info Nova deadlines are set in the above schedule | 16:09 |
bauzas | #info Nova spec review day next week ! | 16:10 |
bauzas | I should write in caps actually | 16:10 |
bauzas | UPLOAD YOUR SPECS, AMEND THEM, TREAT THEM, MAKE THEM READY | 16:10 |
elodilles | yepp, a reminder on that day might help ;) | 16:10 |
bauzas | elodilles: indeed | 16:10 |
bauzas | #action bauzas to notify -discuss@ about the spec review day | 16:11 |
bauzas | I've seen a couple of new specs | 16:12 |
bauzas | I'll slowly make a round in advance if I have time | 16:12 |
bauzas | next topic if nothing else | 16:12 |
bauzas | , | 16:12 |
bauzas | #topic Vancouver PTG Planning | 16:12 |
bauzas | #info please add your topics and names to the etherpad https://etherpad.opendev.org/p/vancouver-june2023-nova | 16:12 |
bauzas | so, I created an etherpad (actually I reused the one automatically created) | 16:13 |
bauzas | given it will be an exercise to guess who's around and when, I'd love it if people could add their topic ideas to this etherpad and ideally mention their presence | 16:13 |
bauzas | also, if people who are not able to join wanna bring some topics worth discussing at the PTG, that'd be nice | 16:14 |
bauzas | not sure we'll have a quorum, but I just hope we could somehow try to have some kind of synchronous discussion at the PTG | 16:15 |
bauzas | so we would capture the outcome of such discussions for following them up after the PTG | 16:15 |
bauzas | I won't lie, that'll be a challenge anyway. | 16:16 |
bauzas | worth helping, | 16:16 |
bauzas | #info The table #24 is booked for the whole two days. See the Nova community there ! | 16:16 |
bauzas | so, yeah, I should stick around this table for the two days, except when I need to present a breakout session or when I have a Forum session to moderate, or depending on my bathroom needs | 16:17 |
bauzas | (the last one being a transitive result of the number of coffee shots I'll take) | 16:18 |
* gibi drops | 16:18 | |
bauzas | anyway, the word is passed. | 16:18 |
bauzas | #topic Review priorities | 16:18 |
bauzas | #link https://review.opendev.org/q/status:open+(project:openstack/nova+OR+project:openstack/placement+OR+project:openstack/os-traits+OR+project:openstack/os-resource-classes+OR+project:openstack/os-vif+OR+project:openstack/python-novaclient+OR+project:openstack/osc-placement)+(label:Review-Priority%252B1+OR+label:Review-Priority%252B2) | 16:18 |
bauzas | gibi: \o | 16:18 |
bauzas | #info As a reminder, cores eager to review changes can +1 to indicate their interest, +2 for committing to the review | 16:18 |
bauzas | #topic Stable Branches | 16:18 |
bauzas | elodilles: wanna bring some points ? | 16:19 |
elodilles | o7 | 16:19 |
elodilles | #info stable/train is unblocked, as openstacksdk-functional-devstack job is fixed for train | 16:19 |
bauzas | huzzah | 16:19 |
elodilles | \o/ | 16:19 |
elodilles | so: | 16:19 |
elodilles | #info stable gates should be OK | 16:19 |
bauzas | that'll somehow change a bit of the next topics we'll discuss | 16:19 |
elodilles | #info stable branch status / gate failures tracking etherpad: https://etherpad.opendev.org/p/nova-stable-branch-ci | 16:19 |
elodilles | that's all from me | 16:20 |
bauzas | elodilles: thanks | 16:20 |
bauzas | and yeah, I added a bullet point | 16:20 |
bauzas | but it seems to me we don't have quorum today, so I'll reformulate | 16:20 |
bauzas | based on the ML thread https://lists.openstack.org/pipermail/openstack-discuss/2023-May/033833.html, | 16:21 |
bauzas | do people think it's reasonable to EOL stable/train ? | 16:21 |
dansmith | personally I do | 16:22 |
bauzas | I saw the discussion that happened while I was away, and I wanted to reply this morning | 16:22 |
bauzas | but I preferred to defer any reply after this meeting | 16:22 |
dansmith | the point of keeping those around was so communities could form around them if needed, | 16:22 |
dansmith | but if even we (redhat) aren't keeping it up to date, I think that's a good sign that it should go away | 16:22 |
bauzas | dansmith: well, some folks are continuing to backport a few things to train | 16:23 |
bauzas | and we have some stable cores that do reviews on that branch | 16:23 |
bauzas | but, | 16:23 |
dansmith | but not critical CVEs | 16:23 |
bauzas | as I said in the email, the two critical CVEs don't have a path forward | 16:23 |
bauzas | dansmith: this is unfortunately the problem | 16:23 |
dansmith | okay, I see there's more traffic than I expected | 16:24 |
dansmith | then I guess I don't care | 16:24 |
bauzas | it's not exactly that I'm against backporting the VMDK fix | 16:24 |
dansmith | but I do think it's confusing for people to see a community-maintained repo that is missing such large fixes | 16:24 |
bauzas | but we'll break oslo.utils semver if we do so | 16:24 |
dansmith | yeah, idk, I lean towards EOLing | 16:25 |
bauzas | for the other CVE (the brick one), I'd say I don't know how much it would be difficult to propose a backport | 16:25 |
dansmith | the brick fix is not going to be backported AFAIK | 16:25 |
dansmith | and I don't think the cinder one (which is the most important) is being backported past xena | 16:25 |
dansmith | our part of the fix for that CVE is pretty minor and we could backport it, but without the FC part of the brick fix it's not complete | 16:26 |
bauzas | that's why I personally also lean towards EOLing stable/train | 16:26 |
dansmith | only debian spoke up against EOLing, right? | 16:26 |
bauzas | correct | 16:26 |
bauzas | I can reply and explain the problem again | 16:27 |
dansmith | shrug | 16:27 |
dansmith | cinder does still have a train branch | 16:27 |
dansmith | and the last thing there was the VMDK fix | 16:27 |
dansmith | so does brick, but the last thing was last summer | 16:28 |
elodilles | i also stated that we could keep train open as long as the gate is working, though indeed it is unfortunate that CVE fixes don't get backported :/ | 16:28 |
dansmith | idk, almost seems like there's not enough people to care to even push us one way or the other :) | 16:28 |
bauzas | I would indeed be more inclined if another project like cinder made the same move at about the same time as us | 16:28 |
bauzas | dansmith: elodilles: honestly I guess I'll propose a patch for -eol and people can -1 if they care | 16:29 |
bauzas | that's probably where we'll capture most of the concerns | 16:29 |
dansmith | ack | 16:29 |
bauzas | or we could go round in circles forever | 16:29 |
elodilles | ++ | 16:29 |
bauzas | #action bauzas to propose a gerrit change for tagging stein-eol so people could vote on it | 16:30 |
elodilles | you mean train-eol :) | 16:30 |
bauzas | damn | 16:30 |
bauzas | #undo | 16:30 |
opendevmeet | Removing item from minutes: #action bauzas to propose a gerrit change for tagging stein-eol so people could vote on it | 16:30 |
elodilles | stein-eol is long gone for nova ;) | 16:30 |
bauzas | #action bauzas to propose a gerrit change for tagging train-eol so people could vote on it | 16:30 |
bauzas | elodilles: my brain fscked | 16:30 |
bauzas | anyway, I guess we're done on this | 16:31 |
bauzas | #topic Open discussion | 16:31 |
elodilles | ++ | 16:31 |
bauzas | geguileo: you had a point :) | 16:31 |
bauzas | (geguileo) Change to os-brick's connect_volume idempotency | 16:31 |
geguileo | bauzas: yes, thanks | 16:31 |
bauzas | #link https://review.opendev.org/c/openstack/os-brick/+/882841 | 16:31 |
bauzas | #link https://bugs.launchpad.net/nova/+bug/2020699 | 16:31 |
geguileo | So with the latest CVE on SCSI volumes we decided in os-brick to make some changes | 16:32 |
geguileo | Specifically os-brick would remove any existing devices before starting the connect_volume code | 16:32 |
geguileo | and then proceed with the actual attachment | 16:32 |
geguileo | this means that connect_volume will no longer be idempotent | 16:33 |
geguileo | (which is not something we ever promised it would be) | 16:33 |
geguileo | that seems to break some Nova operations | 16:33 |
geguileo | particularly the rescue/unrescue operations | 16:33 |
dansmith | is there some way we can check to see if we need to run connect_volume()? | 16:34 |
bauzas | geguileo: thanks for spotting this in advance | 16:34 |
dansmith | because otherwise it's hard to know if we're restarting after a recovery or just a service restart | 16:34 |
dansmith | whether or not we should run that | 16:34 |
bauzas | yeah that's one of the problems | 16:35 |
geguileo | dansmith: that could be solved if Nova didn't stash and then unstash the instance config | 16:35 |
geguileo | and instead rebuilt the instance config | 16:35 |
dansmith | meaning the guest xml? | 16:35 |
geguileo | I believe sean-k-mooney[m] mentioned that | 16:35 |
geguileo | dansmith: yes | 16:35 |
bauzas | geguileo: we also recreate the XML on stop/start fwiw | 16:36 |
dansmith | okay, but if we're running on a readonly-root host or recovering from a disaster, none of that would be available after a restart for us | 16:36 |
dansmith | yeah, the guest XML isn't something we can assume hangs around long-term IMHO | 16:36 |
geguileo | dansmith: could nova know if it's attaching the volume for the recovery instance on the same host that was running the instance? | 16:36 |
dansmith | most of nova is designed to assume that it's temporary | 16:36 |
bauzas | yeah and in general, we don't consider libvirt a source of truth for persisting some instance metadata | 16:36 |
dansmith | bauzas: right | 16:37 |
dansmith | geguileo: for the rescue case specifically? if the instance was already stopped and you go straight to rescue, we wouldn't really know when the last time the connect was run.. could have been weeks ago and multiple host reboots since | 16:38 |
bauzas | would the spec on dangling volumes help with the problem ? | 16:38 |
dansmith | bauzas: I don't think so | 16:38 |
bauzas | I mean, our BDM/volume reconciliation | 16:38 |
dansmith | that's not the problem, it's the underlying host devices | 16:38 |
bauzas | yeah I remember the context of the CVE :) | 16:38 |
bauzas | and I guess only brick knows whether there is a residue ? | 16:39 |
dansmith | geguileo: so let's say we reboot with one instance stopped that points to /dev/vdf, | 16:39 |
dansmith | then we reboot, and someone spawns a new instance, we call connect_volume() for that new instance, it gets /dev/vdf, | 16:39 |
dansmith | then the user starts the old was-stopped instance, we can't look at the instance xml config and know that the volume is wrong, right? | 16:40 |
dansmith | because /dev/vdf exists, but it's no longer relevant for this instance | 16:40 |
geguileo | dansmith: well, Nova could check the ID of the volume and see if it matches the information returned by os-brick | 16:40 |
dansmith | point being, there could be multiple instances with disks that point to stale host devices at any given point.. | 16:41 |
geguileo | but I don't know if we want Nova to be in the business of doing those checks | 16:41 |
dansmith | yeah, ideally not | 16:41 |
geguileo | dansmith: yes, that could happen | 16:41 |
bauzas | could nova ask brick such thing ? | 16:42 |
dansmith | geguileo: maybe everywhere we currently do connect_volume() we do a disconnect first and ignore an error? that generates a lot of churn, but maybe that's safer? | 16:42 |
dansmith | bauzas: that's what I was wondering.. if there was some validate_volume() call or something we could run to know if we should run connect | 16:42 |
bauzas | like, before doing connect_volume(), do a check_vol() | 16:42 |
bauzas | ahah | 16:42 |
geguileo | dansmith: os-brick will be doing that already with optimized code | 16:42 |
geguileo | dansmith: that's the issue I'm bringing up: the new code that cleans things up breaks the idempotency, so there's no need for nova to do that call you mention | 16:43 |
geguileo | os-brick cannot do a proper check_vol like we would want | 16:43 |
geguileo | because the volume can have changed in the background | 16:43 |
geguileo | for example, the size can change, or it can point to a different volume in the backend (like what happened in the CVE) | 16:44 |
dansmith | okay, I'm just not sure how we can know what the right thing to do is | 16:44 |
geguileo | there are a bunch of messes that could happen, and I'm not sure we would want this to depend on nova calling a method | 16:44 |
geguileo | dansmith: well, if you call connect_volume all instances in Nova should use the device it returns from that moment onwards | 16:45 |
bauzas | geguileo: I'm confused, who's responsible for knowing that all connectors are present on the host ? | 16:45 |
bauzas | I thought it was brick | 16:45 |
dansmith | geguileo: okay I thought you didn't want us to do that | 16:45 |
geguileo | what do you mean by connectors? | 16:45 |
bauzas | wrong wording, my bad. | 16:45 |
dansmith | geguileo: or are you just saying any time we call connect_volume() we need to examine the result and make sure the instance is (changed, if necessary) to use that? | 16:45 |
bauzas | geguileo: I'd say the device number | 16:46 |
geguileo | dansmith: what I don't want is nova to call connect_volume, get the value and stash it (/dev/sdc), then call connect_volume again (/dev/sda) to use it in the rescue instance, and then use the stashed value (/dev/sdc) that no longer exists | 16:46 |
geguileo | dansmith: yes, that's it | 16:47 |
dansmith | geguileo: okay | 16:47 |
geguileo | dansmith: if connect_volume is called, make sure that is used in all the instances that use the same cinder volume on that host | 16:47 |
dansmith | sounds like the solution is that we need to call connect_volume every time we're going to start an instance for any reason, and update the XML (rescue or otherwise) to use that value before we start | 16:47 |
geguileo | dansmith: that would be slower but safe | 16:48 |
geguileo | the optimised mechanism where we could tweak Nova to only do that when necessary could lead to unintended bugs | 16:49 |
bauzas | yeah, I was hoping some mechanism that would prevent the unnecessary roundtrip but if that's hard, then... | 16:49 |
dansmith | yeah I just don't know how we could do it safely otherwise, unless there's something that maintains an exclusive threadsafe set of host devices to make sure we never get them confused | 16:49 |
geguileo | dansmith: if the host doesn't reboot and connect_volume is not called again, then we could assume that the device is correct | 16:50 |
bauzas | geguileo: but we don't know this | 16:50 |
bauzas | geguileo: we only know about the service restart | 16:50 |
geguileo | bauzas: could nova query the uptime or something? | 16:50 |
dansmith | no :) | 16:50 |
geguileo | ok, then the whole connect_volume is probably the only way :'-( | 16:51 |
dansmith | let's not do anything like that for host or service uptime.. I think we just need to do the safe/slow approach | 16:51 |
bauzas | geguileo: we have other host-dependent resources like mdevs that we treat this way without requiring to know if it's a reboot | 16:51 |
geguileo | so I guess that's the solution then? | 16:53 |
dansmith | I think so.. not ideal for sure | 16:53 |
bauzas | in general, we follow some workflow which is "lookup my XML, find the specific resources attached, then query whatever needed to know whether those still exist, and if not, ask whatever else to recreate them" | 16:53 |
bauzas | (at service restart I mean) | 16:54 |
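The restart-time reconciliation workflow bauzas describes — read the resources recorded in the stored config, check which still exist on the host, recreate the missing ones — can be sketched roughly as below. All callables are injected stand-ins, not nova internals:

```python
# Minimal sketch of restart-time resource reconciliation: walk the
# resources recorded in the stored config (e.g. mdevs), probe the host
# for each, and recreate whatever is gone. `exists` and `recreate` are
# hypothetical hooks standing in for real host queries.

def reconcile_resources(recorded, exists, recreate):
    recreated = []
    for res in recorded:
        if not exists(res):
            recreate(res)
            recreated.append(res)
    return recreated
```

The point of the pattern is that it never needs to know whether the host rebooted: probing for the resource answers that implicitly.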
geguileo | with changes to cinder drivers we could have a proper check if it's valid, or have the connect-volume be faster (and idempotent when it can) | 16:54 |
geguileo | they would need to provide extra information to validate the device | 16:54 |
dansmith | let's do the slow/safe thing now, and if connect_volume() can be idempotent and faster in the future, then that's cool | 16:55 |
geguileo | dansmith: sounds like a plan | 16:55 |
geguileo | since that way the newer approach would probably not require additional nova changes | 16:55 |
geguileo | just os-brick and cinder | 16:55 |
dansmith | yep | 16:55 |
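The "slow but safe" plan agreed above — never reuse a stashed device path; re-run connect_volume() every time the instance is about to start and rewrite its disk config with whatever comes back — could be sketched like this. The connector and config objects are stand-ins shaped after the discussion, not nova's or os-brick's real classes:

```python
# Sketch of the slow/safe approach: refresh the host device path via
# connect_volume() on every instance start, instead of trusting the path
# stashed in the stored config (which may be stale after a host reboot).
# FakeConnector and the dict-based disk config are illustrative only.

def prepare_volume_for_start(connector, connection_properties, disk_config):
    device_info = connector.connect_volume(connection_properties)
    # Use whatever device os-brick resolved *this time*; the old value
    # in disk_config may now belong to a different volume.
    disk_config["source_dev"] = device_info["path"]
    return disk_config


class FakeConnector:
    """Stand-in connector that resolves to a fixed device path."""
    def __init__(self, path):
        self._path = path

    def connect_volume(self, connection_properties):
        return {"path": self._path}
```

This matches geguileo's warning about the rescue case: the rescue config must be rebuilt from the fresh connect_volume() result rather than unstashed with the old `/dev/sdc`-style path.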
geguileo | ok, then I have nothing else to say on the topic | 16:56 |
dansmith | geguileo: unrelated, but since you're here.. what's cinder's plan for stable/train? | 16:57 |
dansmith | I assume no backports of this CVE back that far right? | 16:57 |
dansmith | and are you all thinking about EOLing at some point? | 16:57 |
geguileo | dansmith: definitely no backports that far | 16:57 |
dansmith | we have concerns about keeping branches open that look maintained but don't have backports of high-profile CVEs (two for us now) | 16:57 |
geguileo | I believe at some point it was discussed to stop supporting anything before Yoga | 16:58 |
dansmith | ack | 16:58 |
bauzas | anyway, we're on time | 16:58 |
bauzas | the nova meeting is ending in 1 min | 16:58 |
dansmith | thanks geguileo | 16:59 |
geguileo | thank you all | 16:59 |
bauzas | geguileo: dansmith: I guess we've arrived at a conclusion | 16:59 |
bauzas | thanks both of you | 16:59 |
bauzas | and thanks all | 16:59 |
bauzas | if nothing else, | 16:59 |
bauzas | bye | 16:59 |
bauzas | #endmeeting | 16:59 |
opendevmeet | Meeting ended Tue May 30 16:59:59 2023 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) | 16:59 |
opendevmeet | Minutes: https://meetings.opendev.org/meetings/nova/2023/nova.2023-05-30-16.00.html | 16:59 |
opendevmeet | Minutes (text): https://meetings.opendev.org/meetings/nova/2023/nova.2023-05-30-16.00.txt | 16:59 |
opendevmeet | Log: https://meetings.opendev.org/meetings/nova/2023/nova.2023-05-30-16.00.log.html | 16:59 |
elodilles | thanks o/ | 17:00 |
stephenfin | Didn't bring it up in the meeting cos it doesn't belong there, but another round of +2s on the outstanding sqlalchemy-20 stuff would make my week and get us towards unblocking that in Bobcat. sean-k-mooney[m] has already spun through them | 17:01 |
stephenfin | All can be found here https://review.opendev.org/q/project:openstack/nova+topic:sqlalchemy-20+is:open | 17:01 |
stephenfin | Happy to answer any questions you may have on any/all of them | 17:01 |
bauzas | stephenfin: ack, no promises but I'll try to review them soon | 17:02 |
bauzas | stephenfin: bump me again next week if not | 17:03 |
stephenfin | will do, thanks bauzas | 17:03 |
opendevreview | Dan Smith proposed openstack/nova master: Populate ComputeNode.service_id https://review.opendev.org/c/openstack/nova/+/879904 | 17:06 |
opendevreview | Dan Smith proposed openstack/nova master: Add compute_id columns to instances, migrations https://review.opendev.org/c/openstack/nova/+/879499 | 17:06 |
opendevreview | Dan Smith proposed openstack/nova master: Add dest_compute_id to Migration object https://review.opendev.org/c/openstack/nova/+/879682 | 17:06 |
opendevreview | Dan Smith proposed openstack/nova master: Add compute_id to Instance object https://review.opendev.org/c/openstack/nova/+/879500 | 17:06 |
opendevreview | Dan Smith proposed openstack/nova master: Online migrate missing Instance.compute_id fields https://review.opendev.org/c/openstack/nova/+/879905 | 17:06 |
opendevreview | Dan Smith proposed openstack/nova master: Add online migration for Instance.compute_id https://review.opendev.org/c/openstack/nova/+/884752 | 17:06 |
dansmith | So, the last GPF kernel crash we had was a week ago | 17:24 |
dansmith | it was on the new cirros but it doesn't look exactly like what we saw before | 17:25 |
dansmith | but it does look related to acpi hotplug, which I assume is related to the volume attach or detach, and _is_ in one of those volume-having tests | 17:25 |
dansmith | so at this point I'd say the new cirros didn't fix it, but we'll have to wait for a few weeks of data to see if it's better or different | 17:26 |
frickler | dansmith: do you have a link to that crash handy? also we're right now publishing 0.6.2 with an updated kernel, that might help https://github.com/cirros-dev/cirros/actions/runs/5123918187 | 17:44 |
dansmith | frickler: here's the 0.6.1 one: https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_ad6/826755/16/check/nova-lvm/ad65569/job-output.txt | 17:45 |
dansmith | search for "general protection" | 17:45 |
opendevreview | Sylvain Bauza proposed openstack/nova stable/xena: Fix get_segments_id with subnets without segment_id https://review.opendev.org/c/openstack/nova/+/883725 | 17:48 |
opendevreview | Sylvain Bauza proposed openstack/nova stable/xena: Fix get_segments_id with subnets without segment_id https://review.opendev.org/c/openstack/nova/+/883725 | 17:50 |
opendevreview | Sylvain Bauza proposed openstack/nova stable/yoga: Fix get_segments_id with subnets without segment_id https://review.opendev.org/c/openstack/nova/+/883724 | 17:53 |
opendevreview | Sylvain Bauza proposed openstack/nova stable/xena: Fix get_segments_id with subnets without segment_id https://review.opendev.org/c/openstack/nova/+/883725 | 17:55 |
opendevreview | Sylvain Bauza proposed openstack/nova stable/wallaby: Fix get_segments_id with subnets without segment_id https://review.opendev.org/c/openstack/nova/+/883726 | 18:02 |
opendevreview | Merged openstack/nova master: db: Remove the legacy 'migration_version' table https://review.opendev.org/c/openstack/nova/+/872429 | 20:57 |
melwitt | jjm9ASDFGHJJKOL;;;;;'''' | 21:50 |
melwitt | geguileo: w2f | 21:51 |
melwitt | argh sorry | 21:51 |
dansmith | noice | 21:56 |
melwitt | 😆 | 21:59 |
Generated by irclog2html.py 2.17.3 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!