fungi | yeah, seems like tempest-integrated-compute-enforce-scope-new-defaults is repeatedly timing out (during volume tests but those aren't necessarily what's taking so long) for both the glance and nova changes previously discussed | 00:19 |
fungi | though the glance one might barely finish just under the wire this time around | 00:19 |
fungi | once it (hopefully) merges, i'll reenqueue and promote the nova change again | 00:20 |
fungi | okay, glance change landed, nova change is back to the top of the gate for a third try | 00:25 |
dansmith | fungi++ | 00:48 |
dansmith | fungi: that new defaults job has been timeout-heavy lately for sure, I think sean had a thing to bump the timeout, which I suppose we might have to do.. | 00:54 |
opendevreview | Dan Smith proposed openstack/nova stable/2024.1: Fix disk_formats in ceph job tempest config https://review.opendev.org/c/openstack/nova/+/923342 | 01:55 |
opendevreview | Dan Smith proposed openstack/nova stable/2023.2: Fix disk_formats in ceph job tempest config https://review.opendev.org/c/openstack/nova/+/923343 | 01:55 |
opendevreview | Dan Smith proposed openstack/nova stable/2023.1: Fix disk_formats in ceph job tempest config https://review.opendev.org/c/openstack/nova/+/923344 | 01:55 |
dansmith | abhishek_: ^ | 01:55 |
dansmith | melwitt: if you're around by chance and could monitor those ^ we need them in ASAP (master still pending) so we can get glance CVE patches merged that depend on them | 01:56 |
abhishek_ | dansmith: ++ | 01:56 |
melwitt | dansmith: ok, I can do it | 02:12 |
dansmith | melwitt: thanks, master has been rechecked several times already, so who knows | 02:14 |
melwitt | yeah. we can at least get some more dice rolls in before EU timezone | 02:16 |
abhishek_ | is there someone in this timezone who can prioritise the patch in gate (just in case it's required)? | 02:21 |
dansmith | melwitt: ++ | 02:21 |
dansmith | abhishek_: I'm not sure, frickler will be coming online, might be the earliest | 02:21 |
abhishek_ | ack | 02:21 |
dansmith | I think he can do it, but not sure | 02:22 |
abhishek_ | will ask him if required, thank you! | 02:22 |
opendevreview | Merged openstack/nova master: Fix disk_formats in ceph job tempest config https://review.opendev.org/c/openstack/nova/+/923322 | 02:37 |
fungi | yay! | 02:37 |
dansmith | amazing | 02:37 |
dansmith | fungi: do we prioritize the stable ones too or no? | 02:38 |
fungi | might not be necessary this time of day if they already have passing check results and there's not much else in the gate, i haven't looked | 02:39 |
dansmith | ack | 02:41 |
fungi | but if it will help in getting them merged faster we still can | 02:43 |
fungi | just depends on whether there's much of a queue or jobs are continuing to be problematic | 02:43 |
dansmith | abhishek_: ^ dunno how important it is | 02:44 |
abhishek_ | dansmith: I think we can wait, since it's going to take about a day to merge the master patches | 02:44 |
dansmith | ack | 02:44 |
abhishek_ | nova merged | 02:46 |
*** bauzas_ is now known as bauzas | 03:32 | |
*** bauzas_ is now known as bauzas | 04:06 | |
frickler | well the gate is currently looking ok-ish after https://review.opendev.org/923344 caused a gate reset. let's hope the current stack merges without further failures. I'll try to keep an eye on it and look into reshuffling if further issues should happen | 05:13 |
frickler | and there goes hoping ... weird early failure in grenade-multinode that I haven't seen before, but looks 99% unrelated https://zuul.opendev.org/t/openstack/build/322833ee6e7a43b0aee2ca9803fc4f57 | 06:00 |
bauzas | good morning folks | 07:27 |
bauzas | frickler: I was on PTO yesterday, so I just got the emails | 07:28 |
bauzas | but AFAICS, we needed to update Tempest, right? | 07:28 |
bauzas | https://review.opendev.org/c/openstack/nova/+/923322 | 07:28 |
bauzas | I'm now chasing https://review.opendev.org/c/openstack/nova/+/923255/2 | 07:28 |
frickler | bauzas: the latter is failing in gate with the error I posted before, IMO it could be re-enqueued into gate immediately once zuul reports the failures, together with the stack on top of it | 07:32 |
* bauzas digs into https://zuul.opendev.org/t/openstack/build/322833ee6e7a43b0aee2ca9803fc4f57 | 07:33 | |
frickler | I found keystone (once again) to be essentially undebuggable with the amount of errors generated during normal operations | 07:34 |
bauzas | yeah https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_322/923255/2/gate/nova-grenade-multinode/322833e/controller/logs/grenade.sh_log.txt | 07:34 |
bauzas | my only concern is that I'm not sure this is a transient issue | 07:35 |
bauzas | looks more a keystone regression to me | 07:35 |
bauzas | 2024-07-03 05:20:04.386 | testtools.matchers._impl.MismatchError: '56fc53e2e0724c0284ab80060ec55421' != '39aa3070a75e446fb6aa6d48bab74c58' | 07:35 |
bauzas | unless someone keystone-savvy is able to tell me this is something like a race condition in Keystone | 07:36 |
opendevreview | Merged openstack/nova stable/2024.1: Fix disk_formats in ceph job tempest config https://review.opendev.org/c/openstack/nova/+/923342 | 08:00 |
frickler | bauzas: if it is a regression it should fail in gate next time, too, but I don't see any recent changes in keystone. so I still suggest reenqueuing the stack into gate, unless you prefer to only recheck it | 08:14 |
bauzas | I'll recheck it | 08:15 |
bauzas | and we'll see | 08:15 |
bauzas | done | 08:17 |
opendevreview | Maxime Lubin proposed openstack/nova-specs master: USB over IP https://review.opendev.org/c/openstack/nova-specs/+/923362 | 08:29 |
opendevreview | Maxime Lubin proposed openstack/nova-specs master: USB over IP https://review.opendev.org/c/openstack/nova-specs/+/920687 | 08:43 |
opendevreview | Maxime Lubin proposed openstack/nova-specs master: USB over IP https://review.opendev.org/c/openstack/nova-specs/+/920687 | 08:56 |
opendevreview | Maxime Lubin proposed openstack/nova-specs master: USB over IP https://review.opendev.org/c/openstack/nova-specs/+/920687 | 09:16 |
Uggla | @gibi, can you have a look at this patch: https://review.opendev.org/c/openstack/nova/+/868089/8 - can it be merged? | 09:20 |
frickler | next failure, this time one of the "regulars" tempest.api.compute.servers.test_server_actions.ServerActionsV293TestJSON.test_rebuild_volume_backed_server | 10:07 |
bauzas | yeepeekay | 10:08 |
sean-k-mooney | frickler: bauzas i requeued the first patch but i kind of feel like waiting for that to merge before queueing the rest | 10:43 |
sean-k-mooney | what do ye think | 10:44 |
sean-k-mooney | should we hold all other nova patches until this lands and ask people on the mailing list not to recheck | 10:44 |
sean-k-mooney | the random failures are not necessarily load related so im not really sure if it will help | 10:45 |
sean-k-mooney | but we are consuming a lot of ci resources enqueuing all 4 patches just to have them kicked out because of a failure in the first one | 10:45 |
frickler | sean-k-mooney: on one hand I understand that concern, on the other hand these patches are being used in production in a lot of systems already, so IMO it should have highest priority to at least get them into master, even if other projects have to suffer a bit from that | 11:15 |
sean-k-mooney | ok so you're in favor of queueing the rest. i was not sure if it would increase the likelihood of a failure but ok i'll recheck the remaining 3 and get them moving | 11:16 |
sean-k-mooney | actually im not sure the second one needs a recheck, it still has +1 | 11:17 |
sean-k-mooney | i have rechecked the third patch | 11:18 |
frickler | as I said earlier I would also be in favor of moving these to gate directly | 11:18 |
frickler | as the failures pretty certainly look unrelated | 11:18 |
sean-k-mooney | im not against that, i just dont want to keep pinging fungi, you or clark to do that | 11:19 |
zhhuabj | hi team, does anyone know why ci can't run in this patch - https://review.opendev.org/c/openstack/nova/+/909611 | 11:19 |
sean-k-mooney | oh ya they are not related any more. we had a fully green run last night on the last patch so the set as a whole i think is good | 11:19 |
frickler | well currently it is me who keeps pinging nova people with the offer ;) | 11:19 |
sean-k-mooney | i think we have got enough results from check in aggregate to be ok with merging them as is at this point | 11:20 |
frickler | zhhuabj: that patch needs a rebase, see the "not current" on the relation chain | 11:20 |
zhhuabj | thanks frickler, I will ping patch owner to do rebase to have a try, thanks | 11:24 |
sean-k-mooney | zhhuabj: i left some feedback on the parent patch https://review.opendev.org/c/openstack/nova/+/920374 | 11:30 |
sean-k-mooney | the unit tests you added are not in line with our code style, specifically you are adding multiple different test cases in a single test method and you are not using asserts correctly. | 11:31 |
frickler | ok, so now I need to refresh my memory on how that "enqueue into gate" actually works ;) | 11:31 |
frickler | hmm, got myself a keycloak account, but that only seems to allow dequeue + promote, need to read more docs | 11:44 |
zhhuabj | hi sean-k-mooney, I saw your comments, thanks for your review. you mentioned using assert_called_with instead of 'assert str(expected_call) == str(actual_call)', but deepcopy may change the object's memory address as well, so assert_called_with will not work | 12:07 |
fungi | also, i'm done with my morning errands and can help (re)enqueue changes into the gate, reorder the queue, delete verified -2 results, et cetera | 12:07 |
fungi | frickler: i keep forgetting about the webui, just been using the zuul-client cli | 12:08 |
zhhuabj | sean-k-mooney: this is the error log when using assert_called_with; we can see these two strings are actually equal, but assert_called_with still throws the exception, so I think the reason is that deepcopy has changed the object's memory address - https://paste.openstack.org/show/boR6m5V8Zegg5p9iRXlG/ | 12:10 |
sean-k-mooney | that is probably because the instance object has changed fields that are not being printed | 12:11 |
sean-k-mooney | you could compare the obj_to_primitive dump but just comparing the string wont compare them properly | 12:13 |
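For readers following the review thread above: a minimal sketch, assuming the objects involved are oslo.versionedobjects instances (as nova's BlockDeviceMapping is), of the comparison style sean-k-mooney suggests; the helper name is made up for illustration.

```python
# Minimal sketch (not the patch under review): compare versioned objects by
# their primitive dumps rather than by str(). str()/repr() only prints a
# subset of fields, so fields that differ but are not printed are silently
# ignored, while obj_to_primitive() serializes every field that is set.
def assert_versioned_objects_equal(testcase, expected, actual):
    testcase.assertEqual(expected.obj_to_primitive(),
                         actual.obj_to_primitive())
```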
frickler | fungi: yes, I just needed to figure out how to drop the V-2 first, but found that in the force-merge guide | 12:13 |
frickler | now I was just a little bit confused because I only touched one change, but the whole stack landed in gate at the same time | 12:14 |
frickler | but it seems all the others had already finished their rechecks | 12:14 |
fungi | zuul will auto-enqueue dependent changes if they meet its requirements | 12:15 |
zhhuabj | sean-k-mooney: do you mean this one - https://paste.openstack.org/show/bd510w2dMMZdwOOjEvS7/ | 12:25 |
sean-k-mooney | kind of, depending on which parameter had the untracked changes | 12:35 |
opendevreview | Takashi Kajinami proposed openstack/os-vif master: Remove old excludes https://review.opendev.org/c/openstack/os-vif/+/917611 | 12:36 |
opendevreview | Zhang Hua proposed openstack/nova master: BlockDeviceMapping object has no attribute copy https://review.opendev.org/c/openstack/nova/+/920374 | 12:52 |
zhhuabj | hi sean-k-mooney , I posted a new change to address your comment - https://review.opendev.org/c/openstack/nova/+/920374/3/nova/tests/unit/virt/libvirt/test_blockinfo.py | 12:54 |
opendevreview | Merged openstack/nova stable/2023.2: Fix disk_formats in ceph job tempest config https://review.opendev.org/c/openstack/nova/+/923343 | 12:56 |
opendevreview | Sahid Orentino Ferdjaoui proposed openstack/nova master: scheduler: fix _get_sharing_providers to support unlimited aggr https://review.opendev.org/c/openstack/nova/+/921665 | 12:58 |
sean-k-mooney | zhhuabj: thanks, we are currently focusing on upstream and downstream tasks related to the cve discussion yesterday but we will loop back and review when we have time | 13:04 |
opendevreview | Andrei Yachmenev proposed openstack/nova master: Fix processing uniqueness of instance host names https://review.opendev.org/c/openstack/nova/+/923395 | 13:10 |
dansmith | head nova patch is going to fail openstacksdk again | 13:23 |
dansmith | same autoallocate thing it appears :( | 13:25 |
sean-k-mooney | that has been flaky for a while it seems. do we want to make it non voting until the cve patches are landed or just recheck | 13:28 |
dansmith | idk | 13:28 |
sean-k-mooney | we could quickly update it with the flaky decorator | 13:28 |
sean-k-mooney | that will skip if it fails and run it otherwise | 13:28 |
sean-k-mooney | but the sdk team would need to be aware they need to fix it | 13:29 |
frickler | there's a fix already proposed for the sdk job https://review.opendev.org/c/openstack/openstacksdk/+/923379 | 13:33 |
frickler | I guess we should fast-approve that and fix stephenfin's comments in a followup? | 13:36 |
sean-k-mooney | stephenfin: ^ | 13:37 |
sean-k-mooney | yes i think so | 13:37 |
frickler | done and promoted to top of gate. I think we should also dequeue and re-enqueue the nova stack? | 13:41 |
frickler | oh, the promotion took care of that already | 13:42 |
*** bauzas_ is now known as bauzas | 13:46 | |
bauzas | sorry my IRC bouncer has some problems | 13:46 |
bauzas | frickler: where are we now ? | 13:47 |
frickler | bauzas: the nova stack was failing again due to an issue in the sdk job. this is to be fixed by https://review.opendev.org/c/openstack/openstacksdk/+/923379 which I've now promoted to the head of the gate pipeline | 13:48 |
bauzas | ack | 13:49 |
frickler | also https://meetings.opendev.org/irclogs/%23openstack-nova/latest.log.html is always there to help you, but updated only every 15 minutes | 13:49 |
fungi | if you're in a hurry and don't want to wait for the update, the less fancy txt version should be more continuously updated too | 13:59 |
*** whoami-rajat_ is now known as whoami-rajat | 14:00 | |
fungi | the opendevmeet logbot buffers messages somewhat but streams them more quickly to the txt file, then a cronjob periodically generates the html version from that | 14:00 |
fungi | which is the reason for the apparent delay if you're only looking at the html version of the file | 14:03 |
fungi | for example https://meetings.opendev.org/irclogs/%23openstack-nova/%23openstack-nova.2024-07-03.log (we don't have a "latest" redirector or any direct links for those, but could probably add them without too much work) | 14:05 |
*** bauzas_ is now known as bauzas | 14:16 | |
frickler | sdk patch got updated and gate restarted. I'll leave it like that unless the sdk job fails again in the top nova patch (doesn't seem to be happening 100%) | 14:18 |
ygk_12345 | hi folks | 14:19 |
ygk_12345 | can anyone tell me why we need to check resource limits when executing a qemu command in nova-compute using oslo prlimit utility ? | 14:20 |
fungi | ygk_12345: because qemu is not designed for running untrusted images, and broken or malicious images can cause it to consume unlimited resources (among other things) | 14:24 |
ygk_12345 | fungi: u mean something like ddos attacks | 14:25 |
fungi | ygk_12345: well, not ddos (the first d in ddos means "distributed") | 14:31 |
fungi | this would just be a plain old denial of service | 14:31 |
fungi | ygk_12345: though qemu tools can be coerced into doing worse things than just that, which is why we try to deeply inspect image files. see https://security.openstack.org/ossa/OSSA-2024-001.html and https://bugzilla.redhat.com/show_bug.cgi?id=2278875 for examples | 14:34 |
ygk_12345 | fungi: is this a new addition in bobcat release or is it in earlier versions as well ? | 14:38 |
fungi | ygk_12345: the openstack security advisory links to patches for releases as far back as 2023.1 (antelope), but there are also patches in the linked bug that make it possible to backport as far back as victoria (with some additional effort) | 14:39 |
fungi | or possibly even train | 14:40 |
ygk_12345 | fungi: one question though. the prlimit code, was it included in oslo_concurrency long before this vulnerability was reported? Also, do our antelope setups also have to be patched? | 14:42 |
fungi | the prlimit work was done previously, yes, in order to mitigate other known resource consumption risks in qemu. i was mentioning yesterday's openstack security advisory and that qemu bug as examples of other risks | 14:45 |
fungi | er, examples of additional risks i mean | 14:45 |
fungi | the patches linked in ossa-2024-001 are new fixes for a vulnerability we just announced yesterday. the prlimit checks have been around much longer | 14:47 |
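For context on the prlimit discussion above, a hedged sketch of how oslo.concurrency's process limits are typically wrapped around a qemu-img call; the command and limit values are illustrative, not nova's exact settings.

```python
# Hedged sketch: cap a qemu-img invocation with oslo.concurrency process
# limits so a malformed or malicious image exhausts the child process's
# budget instead of the host's. The limit values are illustrative only.
from oslo_concurrency import processutils

QEMU_IMG_LIMITS = processutils.ProcessLimits(
    cpu_time=30,                   # seconds of CPU time the child may use
    address_space=1 * 1024 ** 3)   # 1 GiB of virtual address space

out, err = processutils.execute(
    'qemu-img', 'info', '--output=json', '/tmp/image.qcow2',
    prlimit=QEMU_IMG_LIMITS)
```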
ygk_12345 | fungi: thanks for the information | 14:59 |
fungi | you're welcome | 15:02 |
*** bauzas_ is now known as bauzas | 15:27 | |
frickler | another gate failure, timeout on tempest-integrated-compute-enforce-scope-new-defaults, luckily only on the 3rd patch and #4 was out already anyway with sdk failure. will re-enqueue once zuul finishes on them https://zuul.opendev.org/t/openstack/build/4c80df617e06413e9635e0968947809f | 16:19 |
frickler | hmm, that did reset the whole set of 15 other changes behind it, though. so CI will be fully loaded for quite some more time | 16:23 |
fungi | tempest-integrated-compute-enforce-scope-new-defaults is what i was seeing repeated timeouts on yesterday as well | 16:23 |
opendevreview | Merged openstack/nova master: Reject qcow files with data-file attributes https://review.opendev.org/c/openstack/nova/+/923255 | 16:23 |
frickler | \o/ 1 done, 20 or so to go? ;) | 16:23 |
fungi | 20 or so if you're just talking about nova. across the three affected projects and four maintained branches for the full ossa it's about 50 changes | 16:24 |
fungi | maybe more now | 16:27 |
fungi | i think we linked 48 in the advisory, but that's not including the additional testing fixes that ended up going in yesterday | 16:27 |
frickler | yes, we should seriously reconsider whether doing more than a single patch per branch is really a good idea before next time | 16:37 |
frickler | unless of course we fix all gate stability and capacity issues until then :-D | 16:38 |
clarkb | its probably also worth considering reducing the bar set by the gate (both prior to major time sensitive changes showing up and generally) | 16:38 |
fungi | well, usually we try very, very, VERY hard not to do feature development for emergency security fixes and make them as concise and minimal as possible. this issue was an exception because we discovered that we basically needed to rip out any assumption that qemu-img is safe to rely on in security-sensitive codepaths | 16:39 |
clarkb | once upon a time removing jobs that were unreliable was a relatively common expectation (this is part of the reason we have the silent and experimental pipelines (though maybe silent is gone now)) | 16:40 |
frickler | well that's more questionable IMO. the high bar did at least notice a regression in cinder that went unnoticed before | 16:40 |
clarkb | yes it is a tradeoff. The ideal is that if you've got a test job because you believe it is valuable to run that it will also be maintained to keep it reliable | 16:41 |
clarkb | but unfortunately doing so is often difficult and openstack has basically never managed to do so consistently over time and constantly flip flops between sad and more happy states | 16:41 |
fungi | the original fix to just address the problem initially described in the bug report, and for which the ossa text focuses on, was going to be a single concise patch for each service | 16:42 |
fungi | it was complicated by the discovery that the method we'd been relying on already to do those things isn't considered safe by the maintainers of the tool we were using for it | 16:42 |
fungi | and when we started down that road, the qemu maintainers had made it clear to us that they had no intention of fixing it to be safe in any immediate timeframe | 16:43 |
fungi | (turns out they did later decide to release a security fix for qemu anyway, but the approach we ended up with should be a lot more future-proof) | 16:44 |
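To illustrate the direction fungi describes (not relying on qemu-img for security-sensitive checks), here is a hedged sketch of detecting a qcow2 external data file by reading the image header directly; it is not the merged format_inspector code, and the offsets and feature bit should be double-checked against the qcow2 v3 spec before relying on them.

```python
# Hedged sketch, not the merged fix: detect whether a qcow2 image declares an
# external data file (the "data-file" attribute) without shelling out to
# qemu-img. Offsets and the feature bit follow the qcow2 v3 header layout
# (incompatible_features bitmask at byte 72, bit 2 = external data file).
import struct

QCOW2_MAGIC = b'QFI\xfb'
INCOMPAT_FEATURES_OFFSET = 72
EXTERNAL_DATA_FILE_BIT = 1 << 2


def has_external_data_file(path):
    with open(path, 'rb') as f:
        header = f.read(INCOMPAT_FEATURES_OFFSET + 8)
    if len(header) < INCOMPAT_FEATURES_OFFSET + 8:
        return False                # too short to carry the feature bitmask
    if header[:4] != QCOW2_MAGIC:
        return False                # not a qcow2 image at all
    version = struct.unpack('>I', header[4:8])[0]
    if version < 3:
        return False                # the feature bitmask only exists in v3+
    features = struct.unpack(
        '>Q', header[INCOMPAT_FEATURES_OFFSET:INCOMPAT_FEATURES_OFFSET + 8])[0]
    return bool(features & EXTERNAL_DATA_FILE_BIT)
```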
opendevreview | Merged openstack/nova master: Check images with format_inspector for safety https://review.opendev.org/c/openstack/nova/+/923256 | 17:18 |
dansmith | sean-k-mooney: melwitt: gibi: bauzas: Tempest tests for glance import are going to fail in our ceph job when glance patches merge and until a tempest fix is merged, which is currently stuck waiting on tempest-core | 18:02 |
dansmith | we can either mark that job n-v (which is easy) or try to strategically skip those tests, or we can just wait for tempest when gmann shows up and sees | 18:02 |
dansmith | what's your preference? | 18:02 |
dansmith | okay gmann to the rescue | 18:04 |
dansmith | fungi: abhishek_ I think gmann is +Wing now | 18:04 |
gmann | just logged in, checking | 18:04 |
gmann | dansmith: abhishek_: this one also right ? https://review.opendev.org/c/openstack/tempest/+/923357 | 18:05 |
dansmith | fungi: so I would prioritize *those* and drop abhi's patch I think | 18:05 |
fungi | okay... | 18:05 |
dansmith | yes | 18:05 |
fungi | just a sec | 18:05 |
abhishek_ | and this one as well, https://review.opendev.org/c/openstack/tempest/+/923352 | 18:05 |
fungi | 923357,3 and 923352,1 both for tempest | 18:07 |
abhishek_ | yes | 18:07 |
dansmith | I hope fungi is blowing smoke from his finger guns after each of these like I imagine in my head | 18:08 |
fungi | if only i were that cool | 18:08 |
abhishek_ | you are | 18:08 |
dansmith | +2 | 18:09 |
gmann | abhishek_: +w another one also, one comment for releasenotes but can be done in follow up https://review.opendev.org/c/openstack/tempest/+/923357/3/tempest/config.py#648 | 18:09 |
fungi | and both have approvals now, so doing | 18:09 |
gmann | ++ | 18:09 |
dansmith | thank you gmann | 18:09 |
gmann | np! sorry for login late. my toddler took a lot of time for his breakfast today :) | 18:10 |
fungi | okay all done and dequeued glance's 923433,2 at the same time | 18:10 |
abhishek_ | gmann: ack, thank you! | 18:11 |
dansmith | fungi: do it do it! | 18:11 |
abhishek_ | fungi ++ | 18:11 |
* fungi blows smoke off the end of his finger-gun | 18:11 | |
dansmith | YASS | 18:11 |
abhishek_ | :D | 18:12 |
fungi | count on me for all your fanservice needs | 18:12 |
dansmith | heh | 18:12 |
sean-k-mooney | dansmith: looks like gmann +w'd it | 18:19 |
sean-k-mooney | so i guess we want to wait for https://review.opendev.org/c/openstack/tempest/+/923352 to merge | 18:19 |
dansmith | sean-k-mooney: yes, it just unfolded as soon as I brought it up :D | 18:19 |
dansmith | sorry for the noise | 18:19 |
fungi | and both merged about 35 minutes ago now | 20:36 |
dansmith | ++ | 20:51 |
*** haleyb is now known as haleyb|out | 21:18 | |
fungi | looking into options to less-disruptively speed merging for the remaining security patches... are there any other changes not explicitly mentioned in the ossa which i should avoid sending to the back of the line? | 21:23 |
fungi | going to start fiddling with some surgical queue reordering in about an hour once i get my dinner down | 21:25 |
melwitt | fungi: https://review.opendev.org/c/openstack/nova/+/923344 is a backport that some glance CVE patches depend on | 21:27 |
melwitt | (that I don't see mentioned in the ossa) | 21:27 |
sean-k-mooney | melwitt: that's not part of the cve fix really | 21:56 |
sean-k-mooney | melwitt: that's just a ci change | 21:56 |
melwitt | sean-k-mooney: I know, but it's my understanding that it's required for some of the glance CVE backports. so it probably shouldn't be sent to the back of the line | 21:57 |
clarkb | the entire gate just reset I think | 22:02 |
clarkb | so now is a good time to reorder cc fungi | 22:02 |
sean-k-mooney | melwitt: ah good point | 22:02 |
fungi | just got back | 22:11 |
fungi | assembling the list now | 22:12 |
fungi | mmm, this is going to be tricky because i have to pass in both the project name and the change,revision for each change i evict and re-add, so a simple loop won't suffice | 22:23 |
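A minimal sketch of the kind of loop being described here: each evict/re-add needs the project name as well as the change,patchset pair, so the list has to carry both. The zuul-client invocation and the change entries (borrowed from the tempest pair promoted earlier in the log) are illustrative assumptions, not a record of what was actually run.

```python
# Minimal sketch (illustrative, not what was actually run): re-enqueue a list
# of changes into the gate pipeline with zuul-client, passing both the project
# and the change,patchset for each item since the CLI needs both.
import subprocess

CHANGES = [
    ('openstack/tempest', '923357,3'),
    ('openstack/tempest', '923352,1'),
]

for project, change in CHANGES:
    subprocess.run(
        ['zuul-client', 'enqueue',
         '--tenant', 'openstack',
         '--pipeline', 'gate',
         '--project', project,
         '--change', change],
        check=True)
```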
dansmith | yeah, that one is required for the glance on the same branch | 22:23 |
dansmith | poor glance hasn't merged anything yet, but the head change is in gate right now | 22:25 |
fungi | should https://review.opendev.org/c/openstack/glance/+/923433 be preserved too? it switches a job to non-voting | 22:26 |
dansmith | fungi: this got kicked out of gate but should be able to right back in: https://review.opendev.org/c/openstack/nova/+/923258/3 | 22:26 |
dansmith | and the parent of it is in gate now I think | 22:26 |
fungi | i can insert it between the dequeue and reenqueue steps, sure | 22:27 |
dansmith | fungi: I think abhi still wanted that mostly to insulate them from job timeouts, but it's not strictly required and I think it'd be better if we don't drop all that testing now that tempest is merged | 22:27 |
dansmith | so you'd have my blessing to kick that out and make space, or I can if you want | 22:27 |
fungi | wfm | 22:27 |
fungi | adding it to the list of stuff i'll move to the back of the line | 22:28 |
dansmith | ++ | 22:28 |
fungi | okay, surgery complete | 22:36 |
fungi | though in retrospect, as none of the changes at the top were being kept, it ended up being the same as if i'd done a promote of the ossa changes i guess | 22:37 |
fungi | i guess i expected there to be more changes from the ossa already approved | 22:45 |
dansmith | meaning the stable ones? | 22:46 |
fungi | turned out there were only 5 in the gate (plus the one that was kicked out and re-added) | 22:46 |
dansmith | all the master ones should be approved I think | 22:46 |
dansmith | given how things have been going with the insane cross-dependencies I think we've been kinda trying to make sure they are merging in release order | 22:46 |
dansmith | but if you really want them all +Wd we can probably just do that | 22:47 |
fungi | nah, it's okay | 22:47 |
dansmith | I've heard great things about melwitt's +W finger | 22:47 |
melwitt | lol | 22:47 |
melwitt | yup, I'm really great at clicking buttons | 22:47 |
*** bauzas_ is now known as bauzas | 23:43 |