JayF | zigo: bobcat is not 2.0. If you have a need on a timeline please put that on the list; although it won't change bobcat, it can help decide future priorities | 00:51 |
---|---|---|
*** ozzzo1 is now known as ozzzo | 07:02 | |
zigo | JayF: I thought it was 2.0. Yes, I do need SQLA 2.0 support in Bobcat, and will write about it in the list, thanks for your care! | 07:18 |
bauzas | zigo: the situation is quite simple, most projects try to support the new DSL of 1.4 | 07:28 |
bauzas | which is considered a bridge release between the old pre-1.4 style and the new 2.0 style | 07:28 |
bauzas | but then we need some guidance, as for the moment, none of the distros ship SQLA 2.0 | 07:29 |
bauzas | it's then a classic chicken-and-egg situation, with everyone waiting for the other :D | 07:29 |
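For readers unfamiliar with the split bauzas is describing, a minimal sketch of the two query styles on the 1.4 bridge release; the model and values are toy placeholders, not nova's schema:

```python
# Toy model for illustration; not nova's actual schema.
from sqlalchemy import Column, Integer, String, create_engine, select
from sqlalchemy.orm import Session, declarative_base

Base = declarative_base()


class Instance(Base):
    __tablename__ = 'instances'
    id = Column(Integer, primary_key=True)
    uuid = Column(String(36))


engine = create_engine('sqlite://')
Base.metadata.create_all(engine)

with Session(engine) as session:
    # Legacy pre-1.4 Query API, still accepted by the 1.4 bridge release.
    legacy = session.query(Instance).filter_by(uuid='fake-uuid').all()

    # New 2.0-style: build a select() statement and execute it on the session.
    stmt = select(Instance).where(Instance.uuid == 'fake-uuid')
    modern = session.execute(stmt).scalars().all()
```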
frickler | also see https://review.opendev.org/c/openstack/requirements/+/879743 for a (possibly incomplete) list of projects that still struggle | 07:36 |
zigo | Well, for Debian, you should consider that Experimental represents the next OpenStack stable, so SQLA 2.x is the way ... | 07:38 |
zigo | frickler: Where do I see it in that patch?!? | 07:38 |
frickler | zigo: look at the failing cross jobs. I just saw https://review.opendev.org/c/openstack/requirements/+/887261/comments/6b1f2573_60f48f1c has some more detailed analysis of the issues. but this is getting a bit offtopic for the nova channel | 07:41 |
zigo | frickler: So, if I read it well, we are down to only 4 packages, the biggest concern being heat, but it's probably still working, right? | 08:04 |
frickler | heat seems pretty badly broken, see https://review.opendev.org/c/openstack/heat/+/886549 and also https://08a8510b7f90952cf4e9-7691b2eb12cb7f63209fe7ead2f9c479.ssl.cf1.rackcdn.com/887261/2/check/openstacksdk-functional-devstack/f9d3c13/controller/logs/screen-h-api.txt | 08:11 |
bauzas | stephenfin: please, just one more modification needed for sg associations in https://review.opendev.org/c/openstack/nova/+/860829 | 09:10 |
sean-k-mooney | zigo: so timeline wise caracal (2024.1) will still be based on ubuntu 22.04 and the D cycle 2024.2 should be based on 24.04 | 09:32 |
sean-k-mooney | zigo: ^ for nova I would like to merge the final patches so that we can run on 2.0, and I'm hoping the rest of openstack catches up in caracal | 09:32 |
sean-k-mooney | but I'm not expecting it to become our default until D | 09:33 |
sean-k-mooney | although I would be fine with nova running most of our jobs with 2.0 provided we still had 1.4 coverage in at least one job | 09:33 |
sean-k-mooney | right now we have the opposite situation: we run most of our jobs on 1.4 but have a single job testing 2.0, and we enable all the warnings in our unit/functional tests | 09:34 |
sean-k-mooney | once we are 2.0 compatible it would be really nice if we could make that warning an error, by the way | 09:34 |
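A small sketch of the warning escalation sean-k-mooney is describing, assuming it sits in a test suite's shared setup (the location is an assumption); SQLALCHEMY_WARN_20 and RemovedIn20Warning are SQLAlchemy 1.4 features:

```python
# Assumed to live in a shared test setup module; placement is illustrative.
import warnings

import sqlalchemy.exc

# With SQLALCHEMY_WARN_20=1 exported before SQLAlchemy is imported, 1.4 emits a
# RemovedIn20Warning for every legacy pattern that 2.0 removes.  Once a project
# is fully 2.0 compatible, those warnings can be promoted to hard errors so
# regressions fail the unit/functional test run instead of passing silently.
warnings.filterwarnings('error', category=sqlalchemy.exc.RemovedIn20Warning)
```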
sean-k-mooney | zigo: I don't know if you can take the same approach Tumbleweed did and have a second package for SQLA | 09:36 |
zigo | sean-k-mooney: I can't, that's not how Debian works. | 09:36 |
sean-k-mooney | they moved the main package to 2.0 and added a second package to track 1.4 | 09:36 |
zigo | There would be a path clash ... | 09:36 |
zigo | "import sqlalchemy" would need to be rewritten to "import sqlalchemy1" or something ... | 09:37 |
sean-k-mooney | sure on any one system | 09:37 |
sean-k-mooney | you could only have one | 09:37 |
zigo | Plus many many hacks. | 09:37 |
sean-k-mooney | but you could declare one package as conflicting with the other | 09:37 |
zigo | First, Debian policy forbids doing that. Second, I'm not packaging for chroots or venvs... packages are supposed to be co-installable. | 09:38 |
sean-k-mooney | well, there are packages on debian that do that | 09:38 |
sean-k-mooney | and yes I know they are meant to be co-installable | 09:38 |
sean-k-mooney | I was just giving you options since bobcat won't be 2.0 compatible in its entirety | 09:38 |
sean-k-mooney | although caracal should be | 09:39 |
zigo | It's for packages that are supposed to be different implementations of the same functionality, like gawk vs mawk. This is *NOT* for having multiple versions of the *SAME* package. | 09:39 |
zigo | So, your option is unfortunately not an option ... :/ | 09:40 |
zigo | sean-k-mooney: Really, my only viable option is to have patches backported from master ... | 09:40 |
sean-k-mooney | ya, where I saw it was different PipeWire backend plugins | 09:40 |
sean-k-mooney | zigo: so it was not a community goal to get this done this cycle | 09:42 |
sean-k-mooney | so they may or may not be written | 09:42 |
sean-k-mooney | that said, several of us really wanted to get to that point since it was discussed as a possible goal a few times | 09:42 |
zigo | This fact (that it was not a community goal) doesn't change anything about the situation, unfortunately ... | 09:42 |
zigo | I can live with a broken heat package in Unstable for some time, that's ok-ish, the backport will probably work. | 09:43 |
sean-k-mooney | well the question is do you need bobcat to run with 2.0 for debian or is the 2.0 change going to happen later | 09:43 |
zigo | But it has to be taken seriously and it'd be best if we have a patch soon. | 09:43 |
zigo | I'd have to discuss with the SQLAlchemy maintainer... | 09:44 |
zigo | Though he's pressing to have the package migrate to Unstable ... | 09:44 |
sean-k-mooney | what I'm really getting at is what was your original plan and timeline | 09:44 |
sean-k-mooney | ya, so I think it would be good to get it in unstable | 09:44 |
sean-k-mooney | so that we can hopefully get it in ubuntu 24.04 | 09:44 |
zigo | My original plan was to have Bobcat support 2.0, and upload it together with OpenStack when Bobcat is released. | 09:44 |
sean-k-mooney | ack | 09:45 |
zigo | I may follow that path even with a broken heat package in Unstable... | 09:45 |
sean-k-mooney | for what it's worth, I was really really hoping all of openstack would have got to 2.0 compatibility in bobcat too | 09:45 |
zigo | FYI, that's orthogonal to maintaining the unofficial bookworm-bobcat backport repository that most Debian users will consume instead of unstable/testing. | 09:45 |
opendevreview | Stephen Finucane proposed openstack/nova master: db: Replace use of backref https://review.opendev.org/c/openstack/nova/+/860829 | 09:46 |
opendevreview | Stephen Finucane proposed openstack/nova master: objects: Stop fetching from security_groups table https://review.opendev.org/c/openstack/nova/+/860850 | 09:46 |
opendevreview | Stephen Finucane proposed openstack/nova master: db: Remove unused relationships https://review.opendev.org/c/openstack/nova/+/894635 | 09:46 |
zigo | Also, the real question we should ask zzzeek about: how come he's breaking everyone ?!? | 09:46 |
zigo | The Linux kernel is *never* breaking userland, I don't see why this must happen every weekend in the Python ecosystem. :( | 09:46 |
sean-k-mooney | zigo: this is not zzzeek fault | 09:47 |
zigo | I admit I don't have the details (didn't have time to look). | 09:47 |
sean-k-mooney | zigo: in fact the 1.4 bridge release is them going well out of their way to provide a smooth upgrade path | 09:47 |
WJeffs | Hey all, is anyone running AMD genoa cpus, that I could query some stuff to them? | 09:48 |
zigo | Well, there should be no need for an upgrade path, there should be backward compat forever, that's the point I'm making. | 09:48 |
sean-k-mooney | zigo: right, which as a software maintainer you should know would be unreasonable to keep supporting | 09:48 |
sean-k-mooney | he could freeze the old APIs | 09:48 |
sean-k-mooney | but there were fundamental changes to the core | 09:49 |
sean-k-mooney | to use and adapt to new python changes | 09:49 |
zigo | How do you explain the Linux kernel does it the proper way then? | 09:49 |
sean-k-mooney | and to solve some correctness issues that needed the api to change | 09:49 |
zigo | And the kernel is *WAY* bigger and more complicated... | 09:49 |
sean-k-mooney | zigo: by letting the old api bit rot | 09:50 |
zigo | Well, the current situation shows it was the way to go... :) | 09:50 |
sean-k-mooney | im not really sure it does | 09:51 |
sean-k-mooney | we are not going to be dropping 1.4 support in openstack for some time | 09:51 |
sean-k-mooney | and I don't think zzzeek plans to discontinue it for some time either | 09:51 |
sean-k-mooney | yes, the fact that breaking changes are required is unfortunate | 09:52 |
sean-k-mooney | that does not mean we should never do them, just infrequently, when it's the best option | 09:52 |
zigo | Don't get me wrong, I do love SQLAlchemy, though in the past, every upgrade to a minor version has broken all of OpenStack. The switch from 1.4 to 2.x is very painful, as we're discussing. Well, that's *very* bad practice that zzzeek should learn to *not* reproduce ever again. | 09:53 |
sean-k-mooney | it's actually not that painful to adapt to | 09:54 |
sean-k-mooney | but it was not a priority for most of the maintainers | 09:54 |
sean-k-mooney | only a few have wanted to spend time on it and done most of the work | 09:54 |
zigo | But the old code should still be working with the new version ... | 09:54 |
sean-k-mooney | it is in 1.4, which is a long term support branch to make it work | 09:54 |
zigo | Doesn't work for me... | 09:55 |
sean-k-mooney | zigo: to be clear, some of the API had different behavior on different backends due to bugs | 09:55 |
zigo | Only a single version will live in the distro, as we discussed... | 09:55 |
sean-k-mooney | that could not be fixed without breaking changes | 09:55 |
zigo | Ok. | 09:55 |
zigo | :) | 09:55 |
zigo | This makes a lot more sense then. | 09:55 |
sean-k-mooney | it's because there is no one SQL | 09:55 |
sean-k-mooney | there is TSQL, PSQL and whatever mariadb calls it | 09:56 |
zigo | Maybe zzzeek can help fixing Heat and oslo.db? | 09:56 |
sean-k-mooney | oslo.db i think basically works | 09:56 |
sean-k-mooney | heat im not sure about | 09:56 |
zigo | Oh, btw, should I revert my packaging to oslo.db 12.3.2 ? | 09:56 |
sean-k-mooney | stephenfin: ^ | 09:57 |
zigo | (ie: leave 14.0.0 in Experimental, and just bump to 12.3.2 in unstable...) | 09:57 |
zigo | Anyways, thanks a lot for discussing the matter with me, that's very helpful. | 09:58 |
zigo | At least, I know what's going on. | 09:58 |
sean-k-mooney | https://review.opendev.org/q/topic:sqlalchemy-20+status:open | 09:58 |
sean-k-mooney | that is the topic with all the pending patches | 09:58 |
sean-k-mooney | which unfortunately still has nova and placement in it | 09:59 |
* zigo bookmarked this and will probably use unmerged patches | 10:06 | |
zigo | stephenfin: Most patches are from you, so I have to thank you as well ! :) | 10:09 |
sean-k-mooney | yep, stephenfin went out of their way to try and update projects that did not have people stepping up to do it | 10:10 |
sean-k-mooney | unfortunately that still needs the project's core team to review it | 10:10 |
bauzas | sean-k-mooney: again, please understand that we have review priorities | 10:25 |
bauzas | and sqla 2.0 isn't really a prio | 10:25 |
sean-k-mooney | bauzas: it really was meant to be | 10:26 |
sean-k-mooney | and I find the fact you didn't consider it to be one a problem | 10:26 |
sean-k-mooney | it means we are not communicating what our priorities are as a team properly | 10:27 |
stephenfin | bauzas: I struggle to see how adapting to the new major version of a critical library is anything but a priority, especially when it's being treated as such by every other project team | 10:44 |
bauzas | sorry was at lunch | 11:53 |
bauzas | I don't want to argue about why we didn't have time to review the series yet, but I'm just trying to merge it before RC1 | 11:53 |
bauzas | so, please understand my concerns and the fact that we still also need to review other changes | 11:54 |
bauzas | stephenfin: will you have time to upload a new PS to address my comment? | 12:08 |
stephenfin | I already did, but it's crashing and burning | 12:09 |
stephenfin | https://review.opendev.org/c/openstack/nova/+/860829/3 | 12:09 |
sean-k-mooney | that looks like the change you made is actually the problem | 12:11 |
sean-k-mooney | and the previous version was likely correct | 12:11 |
stephenfin | yup | 12:11 |
stephenfin | I did test with another reproducer locally but I must have missed something | 12:11 |
sean-k-mooney | well, it's actually failing in tempest too | 12:12 |
sean-k-mooney | so it's not just a test artifact | 12:12 |
sean-k-mooney | sqlalchemy.exc.InvalidRequestError: One or more mappers failed to initialize - can't proceed with initialization of other mappers. Triggering mapper: 'mapped class BlockDeviceMapping->block_device_mapping'. Original exception was: Instance.block_device_mapping and back-reference BlockDeviceMapping.instance are both of the same direction symbol('MANYTOONE'). Did you mean to | 12:13 |
sean-k-mooney | set remote_side on the many-to-one side ? | 12:13 |
bauzas | that's why I want to be on par with the existing | 12:14 |
bauzas | I'm very afraid of any performance hit we may introduce if we do this wrong | 12:14 |
sean-k-mooney | it was working before he added the extra changes | 12:14 |
bauzas | the problem here is that we don't know if the other-way relationship is needed | 12:14 |
sean-k-mooney | you mean even though it passes tempest, unit tests, functional tests and api tests | 12:15 |
sean-k-mooney | without this change | 12:15 |
sean-k-mooney | you still think we don't actually have a strong enough indication that it's not used | 12:15 |
sean-k-mooney | the reverse relation would be looking up instances by a security group | 12:17 |
sean-k-mooney | https://docs.openstack.org/api-ref/compute/#list-security-groups-by-server | 12:17 |
sean-k-mooney | the proxy api only allows you to look up the security groups of an instance | 12:17 |
sean-k-mooney | so we have no public api for the reverse correlation | 12:18 |
opendevreview | Stephen Finucane proposed openstack/nova master: db: Replace use of backref https://review.opendev.org/c/openstack/nova/+/860829 | 12:18 |
opendevreview | Stephen Finucane proposed openstack/nova master: objects: Stop fetching from security_groups table https://review.opendev.org/c/openstack/nova/+/860850 | 12:18 |
opendevreview | Stephen Finucane proposed openstack/nova master: db: Remove unused relationships https://review.opendev.org/c/openstack/nova/+/894635 | 12:18 |
bauzas | sean-k-mooney: yeah, while it was difficult to find whether we use instance.bdm, I just guess that instance.security_group is much less widely used :) | 12:21 |
sean-k-mooney | https://github.com/openstack/nova/blob/master/nova/db/main/api.py#L2992-L3228 | 12:21 |
sean-k-mooney | looking at the code there is nothing that would be trying to look up an instance by security group | 12:22 |
sean-k-mooney | the closest thing is security_group_in_use | 12:22 |
sean-k-mooney | and that just directly uses the SecurityGroupInstanceAssociation table | 12:22 |
sean-k-mooney | bauzas: anyway, looks like stephen removed foreign_keys=uuid, | 12:25 |
sean-k-mooney | so that presumably fixes it locally? | 12:25 |
stephenfin | yup | 12:25 |
sean-k-mooney | but honestly I would have been much happier merging the older version before we added this. I don't want stephen to respin to drop it | 12:25 |
stephenfin | shouldn't have been there since there's no foreign key column on that side (many-to-one) | 12:25 |
sean-k-mooney | but I don't like adding dead code | 12:25 |
bauzas | the code is already dead | 12:28 |
bauzas | stephenfin explained to me that backref implicitly creates a reverse relationship | 12:28 |
bauzas | so we already have this, silently | 12:28 |
bauzas | some "code" may use this implicit relationship for getting the values | 12:29 |
bauzas | like, instance.bdm | 12:29 |
bauzas | fortunately, we have nova objects that are the DB facade | 12:29 |
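For readers following the backref series above, a minimal sketch (toy models, not nova's real Instance/BlockDeviceMapping mapping) of the implicit reverse relationship backref creates versus the explicit back_populates form the patches move to:

```python
# Toy models for illustration only; nova's real mappings are more involved.
from sqlalchemy import Column, ForeignKey, Integer
from sqlalchemy.orm import declarative_base, relationship

Base = declarative_base()


class Instance(Base):
    __tablename__ = 'instances'
    id = Column(Integer, primary_key=True)
    # Explicit back_populates: the reverse side is spelled out here, so unused
    # relationships are visible in the model and can be dropped deliberately.
    block_device_mapping = relationship(
        'BlockDeviceMapping', back_populates='instance')


class BlockDeviceMapping(Base):
    __tablename__ = 'block_device_mapping'
    id = Column(Integer, primary_key=True)
    instance_id = Column(Integer, ForeignKey('instances.id'))
    instance = relationship('Instance', back_populates='block_device_mapping')
    # The old backref style would instead be a single line here that silently
    # creates Instance.block_device_mapping as well:
    #   instance = relationship('Instance', backref='block_device_mapping')
```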
auniyal | while writing a functional test for server having tags I am getting this error: https://paste.opendev.org/show/bM4cijMNqvGY70FuEDFG/ | 12:39 |
auniyal | I have set api_version = '2.43' | 12:39 |
auniyal | because its required in CLI | 12:40 |
auniyal | is there anything else I should look for ? | 12:40 |
auniyal | bauzas, sean-k-mooney ^ | 12:41 |
sean-k-mooney | it's tags, not tag | 12:49 |
sean-k-mooney | well, there are two ways | 12:49 |
sean-k-mooney | what exactly are you executing | 12:49 |
sean-k-mooney | you can add a tag by doing a put to /servers/{server_id}/tags/{tag} | 12:50 |
sean-k-mooney | or you do a put to /servers/{server_id}/tags with {"tags": ["tag1", "tag2"]} as the body | 12:50 |
sean-k-mooney | https://docs.openstack.org/api-ref/compute/#server-tags-servers-tags | 12:51 |
sean-k-mooney | if you're doing this as part of server create it's also tags, not tag | 12:52 |
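A sketch of the two request shapes just described, using python-requests; the endpoint URL, token and server id are placeholders, and the paths come from the compute API reference linked above:

```python
# Placeholder endpoint, token and server id; only the request shapes matter.
import requests

COMPUTE_URL = 'http://controller/compute/v2.1'
SERVER_ID = 'a-server-uuid'
HEADERS = {
    'X-Auth-Token': 'a-token',
    'OpenStack-API-Version': 'compute 2.43',
}

# Option 1: add one tag at a time with PUT /servers/{server_id}/tags/{tag}.
requests.put(f'{COMPUTE_URL}/servers/{SERVER_ID}/tags/tag1', headers=HEADERS)

# Option 2: replace the whole tag list with PUT /servers/{server_id}/tags.
requests.put(f'{COMPUTE_URL}/servers/{SERVER_ID}/tags',
             headers=HEADERS, json={'tags': ['tag1', 'tag2']})
```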
auniyal | I am adding tag while creating server - https://paste.opendev.org/show/bJBibgysdZSdO5iMAZEg/ | 13:06 |
sean-k-mooney | that's not adding a tag to a server | 13:06 |
sean-k-mooney | that's adding a tag to a network | 13:06 |
sean-k-mooney | its not the same thing | 13:06 |
auniyal | ack | 13:06 |
auniyal | so I am trying device tagging | 13:07 |
auniyal | once device is tagged, I'll verify it in metadata.json | 13:07 |
sean-k-mooney | per the api docs | 13:07 |
sean-k-mooney | A bug has caused the tag attribute to no longer be accepted starting with version 2.37. Therefore, network interfaces could only be tagged in versions 2.32 to 2.36 inclusively. Version 2.42 has restored the tag attribute. | 13:07 |
sean-k-mooney | so in the network you call it tag | 13:08 |
sean-k-mooney | and you're using 2.43 | 13:08 |
sean-k-mooney | so that should be fixed | 13:08 |
auniyal | yes, 2.43 | 13:08 |
sean-k-mooney | so https://paste.opendev.org/show/bE1IV21v1cdSzRKNWKrw/ | 13:09 |
sean-k-mooney | should be valid | 13:09 |
sean-k-mooney | just s/tags/tag/ | 13:09 |
sean-k-mooney | https://github.com/openstack/nova/commit/e80e2511cf825671a479053cc8d41463aab1caaa | 13:11 |
sean-k-mooney | that is the change that fixed it | 13:11 |
auniyal | this is full what I have right now - https://paste.opendev.org/show/bymbRPlBXG3nmY1Nm4zy/ | 13:12 |
sean-k-mooney | this is the api sample test that validates it https://github.com/openstack/nova/blob/master/nova/tests/functional/api_sample_tests/api_samples/servers/v2.42/server-create-req.json.tpl | 13:13 |
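Roughly the request body shape that the v2.42 api sample above validates; every value here is a placeholder, and the point is the singular tag attribute on the network and block device entries:

```python
# Placeholder values throughout; shaped after the v2.42 server-create sample.
server_create_body = {
    'server': {
        'name': 'device-tagging-server',
        'flavorRef': '1',
        'imageRef': 'an-image-uuid',
        'networks': [
            # Network interfaces use the singular "tag" attribute (restored in
            # microversion 2.42 after the 2.37 regression mentioned above).
            {'uuid': 'a-network-uuid', 'tag': 'nic1'},
        ],
        'block_device_mapping_v2': [
            # Block devices are tagged the same way, with a singular "tag".
            {'uuid': 'an-image-uuid', 'source_type': 'image',
             'destination_type': 'volume', 'boot_index': 0,
             'volume_size': 1, 'tag': 'disk1'},
        ],
    },
}
```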
sean-k-mooney | you do not need admin_api=true by the way | 13:14 |
auniyal | ack | 13:15 |
sean-k-mooney | so i think the problem is | 13:15 |
sean-k-mooney | self.api = self.useFixture( | 13:15 |
sean-k-mooney | nova_fixtures.OSAPIFixture(api_version='v2.1')).api | 13:15 |
sean-k-mooney | you are setting api_version but I don't think that will work as you expect | 13:16 |
bauzas | sorry auniyal can't help, today is the last day before RC1 | 13:17 |
bauzas | IMHO you need to set the microversion correctly | 13:18 |
auniyal | so yeah, I tried this nova_fixtures.OSAPIFixture(api_version='v2.42')) and got https://paste.opendev.org/show/bHOTRcnwwpPb1lP8BbH5/ | 13:18 |
bauzas | that's not how you set the microversion | 13:18 |
sean-k-mooney | ya, so when we set the version in the class like that | 13:18 |
bauzas | please look at other tests | 13:18 |
sean-k-mooney | we have explicitly coded the base class to use them | 13:18 |
bauzas | you have a class value | 13:18 |
sean-k-mooney | (test.TestCase, integrated_helpers.InstanceHelperMixin) won't make api_version = '2.42' | 13:19 |
sean-k-mooney | work at the class level | 13:19 |
sean-k-mooney | this is the base you're inheriting from | 13:20 |
sean-k-mooney | https://github.com/openstack/nova/blob/53012f1c55072c42ced267a2b1adef0a669d9f45/nova/test.py#L151 | 13:20 |
sean-k-mooney | the functionality you are trying to use is part of _IntegratedTestBase | 13:23 |
sean-k-mooney | https://github.com/openstack/nova/blob/53012f1c55072c42ced267a2b1adef0a669d9f45/nova/tests/functional/integrated_helpers.py#L1239 | 13:23 |
sean-k-mooney | it does https://github.com/openstack/nova/blob/53012f1c55072c42ced267a2b1adef0a669d9f45/nova/tests/functional/integrated_helpers.py#L1307-L1311 | 13:24 |
sean-k-mooney | if this is a regression test you should just set the microversion on the fixture | 13:24 |
auniyal | microversion as latest | 13:24 |
sean-k-mooney | if this is for general testing then you probably should be extending the existing test | 13:24 |
sean-k-mooney | https://github.com/openstack/nova/blob/53012f1c55072c42ced267a2b1adef0a669d9f45/nova/tests/functional/test_servers.py#L61C7-L61C18 | 13:25 |
sean-k-mooney | auniyal: microversion as latest means nothing if your parent classes don't read that class variable | 13:25 |
sean-k-mooney | that's the point I'm making | 13:25 |
sean-k-mooney | so first question is why are you writing this test? | 13:26 |
auniyal | this bug https://bugs.launchpad.net/nova/+bug/1836389 | 13:27 |
auniyal | I am working on reproducer of it | 13:27 |
sean-k-mooney | ok so if its a regression test | 13:27 |
sean-k-mooney | then you should just set the api version in the fixture definition | 13:27 |
auniyal | also, as I could not find many simple device tagging functional tests, I thought I'd create one for all devices | 13:28 |
sean-k-mooney | not in the same patch | 13:28 |
auniyal | yes yes | 13:28 |
sean-k-mooney | the reproducer should only test the specific bug | 13:28 |
auniyal | one patch will be a general module having all the tagging tests | 13:29 |
auniyal | and then the reproducer | 13:29 |
auniyal | but yeah, if I could write either one first the next should be easy | 13:29 |
auniyal | so update api version in OSAPIFixture ? | 13:30 |
sean-k-mooney | that, or do self.api.microversion=2.43 | 13:30 |
sean-k-mooney | I generally would do that instead | 13:30 |
auniyal | ack | 13:31 |
sean-k-mooney | like this https://github.com/openstack/nova/blob/53012f1c55072c42ced267a2b1adef0a669d9f45/nova/tests/functional/regressions/test_bug_1806064.py#L54-L57 | 13:31 |
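A rough skeleton of the pattern sean-k-mooney is pointing at for the regression test; the class name is made up and the imports assume nova's usual functional test layout:

```python
# Made-up test class; imports follow nova's functional test conventions.
from nova import test
from nova.tests import fixtures as nova_fixtures
from nova.tests.functional import integrated_helpers


class DeviceTaggingRegressionTest(test.TestCase,
                                  integrated_helpers.InstanceHelperMixin):

    def setUp(self):
        super().setUp()
        self.api = self.useFixture(
            nova_fixtures.OSAPIFixture(api_version='v2.1')).api
        # This base class does not read an api_version/microversion class
        # attribute, so set the microversion on the client itself.
        self.api.microversion = '2.43'
```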
auniyal | sean-k-mooney, still same error, "tag was unexpected" | 13:34 |
auniyal | so I removed api_version and microversion at the class level, and added self.api.microversion=2.43 in setUp | 13:36 |
auniyal | after defining self.api | 13:36 |
opendevreview | Stephen Finucane proposed openstack/nova master: Add job to test with SQLAlchemy master (2.x) https://review.opendev.org/c/openstack/nova/+/886230 | 13:40 |
stephenfin | bauzas, sean-k-mooney: I addressed merge conflicts and removed the job from the gate pipeline ^ | 13:53 |
bauzas | stephenfin: saw it, you're great | 14:00 |
bauzas | I definitely want to merge your patch because of the effort you made | 14:00 |
bauzas | stephenfin: could you make it non-voting at first ? | 14:10 |
stephenfin | I'd rather not. There's no point in non-voting jobs because no one looks at them. | 14:12 |
stephenfin | If it breaks our gate and we need to get stuff merged, we can simply disable it until we get around to fixing it. However, outside of a release week, a regression with our SQLAlchemy 2.x support should be prevented from merging. | 14:12 |
dansmith | then we can just wait until after rc1 to merge? | 14:14 |
bauzas | we agreed on the Bobcat support envelope | 14:15 |
bauzas | which is 1.4 | 14:15 |
bauzas | ideally new-style, hence https://review.opendev.org/c/openstack/nova/+/860829/ being actively reviewed | 14:16 |
bauzas | so, yeah, I'm ok with awaiting RC1 to be tagged for the zuul job | 14:16 |
bauzas | yup | 14:17 |
stephenfin | ack | 14:26 |
bauzas | stephenfin: I'll keep my +2 but you can ping me once we deliver RC1 | 14:27 |
bauzas | I'll add this zuul patch into our RC tracking etherpad | 14:27 |
bauzas | wfy ? | 14:27 |
stephenfin | sure | 14:27 |
greatgatsby_ | Hello. Our host aggregates seem to get out of sync with our provider aggregates. There seems to be a `nova-manage placement sync_aggregates` command, but we're confused how they're getting out of sync in the first place. Any suggestions of what could be the cause? This is deployed via kolla-ansible yoga | 14:28 |
greatgatsby_ | we don't have any idea even where to start looking. Searching logs didn't uncover anything | 14:28 |
dansmith | greatgatsby_: is the sync command fixing them? | 14:29 |
greatgatsby_ | it does fix them, but then we'll notice they're suddenly out of sync again. The last time we lost all the provider aggs and couldn't spin up VMs | 14:29 |
dansmith | suddenly out of sync? in what way? | 14:30 |
dansmith | they should only change if you add/remove compute nodes from aggregates | 14:30 |
greatgatsby_ | the host aggs do not match the provider aggs. They do initially, then they keep going out of sync | 14:30 |
greatgatsby_ | we're not removing nodes | 14:31 |
dansmith | there is nothing to synchronize if you're not changing the aggregates... | 14:31 |
dansmith | so I'm not sure what could be going on.. are you doing DB replication and perhaps something is undoing the sync? | 14:31 |
greatgatsby_ | so the aggs that the sync command syncs go out of sync without us (to the best of our knowledge) doing anything to cause that | 14:32 |
greatgatsby_ | DB is just setup how kolla-ansible sets it up, we're pretty hands off with the DB right now | 14:33 |
dansmith | to be clear, the synchronization is 1. make sure all nova aggs exist in placement and 2. make sure host assignments to aggs are mirrored from nova to placement | 14:33 |
dansmith | so if you're not changing aggs or mappings, there is nothing changing that needs to be sync'd | 14:33 |
dansmith | presumably nova looks right and placement looks wrong? | 14:33 |
greatgatsby_ | I agree with that, except we lose the placement aggs. They start off ok, we even have a script that compares the sync, it's all ok for a few days, then some (or all) of the placement aggs get dropped | 14:34 |
greatgatsby_ | I hope I'm using the correct terminology | 14:34 |
dansmith | the aggs are disappearing or the mappings? | 14:34 |
dansmith | (or both)? | 14:34 |
greatgatsby_ | nova is always right, the placement ones disappear | 14:34 |
tobias-urdin | interesting observation: stopping a nova-conductor should call ConductorManager's stop() and wait(), which aren't implemented, or am I blind? the nova.service part calls them. If I stop nova-conductor while nova-compute is mid report_state or doing a periodic task, it would fail with MessageTimeout because in nova-conductor we don't finish processing | 14:36 |
tobias-urdin | before stopping the service? for example we don't stop(), then process things until the queue is empty (calling the wait method)...? perhaps somebody knows before I go digging | 14:36 |
greatgatsby_ | so when we do `openstack aggregate show <agg-name>` that always shows the correct hosts. When we drill into the resource provider aggregates, those go out of sync | 14:36 |
dansmith | greatgatsby_: biab, call | 14:38 |
greatgatsby_ | appreciate the help! I'm just kind of stuck trying to figure out where to even look to figure out what's causing this | 14:39 |
dansmith | tobias-urdin: yeah nova has no safe graceful shutdown procedure that lets RPC finish what it's doing, unfortunately | 14:48 |
dansmith | greatgatsby_: so again, the aggregates themselves are there, it's the host mappings (i.e. which hosts are in which aggregate) that are wrong? | 14:48 |
dansmith | greatgatsby_: I think you need to look at the placement access log to see if there is something doing aggregate operations in between it being in sync and being out of sync.. to narrow it down to some sort of DB issue or some external actor un-syncing your aggregates | 14:49 |
dansmith | greatgatsby_: I'm assuming you're just using normal libvirt and not anything like ironic? | 14:50 |
greatgatsby_ | correct. We have dev and prod environments, and on prod, 2 hosts were removed from a placement agg but the other 20 or so are still fine. AFAIK nothing unique was done to those computes | 14:50 |
greatgatsby_ | I grepped through the placement logs for the agg uuid or anything related to aggs and didn't see anything. I'll dig some more there though | 14:51 |
greatgatsby_ | we're not using ironic | 14:52 |
tobias-urdin | dansmith: ack good to know, then it's not just me being blind :) out of curiosity does that apply to nova-compute as well, a service stop could interrupt building an instance? (not something one should do but is it possible) | 14:52 |
dansmith | greatgatsby_: okay, also grep for any operations happening on the hosts that get removed, like if their provider is getting deleted and re-added or something | 14:54 |
dansmith | greatgatsby_: changing hostnames (which sometimes happens because of bad DNS) or other things could be causing the computes to delete and re-create their providers in placement, which would have the same effect | 14:54 |
dansmith | tobias-urdin: yep :/ | 14:54 |
dansmith | tobias-urdin: for instance create, best is to disable the host and let it quiesce, but other things could still be started on that compute like a resize. we were just discussing making this better a bit ago | 14:55 |
greatgatsby_ | dansmith: thanks, I'll start looking for that | 14:56 |
greatgatsby_ | greatly appreciate the help! | 14:56 |
dansmith | greatgatsby_: fwiw, someone a few weeks ago mentioned the same sort of thing, but I never saw a RCA, so I'll be interested to hear the findings | 14:57 |
dansmith | that was more just out of sync one time and running the sync fixed it IIRC | 14:58 |
dansmith | certainly possible we've got a bug, but lots of people should be screaming if so | 14:58 |
*** blarnath is now known as d34dh0r53 | 15:02 | |
tobias-urdin | dansmith: ack, thanks for the info! always nice to get educated on specifics | 15:06 |
dansmith | sean-k-mooney: were you going to fix this post check? https://review.opendev.org/c/openstack/nova/+/893540 | 15:30 |
greatgatsby_ | dansmith: I'm going to do an hourly `openstack resource provider list` to a log file. Just to confirm, if something was deleting/re-creating the providers in placement, I would expect to see the uuid change? | 15:45 |
dansmith | greatgatsby_: not necessarily, although I think in yoga that's probably true | 15:46 |
dansmith | worth a shot | 15:47 |
greatgatsby_ | ok - if I find anything I'll be sure to comment back in here | 15:47 |
opendevreview | Dan Smith proposed openstack/nova master: Make our nova-ovs-hybrid-plug job omit cinder https://review.opendev.org/c/openstack/nova/+/893540 | 15:48 |
dansmith | greatgatsby_: you might also just query the DB for resource providers for the "created_at" field and see if any of them look too recent | 15:55 |
opendevreview | Jay Faulkner proposed openstack/nova master: [ironic] Use openstacksdk version with shard support https://review.opendev.org/c/openstack/nova/+/894833 | 15:58 |
greatgatsby_ | dansmith: excellent, thanks | 15:59 |
gmann | dansmith: test_evacuate.sh also needs to be updated https://review.opendev.org/c/openstack/nova/+/893540/5/roles/run-evacuate-hook/files/test_negative_evacuate.sh#38 | 16:18 |
dansmith | gmann: I'm not sure it does.. it seems to blow past the failed create just fine | 16:18 |
gmann | dansmith: but that will be run after negative evacuate test pass https://github.com/openstack/nova/blob/master/roles/run-evacuate-hook/tasks/main.yaml#L87 | 16:19 |
dansmith | ah okay, I thought you meant the setup_evacuate_resources.sh which seems to be fine | 16:20 |
dansmith | but yeah got it | 16:20 |
gmann | k | 16:21 |
bauzas | we are one day before RC1 and still a lot of changes in flight :/ | 16:21 |
bauzas | even if the gate is not flipping that much, that's still limbo dance | 16:21 |
opendevreview | Dan Smith proposed openstack/nova master: Make our nova-ovs-hybrid-plug job omit cinder https://review.opendev.org/c/openstack/nova/+/893540 | 16:22 |
dansmith | bauzas: things needing review or just waiting to merge? your etherpad seemed like everything was pretty well on its way when I looked | 16:22 |
sean-k-mooney | dansmith: oh, I forgot about that. not today, but I can look at it tomorrow I guess | 16:22 |
dansmith | sean-k-mooney: already on it | 16:23 |
sean-k-mooney | oh ok thanks | 16:23 |
bauzas | dansmith: everything is accepted except the prelude (which is still on my laptop atm) so just a gate update | 16:27 |
dansmith | bacack | 16:27 |
dansmith | or ack even | 16:27 |
bauzas | bah ack | 16:27 |
bauzas | :p | 16:27 |
opendevreview | Sylvain Bauza proposed openstack/nova master: Add a Bobcat prelude section https://review.opendev.org/c/openstack/nova/+/894940 | 16:34 |
bauzas | JayF: I'm lost in translation, does https://review.opendev.org/c/openstack/nova/+/894833/2 mean that ironic shards won't work until we deliver an openstacksdk release ? | 16:36 |
bauzas | if so, /me is a very sad panda | 16:36 |
gmann | I will also check etherpad if anything new change needed to merge, | 16:36 |
JayF | bauzas: johnthetubaguy and I are on a call right now trying to find alternatives | 16:39 |
JayF | bauzas: I assure you I am the saddest panda | 16:39 |
bauzas | gmann: prelude just arrived, but maybe we should remove the first bullet point items | 16:40 |
bauzas | JayF: we still have time to remove the shards from the highlights and remove the conf options, if you want MHO | 16:42 |
bauzas | or just revert the 3rd patch tbc | 16:42 |
JayF | that is the worst of all options, I think we have a chance of getting a cleaner fix | 16:42 |
bauzas | I can hold RC1 just for that | 16:42 |
bauzas | JayF: a clean fix can't be a client change | 16:42 |
JayF | we are trying to manipulate it to do what we want w/o the client change | 16:43 |
bauzas | it's a long-term solution, but this can't happen in the timeframe we hav | 16:43 |
JayF | that patch is up as the known-working example | 16:43 |
bauzas | well, the latest patchset is just a noop but yeah | 16:43 |
gmann | bauzas: noted, will wait for that | 16:44 |
bauzas | the fact is, I'm surprised we're facing this at the very end, but urgency requires me to do the post-mortem after RC1, not now | 16:44 |
bauzas | but I'd enjoy any very quick fix | 16:45 |
dansmith | bauzas: I think it makes sense to communicate the "not super tested" nature of it if we're going to leave it in the prelude, personally | 16:45 |
bauzas | trust me about it | 16:45 |
dansmith | given what we normally expect for validation of headline features like that | 16:45 |
bauzas | dansmith: yeah, and trust me, I can even remove it from the prelude | 16:45 |
bauzas | actually the highlights worry me more | 16:45 |
bauzas | since they will be used by the marketing folks for the marketing fest | 16:46 |
bauzas | fortunately, this isn't merged yet, I'm gonna put a -W until we clarify the state | 16:46 |
bauzas | JayF: I'll drop now for 2.5 hours but I'll come back | 16:54 |
bauzas | in case you have options | 16:54 |
sean-k-mooney | JayF: bauzas: I think we use ironic client for the shard stuff, no? | 17:12 |
JayF | bauzas: I am convinced a revert is the best path. I'd personally prefer even reverting the peer_list deprecation; but if we leave that deprecation in, can I get some assurance that we won't *remove* peer_list until sharding lands? | 17:12 |
sean-k-mooney | JayF: stephenfin had a series to move ironic to the sdk | 17:12 |
JayF | sean-k-mooney: we use a mix of client and sdk | 17:12 |
sean-k-mooney | ok, but we didn't land the changes to move to sdk only | 17:13 |
JayF | sean-k-mooney: which is part of why this was complex to figure out what was going on | 17:13 |
JayF | https://etherpad.opendev.org/p/nova-sharding-rca I've created this | 17:13 |
sean-k-mooney | ok | 17:13 |
JayF | putting notes from our technical research for follow-ups | 17:13 |
sean-k-mooney | so are we using ironic client for this or not? | 17:13 |
JayF | all node fetches in Ironic for node listing use sdk | 17:13 |
JayF | most node updates/writes use client | 17:13 |
bauzas | JayF: so you basically wanna revert the whole series ? | 17:13 |
JayF | but that is a very rough line | 17:13 |
sean-k-mooney | lets not do that just yet | 17:14 |
JayF | bauzas: I think that's what we have to do. We'd need an SDK change, and there's a sniff of a bug in the Ironic API handling too, which is what pushed me to revert | 17:14 |
sean-k-mooney | if the specific calls we need are functional then I would not revert | 17:14 |
bauzas | sean-k-mooney: we are on the edge of RC1 | 17:14 |
JayF | the entire basis of ironic/nova node sharding is that we have the ability to query nodes limited by shard | 17:14 |
bauzas | and we highlighted to the marketing about the ironic features | 17:14 |
JayF | on ironic side we index on shard to make that a fast query | 17:14 |
JayF | and so if SDK can't add that shard query, doing late filtering or anything like that basically guts the purpose of the change | 17:15 |
dansmith | if there's any question, we should revert for sure | 17:16 |
sean-k-mooney | ok its using the sdk | 17:16 |
sean-k-mooney | https://github.com/openstack/nova/blob/master/nova/virt/ironic/driver.py#L616 | 17:16 |
JayF | Yeah I've been on a zoom w/ johnthetubaguy troubleshooting this since like 7am local time, 3 hours | 17:16 |
JayF | and julia in here too | 17:16 |
dansmith | we should leave the deprecation in place but we probably need to adjust the text that says to use the other thing | 17:16 |
JayF | dansmith: please please lets not remove peer_list until shard lands though :/ | 17:16 |
bauzas | dansmith: jay is even proposing to revert the whole stack | 17:16 |
JayF | bauzas: my care about peer_list is this: we need a single slurp release with peer_list + sharding for migration | 17:17 |
sean-k-mooney | we are not removing peer_list until D | 17:17 |
dansmith | bauzas: right, which is probably best | 17:17 |
JayF | I have no concerns with deprecating peer_list now, but if we remove it in D | 17:17 |
JayF | then SLURP users don't get a migratory release | 17:17 |
sean-k-mooney | yes they do, the migratory release is C | 17:17 |
bauzas | okay, then I'm preparing the reverts | 17:17 |
sean-k-mooney | it will be deprecated in B and C and then removed in D; C will have shard support and peer_list support | 17:18 |
JayF | bauzas: I'm extremely sorry for this, we should've had devstack testing lined up as a prereq | 17:18 |
bauzas | JayF: this is not about being sorry, I prefer we found the bug before we deliver | 17:19 |
bauzas | so thanks for having spotted it | 17:19 |
sean-k-mooney | I would honestly prefer to just update the docs rather than revert | 17:19 |
JayF | bauzas: that's why I made darn sure I was hooking up devstack before release time | 17:19 |
dansmith | nova should have expected a devstack job showing it working before merging, so the fault lies with us all | 17:19 |
sean-k-mooney | dansmith: also true ... | 17:20 |
JayF | dansmith: I literally just said that on the zoom call I'm still on | 17:20 |
dansmith | yup, I didn't review the later patches so I didn't even know we were missing that | 17:20 |
dansmith | but this is a good demonstration of why we should (and usually do) require such things | 17:21 |
bauzas | agreed | 17:21 |
dansmith | at least we found it before release | 17:21 |
sean-k-mooney | JayF: I assume landing the required change in sdk was also rejected, right | 17:22 |
JayF | it's already landed in master | 17:22 |
JayF | but we saw concerning behavior from the /v1/nodes?shard=blah Ironic API endpoint | 17:22 |
JayF | which is why I hit the revert button | 17:22 |
sean-k-mooney | ok, but it has not been released yet, right | 17:22 |
JayF | I want it to land; but I don't want to break people and my confidence in this feature functioning is extremely low | 17:22 |
JayF | sean-k-mooney: That's been out since A | 17:22 |
JayF | sean-k-mooney: so I likely need to validate that broken behavior and fix it | 17:23 |
JayF | sean-k-mooney: or it's possible it's not broken; but I need to check that and I don't think RC is the time to be answering questions like that | 17:23 |
sean-k-mooney | ack | 17:24 |
dansmith | indeed. | 17:24 |
sean-k-mooney | it sounds like there are enough things in flight that we won't know for a while :( | 17:24 |
opendevreview | Sylvain Bauza proposed openstack/nova master: Revert "Make compute node rebalance safter" https://review.opendev.org/c/openstack/nova/+/894944 | 17:25 |
opendevreview | Sylvain Bauza proposed openstack/nova master: Revert "Add nova-manage ironic-compute-node-move" https://review.opendev.org/c/openstack/nova/+/894945 | 17:25 |
opendevreview | Sylvain Bauza proposed openstack/nova master: Revert "Limit nodes by ironic shard key" https://review.opendev.org/c/openstack/nova/+/894946 | 17:25 |
opendevreview | Sylvain Bauza proposed openstack/nova master: Revert "Deprecate ironic.peer_list" https://review.opendev.org/c/openstack/nova/+/894947 | 17:25 |
dansmith | bauzas: we need to leave the deprecation in place, are you going to do a new one to deprecate with different wording? | 17:25 |
bauzas | dansmith: I thought JayF said "unplug the whole stack since we don't want to deprecate until shards exist" | 17:26 |
dansmith | bauzas: no, we should deprecate now, but not remove until shards is in place | 17:26 |
JayF | My opinion on deprecation is null; my opinion on *removal* is "after there has been at least one SLURP with shard+peer_list in a release together" | 17:27 |
dansmith | yup | 17:27 |
sean-k-mooney | +1 | 17:27 |
bauzas | o ok | 17:27 |
dansmith | bauzas: good with ninja approvals on the reverts right? | 17:27 |
bauzas | dansmith: I think it's even documented in our docs | 17:28 |
sean-k-mooney | i mean i can also just hit them | 17:28 |
sean-k-mooney | bauzas: it is but not really for this | 17:28 |
sean-k-mooney | the fast revert policy does allow it however | 17:28 |
bauzas | well, this is a broken piece | 17:28 |
bauzas | fast reverts apply to this | 17:28 |
dansmith | bauzas: yep, just confirming.. did the first three | 17:29 |
bauzas | so | 17:29 |
bauzas | I need to look at the deprecation patch | 17:29 |
bauzas | to see the wordings | 17:29 |
dansmith | specifically we need to keep something like "Running multiple nova-compute processes that point at the same conductor group is now deprecated" | 17:30 |
dansmith | and the conf rename and the deprecation reason there | 17:30 |
dansmith | however we do that is fine | 17:30 |
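For reference, a deprecation along the lines dansmith describes would look roughly like this with oslo.config; the option name matches the discussion, but the default, version string and wording are made up:

```python
# Illustrative only: default, deprecated_since and wording are made up here.
from oslo_config import cfg

peer_list_opt = cfg.ListOpt(
    'peer_list',
    default=[],
    deprecated_for_removal=True,
    deprecated_since='2023.2',
    deprecated_reason=(
        'Running multiple nova-compute services that point at the same '
        'conductor group is deprecated; ironic shard support is the '
        'intended replacement once it is usable.'),
    help='Hostnames of other nova-compute services sharing this '
         'conductor group.')

ironic_group = cfg.OptGroup('ironic')
CONF = cfg.CONF
CONF.register_group(ironic_group)
CONF.register_opt(peer_list_opt, group=ironic_group)
```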
bauzas | I just did read the whole patch | 17:31 |
bauzas | and I think I can abandon my revert | 17:31 |
bauzas | the deprecation just says the truth | 17:31 |
bauzas | dansmith: JayF: remind me my SLURP knowledge | 17:33 |
bauzas | if we say 'we deprecate in B' | 17:33 |
bauzas | that means we can hardly remove in C, given ops may not have gotten the memo | 17:33 |
sean-k-mooney | if we deprecate in B we still have to keep it so the deprecation is released in C | 17:33 |
dansmith | we also need to deprecate in C and then we can remove in E, assuming C has the shards as usable | 17:33 |
JayF | Yeah, that matches my understanding. | 17:33 |
sean-k-mooney | so first removal is D regardless of when we deprecate | 17:33 |
bauzas | yeah | 17:34 |
dansmith | right, we could remove in D | 17:34 |
dansmith | first slurp it can be missing from is E | 17:34 |
bauzas | we have a couple of deprecations this cycle | 17:34 |
bauzas | https://docs.openstack.org/releasenotes/nova/unreleased.html#deprecation-notes | 17:34 |
sean-k-mooney | yep which is fine | 17:34 |
bauzas | so I'll ensure we don't remove anything next cycle | 17:34 |
sean-k-mooney | we will be removing things next cycle | 17:35 |
sean-k-mooney | just not those | 17:35 |
sean-k-mooney | we have deprecated things from A and older that we can remove | 17:35 |
sean-k-mooney | that is ok because A was a slurp | 17:35 |
sean-k-mooney | so if it was deprecated in A we can remove it; if it was deprecated in B we can't until D | 17:36 |
sean-k-mooney | ok im going to call it a day | 17:36 |
sean-k-mooney | o/ | 17:36 |
bauzas | JayF: dansmith: updated the cycle highlights in order to remove any occurrence of the shards https://review.opendev.org/c/openstack/releases/+/894213 | 17:37 |
bauzas | I'd appreciate if you could quickly review it | 17:38 |
dansmith | bauzas: waaaay ahead of you | 17:38 |
bauzas | :) | 17:38 |
bauzas | okay, I'm gonna call it a wrap too, but I'll poke around after dinner, since we have a couple of changes upfront in the gate | 17:39 |
opendevreview | Sylvain Bauza proposed openstack/nova master: Add a Bobcat prelude section https://review.opendev.org/c/openstack/nova/+/894940 | 17:42 |
dansmith | bauzas: I've seen this fail twice this morning already, not sure if something changed or not: https://b5ee1ed3653a458879e1-60fa9bbec8248937c3af4b3a8047f40b.ssl.cf2.rackcdn.com/893540/6/check/nova-live-migration/6e12356/testr_results.html | 17:42 |
bauzas | dansmith: yeah that's a known bug | 17:42 |
bauzas | https://bugs.launchpad.net/neutron/+bug/1940425 | 17:43 |
bauzas | it's old but its occurrences raised recently | 17:43 |
dansmith | bauzas: yeah, but I haven't seen it very much in the past few months, but several times today now | 17:43 |
dansmith | okay | 17:43 |
dansmith | critical but unfixed in neutron since 2021..eesh | 17:44 |
bauzas | I haven't pinged lajoskatona ralonsoh nd the other neutron cores | 17:44 |
bauzas | (yet) | 17:44 |
bauzas | dansmith: well, I guess they just don't check the number of critical bugs they have on a weekly basis, that's it :D | 17:45 |
gmann | dansmith: seems we need to put this whole things under cinder check https://review.opendev.org/c/openstack/nova/+/893540/6/roles/run-evacuate-hook/files/test_evacuate.sh#58 | 18:18 |
gmann | it is failing there https://zuul.opendev.org/t/openstack/build/5efc0aaf88874c45b2eb68d8e9cf4a0d/console | 18:18 |
dansmith | gmann: dammit | 18:18 |
dansmith | I was just going to fix one thing, see how it goes, fix another, etc | 18:18 |
dansmith | you keep pointing out all the problems and are making me look bad :) | 18:18 |
opendevreview | Dan Smith proposed openstack/nova master: Make our nova-ovs-hybrid-plug job omit cinder https://review.opendev.org/c/openstack/nova/+/893540 | 18:29 |
lajoskatona | dansmith, bauzas: I think we have now this bug for that issue: https://bugs.launchpad.net/neutron/+bug/2033887 and this is the patch series for that: https://review.opendev.org/q/Ifc2d37e2042fad43dd838821953defd99a5f8665 | 18:43 |
dansmith | lajoskatona: col | 18:45 |
dansmith | er cool even | 18:45 |
gmann | dansmith: you doing so much work on gate stability always makes you look better, not bad :) | 18:54 |
dansmith | heh | 18:54 |
* bauzas just ducks out | 19:01 | |
bauzas | dansmith: fwiw, also seeing more kernel crashes like https://bb94d2825af897cfcd12-f6f1806a4829a343b7540be166a34ea9.ssl.cf5.rackcdn.com/860829/4/check/nova-next/bf91602/testr_results.html | 19:03 |
bauzas | again an already known issue but more failures | 19:04 |
dansmith | bauzas: always with no space left on device? | 19:04 |
bauzas | don't think so | 19:05 |
bauzas | I have to doublecheck tho | 19:05 |
dansmith | that's during boot, so it looks more like maybe a broken image creation or if it's on ceph, maybe a backend problem | 19:05 |
dansmith | oh volume boot, so yeah something wrong with the actual volume | 19:06 |
bauzas | yeah probably a ceph RC | 19:06 |
dansmith | it never mounted and thus couldn't pivot over and the ENOSPC comes from having not mounted something writable on the target | 19:06 |
bauzas | hmpfff | 19:20 |
bauzas | Sep 13 13:49:37.042893 np0035238841 nova-compute[40581]: FileNotFoundError: [Errno 2] No such file or directory: 'multipathd' | 19:21 |
dansmith | always logged by brick AFAIK :( | 19:21 |
bauzas | I wish I would be Neo 'operator, get me all the knowledge about chopters, err. volume bindings in OpenStack" | 19:23 |
bauzas | "Tank, I need a pilot program for B-212 helicopter. Hurry.” | 19:24 |
bauzas | (found the right catchphrase) | 19:24 |
opendevreview | Dan Smith proposed openstack/nova master: Make our nova-ovs-hybrid-plug job omit cinder https://review.opendev.org/c/openstack/nova/+/893540 | 19:25 |
dansmith | gmann: looks like it worked this time, but one fix on the cleanup part | 19:25 |
dansmith | "choppers" or "copters" not "chopters" :P | 19:25 |
bauzas | so the backref thing is probably about to be punted | 19:25 |
bauzas | yeah, damn me, it's late | 19:26 |
bauzas | and when I was young, this was dubbed in French | 19:26 |
bauzas | so, the multipathd error is just a "normal error" ? | 19:27 |
bauzas | that's fun | 19:27 |
JayF | multipathd is only needed for some wacky hardware; we have a similar setup in IPA where we opportunistically load it | 19:29 |
dansmith | by wacky you mean almost anyone using FC :) | 19:31 |
JayF | heh, I wasn't exactly sure what storage tech so you know, wacky hardware ;) | 19:34 |
JayF | nothing we have in CI/gate was more the point :D | 19:34 |
opendevreview | Merged openstack/nova master: Update compute rpc alias for bobcat https://review.opendev.org/c/openstack/nova/+/893744 | 20:08 |
opendevreview | Merged openstack/nova master: Revert "Make compute node rebalance safter" https://review.opendev.org/c/openstack/nova/+/894944 | 21:32 |
Generated by irclog2html.py 2.17.3 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!