admin1 | hi all | 04:12 |
---|---|---|
admin1 | my instances are all on shutoff state after a reboot | 04:12 |
admin1 | and they are not booting up | 04:12 |
admin1 | any pointers on how to start them | 04:12 |
admin1 | in the database/gui , they are shown as active | 04:12 |
opendevreview | Ghanshyam proposed openstack/nova master: Move rule_if_system() method to base test class https://review.opendev.org/c/openstack/nova/+/824475 | 05:31 |
opendevreview | Ghanshyam proposed openstack/nova master: Server actions APIs scoped to project scope https://review.opendev.org/c/openstack/nova/+/824358 | 05:47 |
*** tbachman_ is now known as tbachman | 07:31 | |
*** hemna8 is now known as hemna | 07:38 | |
gibi | chateaulav: no worries. I should have noticed that it was at the wrong place | 07:50 |
*** bhagyashris_ is now known as bhagyashris | 07:52 | |
*** bhagyashris__ is now known as bhagyashris | 08:07 | |
*** bhagyashris_ is now known as bhagyashris | 09:20 | |
sean-k-mooney[m] | admin1 there is a config option to resume guests on host reboot | 09:22 |
sean-k-mooney[m] | but if the system has been up for a while and the periodic task has run it probably has updated the db to mark them as shutdown by now | 09:23 |
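(The option sean-k-mooney[m] is referring to can be set in nova.conf on the compute hosts; a minimal sketch — the value shown is an example, not a recommendation:)

```ini
# /etc/nova/nova.conf on the compute host
[DEFAULT]
# restart guests that were running before the host rebooted
resume_guests_state_on_host_boot = true
```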
sean-k-mooney[m] | chateaulav im +1 on your spec but i have a few comments, if you address them im +2 | 09:53 |
sean-k-mooney | bauzas: do you have time to revisit https://review.opendev.org/c/openstack/nova-specs/+/824191 | 10:52 |
sean-k-mooney | kashyap: i might just fix the nits in https://review.opendev.org/c/openstack/nova-specs/+/824053 myself assuming you are not working on them? | 10:53 |
kashyap | sean-k-mooney: Mornin | 10:53 |
sean-k-mooney | morning :) | 10:53 |
kashyap | sean-k-mooney: Sorry, lemme just do it right away | 10:54 |
sean-k-mooney | cool we can review it quickly when you push and reapprove | 10:54 |
kashyap | sean-k-mooney: Why does Sylvain suggest "Previous-approved : Yoga" in the commit message? Isn't it *for* Yoga? | 10:57 |
sean-k-mooney | they meant xena | 10:57 |
sean-k-mooney | if its just a reapproval we typically reference the last release it was approved for but that is not strictly required | 10:58 |
sean-k-mooney | https://review.opendev.org/c/openstack/nova-specs/+/799096 is the most recently approved version | 10:58 |
kashyap | Ah, nod | 10:58 |
sean-k-mooney | which had Previous-approved: Train | 10:59 |
kashyap | Yep, fixin | 11:00 |
opendevreview | Kashyap Chamarthy proposed openstack/nova-specs master: Repropose "CPU selection with guest hypervisor consideration" https://review.opendev.org/c/openstack/nova-specs/+/824053 | 11:08 |
kashyap | sean-k-mooney: --^ | 11:08 |
sean-k-mooney | bauzas: i have a pep8 issue in the spec anyway. i need to wrap the example output in a code block to make sphinx happy | 11:21 |
sean-k-mooney | bauzas: so ill just address your comments now | 11:21 |
*** bhagyashris_ is now known as bhagyashris | 11:21 | |
sean-k-mooney | ill also drop the log filter stuff for now | 11:21 |
*** bhagyashris_ is now known as bhagyashris | 11:45 | |
bauzas | sean-k-mooney: all good, ping me when you're done | 11:53 |
bauzas | sean-k-mooney: and I was on https://review.opendev.org/c/openstack/nova-specs/+/824191 :) | 11:53 |
sean-k-mooney | im more or less done just tweaking some formatting to make it render nicer | 11:54 |
opendevreview | Dmitrii Shcherbakov proposed openstack/nova master: [yoga] Add PCI VPD Capability Handling https://review.opendev.org/c/openstack/nova/+/808199 | 11:57 |
opendevreview | sean mooney proposed openstack/nova-specs master: add per process healthcheck spec https://review.opendev.org/c/openstack/nova-specs/+/821279 | 12:05 |
sean-k-mooney | bauzas: ^ | 12:05 |
sean-k-mooney | bauzas: also kashyap spec https://review.opendev.org/c/openstack/nova-specs/+/824053 | 12:06 |
*** dasm|off is now known as dasm | 12:08 | |
opendevreview | Merged openstack/nova-specs master: lightos volume driver spec https://review.opendev.org/c/openstack/nova-specs/+/824191 | 12:20 |
opendevreview | Iago Filipe proposed openstack/nova master: Remove deprecated opts from VNC conf https://review.opendev.org/c/openstack/nova/+/824478 | 12:41 |
opendevreview | Jonathan Race proposed openstack/nova-specs master: Adds Pick guest CPU architecture based on host arch in libvirt driver support https://review.opendev.org/c/openstack/nova-specs/+/824044 | 13:23 |
chateaulav | sean-k-mooney: thanks for the feedback, changes have been made. | 13:26 |
chateaulav | patchset 6 submitted | 13:26 |
sean-k-mooney | chateaulav: +2 from me gibi when you have time can you revisit | 13:36 |
gibi | on my list for today | 13:39 |
sean-k-mooney | thanks. i have pushed the latest version of the health check spec. im going to work on something else for a bit but if anyone has questions ping me or leave them inline in the spec and ill try to be responsive to them | 13:42 |
bauzas | sean-k-mooney: looking at your last rev for the hc | 13:48 |
bauzas | should be an easy peasy | 13:48 |
bauzas | chateaulav: you're my next jab | 13:49 |
opendevreview | Merged openstack/nova-specs master: Repropose "CPU selection with guest hypervisor consideration" https://review.opendev.org/c/openstack/nova-specs/+/824053 | 14:07 |
gibi | sean-k-mooney: ack, you are on my list too (just meeeetings) | 14:09 |
*** akekane_ is now known as abhishekk | 14:29 | |
opendevreview | Merged openstack/nova-specs master: Adds Pick guest CPU architecture based on host arch in libvirt driver support https://review.opendev.org/c/openstack/nova-specs/+/824044 | 14:40 |
gibi | sean-k-mooney: I have a couple of comments in the health check spec https://review.opendev.org/c/openstack/nova-specs/+/821279 feel free to ping me if clarification is needed | 14:51 |
sean-k-mooney | i will take a look now | 14:51 |
gibi | ok | 14:51 |
sean-k-mooney | regarding /health | 14:52 |
sean-k-mooney | those are referring to two different api endpoints | 14:52 |
sean-k-mooney | the first instance is saying i will expose /health for the new tcp endpoint added by this spec | 14:52 |
sean-k-mooney | the second section in rest api impact | 14:52 |
gibi | OK, I see | 14:53 |
sean-k-mooney | is saying i will not expose it in the nova-api endpoint | 14:53 |
gibi | make sense | 14:53 |
sean-k-mooney | regardign the versioning | 14:54 |
sean-k-mooney | if it would make you feel better i can just bump the version every time we add a possible check | 14:54 |
sean-k-mooney | e.g. the minor version | 14:54 |
sean-k-mooney | that would make it more consistent | 14:54 |
gibi | I'm good either way if the rules of bumping are clear | 14:54 |
sean-k-mooney | i was just going to have the keys of the checks sub dictionary unversioned | 14:54 |
gibi | that is fine | 14:55 |
gibi | then lets state that | 14:55 |
gibi | checks is unversioned the rest is semver | 14:55 |
sean-k-mooney | well thats the thing, i want the value of the checks to be versioned | 14:55 |
sean-k-mooney | just not the keys | 14:55 |
sean-k-mooney | so you can rely on the format of the checks | 14:55 |
sean-k-mooney | but the set of names is open | 14:56 |
gibi | aah | 14:56 |
sean-k-mooney | if we want to version everything in the response the only delta from what i have now is the set of check names would be versioned too | 14:56 |
sean-k-mooney | which i can do but im not sure what all of them are yet | 14:56 |
gibi | then I would vote for versioning the set of check names too | 14:57 |
sean-k-mooney | ok | 14:57 |
sean-k-mooney | ill make that update and fix the typo | 14:57 |
gibi | OK, thanks | 14:57 |
gibi | I will +2 it | 14:57 |
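(A hypothetical shape for the per-process health check response being discussed above — the check names, fields, and values here are purely illustrative and not taken from the merged spec:)

```json
{
  "version": "1.0",
  "checks": {
    "rabbitmq_connectivity": {"status": "ok", "updated_at": "2022-01-13T15:08:00Z"},
    "db_connectivity": {"status": "failing", "updated_at": "2022-01-13T15:08:00Z"}
  }
}
```

Per the exchange above, the top-level version and the format of each check's value follow semver, and the set of check names is versioned as well.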
opendevreview | sean mooney proposed openstack/nova-specs master: add per process healthcheck spec https://review.opendev.org/c/openstack/nova-specs/+/821279 | 15:08 |
sean-k-mooney | gibi: there you go ^ | 15:08 |
sean-k-mooney | if we are generally aligned on that i might address any other nits if there are some in a followup | 15:09 |
gibi | sean-k-mooney: I'm +2 | 15:10 |
gibi | thank you | 15:10 |
sean-k-mooney | no worries :) | 15:10 |
sean-k-mooney | i probably should start implementing that soon, likely next week | 15:10 |
bauzas | ++ | 15:10 |
bauzas | if nothing controversial, gibi please +2/+W it then | 15:11 |
bauzas | or I can jab this spec again | 15:11 |
sean-k-mooney | bauzas: if you can that would be great | 15:11 |
gibi | I will go back to it before I leave today and +W it if bauzas didn't yet | 15:12 |
sean-k-mooney | TheJulia: given today is spec freeze are you planning to rework https://review.opendev.org/c/openstack/nova-specs/+/815789 | 15:13 |
bauzas | sean-k-mooney: oki doki, hold your brace | 15:13 |
TheJulia | sean-k-mooney: I didn't create that and I've never seen it before | 15:14 |
sean-k-mooney | oh its the spec for https://review.opendev.org/c/openstack/nova/+/813897 no? | 15:15 |
TheJulia | sean-k-mooney: I've been treating the issue as a bug but I've been beyond slammed as of recent | 15:15 |
sean-k-mooney | ok we might want to put this into the compute team backlog downstream and take it over to land it next cycle | 15:16 |
sean-k-mooney | given we are changing the api behavior and rebalance semantics upstream we were considering it a feature | 15:16 |
TheJulia | well, that is what it calls for | 15:17 |
TheJulia | at least, a glance at the spec | 15:17 |
TheJulia | the bug can be solved by just doing the needful and not changing the API | 15:17 |
sean-k-mooney | its not changing the api syntactically | 15:18 |
sean-k-mooney | i guess the side effect is not necessarily visible | 15:18 |
sean-k-mooney | so you could argue its not changing it semantically too | 15:18 |
TheJulia | yeah, essentially invisible unless someone goes looking for it and even then it could have been a migration in a vm context for all anyone looking at the api knows | 15:18 |
sean-k-mooney | if others are ok with this as a pure bugfix and we change your initial patch to look at the disabled state not up/down then i could buy that | 15:19 |
TheJulia | I think it was already doing that, but I've literally forgotten exactly what is in the patch since I've not been able to pull it back up since November | 15:19 |
sean-k-mooney | ack i think you didnt in v1 but ill admit i similarly dont know whats in v5 | 15:20 |
TheJulia | or disabled was contextually redundant to the data on hand or something like that | 15:20 |
sean-k-mooney | well one of the concerns we had was making sure this only happened if the operator opted in | 15:21 |
TheJulia | I was also going for keeping it as light weight on the DB as possible given multi-thousand node clusters will make things cry regardless | 15:21 |
sean-k-mooney | and the way to do that was to delete the failed service or disable it | 15:21 |
TheJulia | Yeah, and I think the operator opt-in is where I never got into | 15:21 |
TheJulia | opt-in in general is kind of crap for our end users since they need to be aware and otherwise they are exposed to... not great behavior otherwise | 15:22 |
* TheJulia shrugs | 15:22 | |
sean-k-mooney | right but arbitrarily changing the instance.host effectively at any time is not something we wanted to allow either | 15:22 |
sean-k-mooney | we were more or less ok with it in a failure or maintenance event | 15:22 |
TheJulia | it is not actually that arbitrary, it would be during a hash ring rebalance which is not an arbitrary event in the first place | 15:23 |
TheJulia | which would have been triggered by a failure or maintenance event | 15:24 |
sean-k-mooney | true im a little worried about what happens in the case of a network partition but ill try and review this again and see what the current patch does | 15:24 |
TheJulia | maybe we're using the same words with different causes in mind | 15:24 |
TheJulia | computes don't directly interact with the db do they? | 15:25 |
TheJulia | in terms of open socket to it ? | 15:25 |
sean-k-mooney | no they connect via the conductor | 15:25 |
sean-k-mooney | so they make rpcs which the conductor then executes against the db | 15:25 |
TheJulia | so whatever has conductor connectivity would win and ultimately the partitioned cluster would cease to really function but still be able to carry the requests if they get them via another route | 15:25 |
TheJulia | that is as long as it doesn't hit the db | 15:26 |
sean-k-mooney | well the case i was thinking of is one of the computes cant connect to rabbit temporarily and the heartbeat expires | 15:26 |
TheJulia | at which point deadlocked compute process | 15:26 |
sean-k-mooney | it gets marked as down | 15:26 |
sean-k-mooney | you start to rebalance and it heartbeats and is marked as up | 15:26 |
TheJulia | well, the object access is over rabbit yes? | 15:27 |
sean-k-mooney | we dont want to continually rebalance in that case when its flapping | 15:27 |
TheJulia | wouldn't the process just halt on the db queries through rabbit in that case? | 15:27 |
TheJulia | since they are remoted objects | 15:27 |
sean-k-mooney | it depends on what is happening | 15:28 |
* bauzas just catching up the convo | 15:28 | |
sean-k-mooney | but the compute service, if the rabbit connection drops, would continue running normally and then if it needed to do something with the conductor it would retry | 15:28 |
sean-k-mooney | if the connection to rabbit is reestablished the service will continue to work as normal | 15:29 |
TheJulia | yes, but if we're pulling a list of things or even a reference of an object through rabbit to satisfy even the existing logic, then wouldn't that compute be stuck at that point anyway until rabbit connectivity is re-established and new requests can be picked up off the message bus? | 15:29 |
TheJulia | yes, I think we're on the same about connectivity with rabbit | 15:30 |
TheJulia | err, on the same page | 15:30 |
sean-k-mooney | so ideally if this drops and reconnects in the space of say 5 seconds | 15:31 |
sean-k-mooney | that should not trigger a rebalance | 15:31 |
sean-k-mooney | with the patch as written i think we could get unlucky in that case yes? | 15:31 |
TheJulia | no, is the alive balance a remote object save to the db every 30 seconds? | 15:31 |
sean-k-mooney | am yes i think that is the default interval | 15:32 |
TheJulia | so losing connectivity briefly wouldn't cause the machine to drop from the list returned from the db | 15:32 |
TheJulia | unless the conductor is watching like a hawk and overriding the table contents | 15:32 |
sean-k-mooney | in the general case yes. if its a little more protracted then it would | 15:33 |
TheJulia | yes, but at which point that compute is out of service or unable to be used | 15:33 |
sean-k-mooney | yep | 15:34 |
TheJulia | I know in ironic that we don't drop it from the working list until after 3 failures have occurred, but I'd need to check the code in nova | 15:34 |
sean-k-mooney | i think its similar | 15:34 |
sean-k-mooney | we dont mark it as down until multiple heart beats are missed | 15:34 |
TheJulia | which means it wouldn't be disqualified I think until 90 seconds have actually passed | 15:34 |
sean-k-mooney | i think its also 3 | 15:34 |
sean-k-mooney | ya | 15:34 |
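(The knobs involved in the liveness discussion above are nova's `report_interval` and `service_down_time`; a sketch using the values the conversation assumes — a 30 second heartbeat and roughly 90 seconds before a service is considered down. The values are illustrative, not nova's shipped defaults:)

```ini
[DEFAULT]
# how often nova-compute reports its liveness to the db, in seconds
report_interval = 30
# a service is considered down once no report has been seen for this long
service_down_time = 90
```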
sean-k-mooney | how long does rebalancing typically take | 15:35 |
sean-k-mooney | is it seconds or minutes? i assume the former | 15:35 |
TheJulia | seconds for the actual mapping at worst, the actual updates and iteration through is the painful part | 15:35 |
sean-k-mooney | without using disabled as a distributed lock | 15:35 |
sean-k-mooney | how do you prevent the compute when it comes back from racing with the rebalance | 15:36 |
sean-k-mooney | that was the other usecase for it | 15:36 |
TheJulia | I don't think we've ever measured a startup rebalance on a large site in ironic but I've generally heard of a couple minutes | 15:36 |
TheJulia | sean-k-mooney: the code, as I remember it, would check to see if the prior compute is back and then break out of the rebalance apply loop if so | 15:37 |
TheJulia | since surely in a little bit, it was going to undo some of what it just did since the hash ring is back to what it was prior to the failure | 15:37 |
sean-k-mooney | that makes sense | 15:38 |
TheJulia | a "oh, the universe just changed on us. stop what we're doing!" | 15:38 |
TheJulia | check | 15:38 |
sean-k-mooney | yep but it needs to also roll back so likely everything needs to be in one big transaction | 15:38 |
sean-k-mooney | so we either get the rebalanced state or the old state | 15:39 |
TheJulia | a giant transaction would actually harm the ability to understand what is going on because it also checks current record state, so if that is effectively hiding in a giant transaction which will take a minute or two to commit, then we're introducing ourselves to more issues | 15:43 |
sean-k-mooney | it should not take minutes to commit | 15:44 |
sean-k-mooney | we should just compute the desired end state then commit the change in one transaction | 15:44 |
TheJulia | a couple thousand specific column field updates in a heavily used table? | 15:44 |
sean-k-mooney | you just need to use a case statement to update the instance.host on all instances in one go | 15:44 |
TheJulia | eh... then again performance has changed drastically since the days I did that often manually | 15:45 |
sean-k-mooney | so i think its just one column on one db table with many rows | 15:46 |
sean-k-mooney | that should be quick | 15:46 |
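(sean-k-mooney's single-statement idea can be sketched with plain SQL; the instance uuids, host names, and mapping here are made up for illustration, and an in-memory sqlite database stands in for nova's real db:)

```python
import sqlite3

# hypothetical instances table; sqlite stands in for nova's real database
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE instances (uuid TEXT PRIMARY KEY, host TEXT)")
conn.executemany("INSERT INTO instances VALUES (?, ?)",
                 [("i-1", "cn-1"), ("i-2", "cn-1"), ("i-3", "cn-2")])

# new mapping produced by a hash ring rebalance (made-up data)
new_hosts = {"i-1": "cn-2", "i-2": "cn-3"}

# one UPDATE with a CASE expression, inside a single transaction, so the
# db ends up with either the fully rebalanced state or the old state
whens = " ".join(["WHEN ? THEN ?"] * len(new_hosts))
in_list = ",".join(["?"] * len(new_hosts))
params = [v for pair in new_hosts.items() for v in pair] + list(new_hosts)
with conn:  # commits on success, rolls back on error
    conn.execute(
        f"UPDATE instances SET host = CASE uuid {whens} END "
        f"WHERE uuid IN ({in_list})", params)

result = dict(conn.execute("SELECT uuid, host FROM instances"))
print(result)  # i-1 and i-2 moved; i-3 untouched
```

This is one column on one table, as discussed; instances not named in the mapping keep their old host.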
TheJulia | eh, last time I did something like that manually like 4 thousand rows in a 100k row table it was a couple minutes, but again, that was like a decade ago | 15:46 |
TheJulia | Anyway, I'd need to double check a few things but a transaction, at least to me, seems more harmful in that the compute node's rebalance is hidden state, so we could get a bunch of conflicting transactions pile up, all fail past the first one, and thrash to try and correct some of it as time goes on consistency wise | 15:47 |
sean-k-mooney | well if we dont have a transaction you will need to roll back the partially rebalanced state | 15:49 |
sean-k-mooney | manually right | 15:49 |
sean-k-mooney | or trigger a full rebalance again | 15:50 |
sean-k-mooney | to even out the load on the compute services | 15:50 |
opendevreview | Merged openstack/nova-specs master: add per process healthcheck spec https://review.opendev.org/c/openstack/nova-specs/+/821279 | 15:50 |
TheJulia | well... a transaction could work in try to generate the updates and then only check if things have changed before committing it if things have not changed, but I'd need to dig into the dbapi internals since to do so likely sounds like rpc functionality would be required to batch it all up properly outside of the normal transactions oslo_db drives | 15:52 |
TheJulia | since we're doing it one at a time instead of a bulk change | 15:52 |
sean-k-mooney | we would need to have a db method for this yes, not just loop over the instances, set the field and call save | 15:54 |
TheJulia | I guess I'm worried about trying to overthink it and then over engineer it, or if a model of eventual consistency is a happier place to be. I'm very much on the eventual consistency mindset since for ironic, it *really* doesn't matter which node proxies the request as long as it is online | 15:54 |
TheJulia | it all matters for rpc request routing ultimately | 15:54 |
sean-k-mooney | i wonder if for ironic it would be better to have a single shared topic queue | 15:54 |
TheJulia | what would that change/gain us in this situation? | 15:55 |
sean-k-mooney | so that all the computes shared one queue and any of them could dequeue it | 15:55 |
sean-k-mooney | it would mean the instance.host would not matter anymore | 15:55 |
TheJulia | hmm, that could entirely do away with the hash ring | 15:55 |
TheJulia | well, kind of | 15:55 |
TheJulia | hmmmm it would have to know it is ironic | 15:56 |
TheJulia | but that could actually be navigated upgrade wise too | 15:56 |
TheJulia | that doesn't fix the UX issues which are ultimately bugs in past releases though | 15:56 |
sean-k-mooney | yes but we do know the hypervisor type at least on the compute node side, not sure about the instance object | 15:56 |
TheJulia | I don't think it is on the instance object | 15:57 |
TheJulia | but again, I've not looked at its structure in a while | 15:57 |
sean-k-mooney | well the way to hack it is to decide instance.host would always be set to "ironic" or similar for ironic controlled instances | 15:58 |
sean-k-mooney | anyway the important thing is you still care about fixing this issue | 15:58 |
sean-k-mooney | but dont currently have time to work on it | 15:58 |
sean-k-mooney | so we should consider if we "redhat compute team" have capacity to help this cycle or next | 15:59 |
sean-k-mooney | we could land your patch as is but im not sure long term its the best approach but it would stop the bleeding in the short term | 16:00 |
sean-k-mooney | which is the whole perfect is the enemy of good enough argument | 16:00 |
opendevreview | Balazs Gibizer proposed openstack/nova master: DNM: trigger nova-next with new tempest test https://review.opendev.org/c/openstack/nova/+/824607 | 16:40 |
TheJulia | sean-k-mooney: yeah. :( | 16:43 |
TheJulia | sorry, been distracted looking at a blocker issue | 16:43 |
admin1 | hi all .. how to figure out what puts messages in versioned_notifications.info ..but there are no consumers | 16:53 |
admin1 | so the queue just grows and grows .. and then i have to manually delete it | 16:54 |
gibi | admin1: you can disable notifications | 16:54 |
admin1 | gibi, from where/how ? | 16:55 |
gibi | sec... | 16:55 |
gibi | admin1: https://docs.openstack.org/oslo.messaging/latest/configuration/opts.html#oslo_messaging_notifications.driver | 16:56 |
gibi | admin1: so in the nova service configuration files | 16:56 |
gibi | admin1: set [oslo_messaging_notifications]driver=noop | 16:56 |
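(gibi's suggestion spelled out as a nova.conf fragment:)

```ini
# in each nova service's configuration file
[oslo_messaging_notifications]
driver = noop
```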
sean-k-mooney | admin1: its disabled by default | 16:59 |
gibi | sean-k-mooney: I don't think so | 16:59 |
sean-k-mooney | the noop driver is the defult | 16:59 |
gibi | sean-k-mooney: I think the default is messaging_v2 | 16:59 |
sean-k-mooney | i dont think so | 16:59 |
gibi | hm | 16:59 |
gibi | interesting | 16:59 |
gibi | according to the doc you are right | 16:59 |
gibi | I always remembered it is enabled by default | 16:59 |
sean-k-mooney | devstack enables it by default | 17:00 |
sean-k-mooney | and it also defaults to unversioned if i remember | 17:00 |
gibi | yes, the unversioned is the default | 17:00 |
gibi | we never switched it to versioned | 17:00 |
sean-k-mooney | we really should | 17:00 |
sean-k-mooney | and eventually remove the deprecated unversioned notifications | 17:01 |
sean-k-mooney | they have been deprecated since mitaka | 17:01 |
gibi | sean-k-mooney: the problem is that there are openstack services using the unversioned | 17:01 |
gibi | so we would migrate them first | 17:01 |
sean-k-mooney | ya but honestly we should propose that as a cross project goal | 17:01 |
sean-k-mooney | the problem really is having people to do the work | 17:02 |
sean-k-mooney | maybe a topic of next ptg | 17:02 |
sean-k-mooney | do we know which ones rely on it | 17:02 |
gibi | I have to dig | 17:03 |
gibi | ceilometer is one I'm sure | 17:03 |
sean-k-mooney | i suspect ceilometer, cloudkitty, heat or masakari are the only ones that might be impacted | 17:03 |
sean-k-mooney | maybe watcher | 17:03 |
sean-k-mooney | as long as we are not adding any new unversioned notifications i guess it does not hurt us too much | 17:04 |
gibi | I agree, it does not hurt | 17:04 |
gibi | and we have a test in place to forbid new unversioned notifications | 17:05 |
bauzas | yuval: around ? I don't see a blueprint filed for https://review.opendev.org/c/openstack/nova-specs/+/824191 | 17:10 |
yuval | I am here | 17:10 |
bauzas | yuval: could you please create one using the link you gave, ie. https://blueprints.launchpad.net/nova/spec/nova-support-lightos-driver | 17:11 |
yuval | yes I created blueprint | 17:11 |
bauzas | ? | 17:11 |
yuval | just a second | 17:11 |
bauzas | ok, then the URL is wrong | 17:11 |
bauzas | we can change it | 17:11 |
bauzas | yuval: found it https://blueprints.launchpad.net/nova/+spec/nova-support-lightos-driver | 17:12 |
*** Uggla is now known as Uggla|afk | 17:12 | |
bauzas | oh, yeah, I see | 17:12 |
bauzas | s/spec/+spec | 17:12 |
yuval | https://blueprints.launchpad.net/nova/+spec/nova-support-lightos-driver | 17:12 |
bauzas | yuval: could you create a change fixing the URL in https://review.opendev.org/c/openstack/nova-specs/+/824191/6/specs/yoga/approved/lightos_volume_driver.rst#11 ? | 17:13 |
bauzas | yuval: I'll fast approve it | 17:13 |
yuval | https://blueprints.launchpad.net/nova/spec/nova-support-lightos-driver | 17:13 |
yuval | this is currently in the spec | 17:13 |
sean-k-mooney | yep | 17:13 |
yuval | its missing the "+" | 17:13 |
sean-k-mooney | its missing the + | 17:13 |
sean-k-mooney | yep | 17:13 |
yuval | before the spec | 17:13 |
bauzas | yup, please update the link in the spec | 17:13 |
yuval | just a sec | 17:14 |
bauzas | (18:12:46) bauzas: s/spec/+spec | 17:14 |
bauzas | in the rst file | 17:14 |
opendevreview | yuval proposed openstack/nova-specs master: lightos- fix blueprint url https://review.opendev.org/c/openstack/nova-specs/+/824633 | 17:34 |
yuval | sorry for the wait | 17:34 |
yuval | had some git issues | 17:34 |
bauzas | yuval: thanks and no worries | 17:35 |
opendevreview | Ghanshyam proposed openstack/nova master: Update centos 8 py36 functional job nodeset to centos stream 8 https://review.opendev.org/c/openstack/nova/+/824637 | 18:06 |
opendevreview | Rajat Dhasmana proposed openstack/nova master: WIP: Add support for volume backed server rebuild https://review.opendev.org/c/openstack/nova/+/820368 | 18:17 |
opendevreview | Merged openstack/nova-specs master: lightos- fix blueprint url https://review.opendev.org/c/openstack/nova-specs/+/824633 | 18:20 |
opendevreview | melanie witt proposed openstack/nova master: Remove deprecated opts from VNC conf https://review.opendev.org/c/openstack/nova/+/824478 | 19:22 |
*** lbragstad2 is now known as lbragstad | 19:39 | |
*** dasm is now known as dasm|off | 21:45 |
Generated by irclog2html.py 2.17.3 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!