opendevreview | Ian Wienand proposed zuul/zuul-jobs master: test-registry-post: collect k8s logs https://review.opendev.org/c/zuul/zuul-jobs/+/863781 | 01:45 |
ianw | https://zuul.opendev.org/t/openstack/build/f9a34ce5f5b04f89b269620681f598bc/console failed with ansible-playbook returning '4' ... but there appears to be no error in the actual ansible run | 02:02 |
ianw | the next run passed -> https://zuul.opendev.org/t/openstack/build/8063fd9d6ee4453d8f7a5dabc943528a | 02:48 |
*** yadnesh|away is now known as yadnesh | 04:20 | |
Clark[m] | ianw: I think rc 4 means network issues to the inventory nodes. May have just been a blip? | 04:31 |
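A hedged sketch of the behavior Clark[m] describes (assumes `ansible-playbook` is installed): exit code 4 from ansible-playbook conventionally indicates one or more hosts were unreachable, which is consistent with a transient network blip to the inventory nodes even when no task-level error is logged.

```shell
# Write a trivial playbook and run it against a host that cannot be
# resolved, which exercises the "unreachable" exit path.
cat > /tmp/ping.yml <<'EOF'
- hosts: all
  gather_facts: false
  tasks:
    - name: simple connectivity check
      ping:
EOF
ansible-playbook -i 'no-such-host.invalid,' /tmp/ping.yml
echo "exit code: $?"
```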
opendevreview | Ian Wienand proposed zuul/zuul-jobs master: enable-kuburnetes: 22.04 updates https://review.opendev.org/c/zuul/zuul-jobs/+/863810 | 05:22 |
ianw | Clark[m]: yeah, i hope so. Today I slowly merged everything up to the "refer to things via prod_bastion" change and afaics everything is still happy | 05:23 |
*** marios is now known as marios|ruck | 05:59 | |
*** yadnesh is now known as yadnesh|afk | 07:30 | |
*** yadnesh|afk is now known as yadnesh | 08:02 | |
*** jpena|off is now known as jpena | 08:42 | |
*** marios|ruck is now known as marios|ruck|call | 09:00 | |
*** marios|ruck|call is now known as marios|ruck | 09:21 | |
*** soniya29 is now known as soniya29|afk | 09:57 | |
*** dviroel|out is now known as dviroel | 10:16 | |
*** diablo_rojo_phone is now known as Guest702 | 10:39 | |
*** pojadhav- is now known as pojadhav | 11:04 | |
*** yadnesh is now known as yadnesh|afk | 12:32 | |
dtantsur | hey, do I get it right that I cannot use required-projects to clone from github? | 12:42 |
fungi | dtantsur: you can, zuul just needs to know about the repo first | 12:46 |
dtantsur | ah, great, do you have any references? I'm looking at the user guide, but probably a wrong place.. | 12:49 |
fungi | dtantsur: you'll find a bunch of them we've added in its config here: https://opendev.org/openstack/project-config/src/branch/master/zuul/main.yaml#L1415-L1464 | 12:49 |
fungi | if there are some you need, propose the addition with a new change | 12:49 |
dtantsur | fungi: oh, so I cannot do it without modifying the openstack-wide configuration? | 12:50 |
fungi | dtantsur: yes, any repository (even those we host in our gerrit) needs to be listed in zuul's configuration if we want it included | 12:51 |
dtantsur | ack, thank you! Then I'll prototype with a manual git clone first. | 12:51 |
fungi | the main benefits you'll get from inclusion in the config is caching on the executors (so fewer jobs failing due to a network error cloning from github), and the ability to use depends-on to github pull requests | 12:52 |
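For illustration, a hedged sketch of the kind of tenant-config entry fungi is referring to; the repo name here is hypothetical, and the real entries live in the `zuul/main.yaml` linked above:

```yaml
# Sketch of a Zuul tenant config listing a GitHub repo so jobs can name
# it in required-projects (example-org/example-library is a placeholder).
- tenant:
    name: openstack
    source:
      github:
        untrusted-projects:
          - example-org/example-library
```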
dtantsur | yep. I just don't want to change the global configuration until we prove that the thing even works. | 12:53 |
*** yadnesh|afk is now known as yadnesh|away | 13:09 | |
*** dasm|off is now known as dasm | 14:11 | |
opendevreview | Dmitriy Rabotyagov proposed openstack/project-config master: Add another role for Zookeeper installation https://review.opendev.org/c/openstack/project-config/+/863158 | 14:22 |
opendevreview | Dmitriy Rabotyagov proposed openstack/project-config master: Add os_skyline repo to CI https://review.opendev.org/c/openstack/project-config/+/863167 | 14:23 |
opendevreview | Dmitriy Rabotyagov proposed openstack/project-config master: Add repository for Skyline installation by OpenStack-Ansible https://review.opendev.org/c/openstack/project-config/+/863165 | 14:23 |
opendevreview | Dmitriy Rabotyagov proposed openstack/project-config master: Add os_skyline repo to CI https://review.opendev.org/c/openstack/project-config/+/863167 | 14:23 |
*** dviroel is now known as dviroel|lunch | 15:08 | |
frickler | clarkb: since there seems to be no more progress in the storyboard mailing thread, I've added the topic to the meeting agenda, maybe johnsom gtema melwitt are interested in joining (tomorrow 19UTC). or maybe have a dedicated talk that is more EU friendly (for me and gtema and possibly sean)? | 15:33 |
gtema | yes, a better slot is appreciated | 15:34 |
johnsom | I am west coast US, so something compatible would be appreciated. | 16:05 |
*** sfinucan is now known as stephenfin | 16:10 | |
JayF | I felt a little weird participating in that thread given Ironic already made the decision in PTG to go from storyboard->LP | 16:12 |
JayF | so I think we're sorta past the decision phase for that project | 16:12 |
JayF | I wonder if others were in the same boat; there's not much of a call to action on that thread if you've already decided to move (back) to LP | 16:12 |
frickler | iiuc clarkb was hoping there would be more interest in keeping storyboard alive, which I don't see happening. for sdk I'd think we're mostly at the same point | 16:14 |
frickler | so for me there are two things to discuss: a) is anyone interested in building tooling to help with moving from sb to lp? otherwise only manual moving of issues would be possible | 16:15 |
clarkb | yes, the call to action was "tell us if you would like to see this better supported" so that we can find a way to do that somehow. But no one has done that | 16:15 |
frickler | b) (more an opendev topic) how long do we want to keep running sb if users are fleeing from it | 16:15 |
fungi | also please avoid dramatic terms like "fleeing" as that's the sort of attitude which has led me to completely avoid reading any of that ml thread so far | 16:17 |
frickler | well that matches my personal feelings about it, sorry if that sounds offensive | 16:17 |
fungi | not offensive, just overtly negative, and i'd rather spend my time dealing with more positive community interactions | 16:19 |
fungi | i only have so much bandwidth for negativity in my day, and try not to reach my quota if it can be helped | 16:20 |
clarkb | fwiw in my email I hinted that after collecting feedback from anyone trying to use storyboard long term, and how we can support that, we could follow up with supporting alternatives | 16:21 |
clarkb | it does sound like the vast majority of people are looking at lp. | 16:22 |
clarkb | my high level goal was to get the discussions out of individual project comms and into a broader forum so that we could avoid duplication of work and ensure we weren't stepping on each others toes | 16:22 |
clarkb | maybe now is a good time to followup on discussion for those looking to move? and then we can worry about a call later if that is necessary | 16:23 |
frickler | at least for openstack projects LP makes most sense, there is some integration and there are a lot of projects already/still using it | 16:23 |
frickler | so maybe the next question would be: which project like storyboard and would enjoy continuing to use it? | 16:23 |
clarkb | well that was the initial question I posed | 16:24 |
clarkb | I was hoping to find help/support/aid for storyboard before anything else as that has the potential to vastly improve the storyboard situation | 16:25 |
johnsom | I think we have felt the pain of not having all projects on the same platform, so that is the motivator for LP over some other tool. | 16:25 |
clarkb | yes, I think one of the major pieces of feedback for openstack as a whole is that openstack should probably do its best to stay on a single platform | 16:26 |
clarkb | since much of the feedback to that thread indicated this was problematic. It does make me wonder if people aren't aware of LP's external bug tracker linking, but I don't think that would solve all the problems | 16:27 |
johnsom | I also think we have a "build" vs "buy" question here. Is there enough advantage to get support to spin up a zuul like effort or should we just use something that already exists. | 16:27 |
fungi | also the counterargument we've heard often is that people who are employed by canonical's competitors are often disinclined to report or work on bugs for openstack projects because that means they have to sign up for an "ubuntu" account (and this is apparently an emotional topic for some) | 16:28 |
fungi | putting our own central authentication system together addresses that concern for the services we're hosting, but would not solve it for launchpad | 16:29 |
johnsom | Yeah, I think that was one of the main goals of storyboard, to have the foundation logins work. | 16:33 |
*** marios|ruck is now known as marios|out | 16:33 | |
clarkb | johnsom: that's definitely something to account for and something that years ago people definitely felt was worthwhile. Part of bringing this up is acknowledging things have shifted (hence individual project planning) and needing to reevaluate. But I was still trying to answer the question of whether or not there was even interest first | 16:33 |
clarkb | I'm good with accepting no one has expressed interest yet so it is unlikely to occur in the future and taking the discussion from there, which is what next. I do think it may be a bit early for a conference call though and we can continue to discuss on the mailing list? | 16:34 |
*** dviroel|lunch is now known as dviroel | 16:35 | |
johnsom | Yeah, I don't have a need for a call personally. | 16:35 |
clarkb | it is worth noting that openstack's needs/demands/requirements largely drove the creation and development of storyboard. If openstack isn't pushing for that anymore and isn't providing maintainers/operators and is looking at moving projects back to lp then I think it is important that we reevaluate opendev's caretaker position too | 16:35 |
johnsom | I think what I shared was the top of mind perspective many on the Octavia team have. | 16:36 |
frickler | I don't like mails so I wanted to move to IRC for a bit, but feel free to send mails if you prefer. my main focus was to get things moving again, which it seems I've succeeded in | 16:36 |
clarkb | otherwise openstack will continue to complain at us for something that openstack orphaned and handed over. | 16:36 |
clarkb | frickler: I think it is important to keep this stuff on the mailing list as much as possible so that as many people as possible remain informed and don't feel decisions were made quickly on a conference call (this is the whole reason I pushed the discussion to the mailing list as it felt like openstack was essentially saying we don't want this thing anymore and not telling everyone | 16:37 |
johnsom | lol, well, I am not a fan of us-or-them, I still feel we are one community. | 16:37 |
clarkb | that was necessary to involve) | 16:37 |
clarkb | johnsom: I agree we are one community, but in this particular case it really feels like people have actively excluded us | 16:37 |
clarkb | and I'm reacting to that | 16:37 |
clarkb | which is why I'm pushing hard to keep discussion on the mailing list. I'll try to write a followup email today | 16:38 |
johnsom | Frankly, many of us were not aware of that email list. | 16:38 |
frickler | well afaict there is no "openstack", it is individual projects discussing things | 16:38 |
clarkb | johnsom: I spent a year telling people to subscribe and no one did and it is listed on opendev.org. | 16:38 |
fungi | less us-vs-them and more now-vs-then... the people who were insistent this solution was worthwhile are no longer around or have changed their minds | 16:38 |
clarkb | frickler: thats fair. But I think that is a problem too | 16:38 |
johnsom | Yeah, we knew about announce, but this one was missed somehow. | 16:38 |
clarkb | I think it is fine to have the discussions but they shouldn't be hidden away | 16:39 |
johnsom | Agreed | 16:39 |
frickler | I don't agree they are hidden, most discussions are documented well in the respective projects PTG notes | 16:39 |
johnsom | I was commenting on conference calls where details don't always get captured well. | 16:40 |
clarkb | frickler: I was unable to attend any of the discussion and I spent a fair bit of time pre ptg looking over etherpads to make a rough schedule so that I wouldn't miss stuff like this. It is possible that this comes down to a deficiency in how things get scheduled at the ptg. But even then I think discussions like this should go to a mailing list and be more async | 16:40 |
fungi | the concern raised was that different openstack subprojects having the same basic discussions in parallel may end up making different choices if they weren't made aware of those other discussions, and that could lead to even greater divergence | 16:41 |
frickler | that's why I was suggesting IRC meeting, not conference call like zoom or other | 16:41 |
johnsom | The topic wasn't planned in our session, but was added after another project raised the topic. | 16:42 |
frickler | fungi: I tried to cross connect those discussion where I was aware of them | 16:42 |
johnsom | frickler Thank you for the heads up about the email. | 16:42 |
clarkb | anyway I think the thread has shown that there is value in openstack maintaining consistency for bug tracking as one of the issues people have is the tool split. Given that I think it is even more important we try to centralize the discussion as much as possible and involve as many as possible | 16:42 |
clarkb | to me that means keeping things on the mailing list as long as possible is important | 16:42 |
clarkb | it keeps a public record of discussion and allows people to jump in asynchronously with their input | 16:43 |
frickler | when discussing a common strategy for openstack, using the openstack-discuss would likely be more appropriate though | 16:43 |
clarkb | agreed | 16:44 |
fungi | but initiating the discussion on an opendev mailing list at least gives other non-openstack users of the service more of a chance to see it and weigh in | 16:45 |
fungi | i agree, though, that openstack-specific decisions about it are best handled on openstack-discuss | 16:46 |
clarkb | I'll work on a followup to the thread to try and summarize ^ which is basically "it's been a bit with no additional feedback. We've not seen anyone indicate a desire to keep using storyboard and help maintain it. Storyboard was built for openstack needs/requirements and it seems like those have changed over time. What next for opendev and for openstack?" and we can split that | 16:47 |
clarkb | discussion off into openstack discuss for openstack specifics | 16:47 |
frickler | I also haven't seen any non-openstack responses, which is why I think explicitly addressing those with a direct question "would you like to continue using storyboard?" would also be helpful | 16:48 |
clarkb | ok. I'll incorporate that into my email. | 16:48 |
frickler | maybe also either do a single crosspost or send pointers to other project lists like zuul | 16:49 |
clarkb | well that's the sort of thing I've been trying to avoid. We spent a year doing that and telling people to subscribe to these lists if they wanted to be involved. We also attempted to get project liaisons. | 16:51 |
clarkb | I think it is great for openstack to have an openstack specific discussion with an end result that can be fed back to the original discussion but it gets incredibly confusing when people start posting concurrent discussion back and forth with missing emails and so on | 16:51 |
clarkb | Basically at some point I have to stop trying to cross post to the world. | 16:52 |
clarkb | I feel it is incredibly unfair to me in particular to be asked to do that in perpetuity. I did it for about a year with clear warnings I would not continue to do so | 16:53 |
clarkb | it creates a significant amount of busywork on my part | 16:53 |
frickler | another option might be to consider whether it is worth sending a mention of this discussion to service-announce, which seems to have a much larger audience. otherwise some people might only read the announcement of us telling that we can no longer run storyboard, which (trying to be as positive as possible) seems to be the only feasible result if nothing else happens | 16:57 |
clarkb | Yes, that may be worthwhile. A single email pointing people to the thread keeps -announce low volume but does potentially make more people aware | 16:57 |
clarkb | I can do that | 16:58 |
clarkb | I can use that as an opportunity to remind people of the three lists we maintain and what their purposes are | 16:58 |
frickler | I'll also point tc-members to what we just discussed, maybe one of them will have interest in taking over organizing the openstack side of the discussion | 17:03 |
slittle | Anyone else having gerrit issues this morning? | 17:27 |
slittle | Error submitting review TypeError: NetworkError when attempting to fetch resource | 17:28 |
clarkb | I've only done a couple reviews, but did not have problems. NetworkErrors can be one of many things. Might help to try and narrow it down (ipv4 vs ipv6, is one more reliable than the other? etc) | 17:29 |
clarkb | frickler: ok followup sent. Now to send the pointer email | 17:30 |
slittle | ipv4 | 17:32 |
frickler | slittle: do you have a complete traceback? | 17:34 |
frickler | (assuming that is part of git-review output) | 17:35 |
slittle | dns and ping to review.opendev.org are ok. review02.opendev.org is answering | 17:36 |
*** jpena is now known as jpena|off | 17:37 | |
slittle | my signin expired while writing my comment on https://review.opendev.org/c/starlingx/kernel/+/863603. | 17:38 |
clarkb | slittle: that may be related to network connectivity issues too | 17:38 |
clarkb | the gerrit UI sometimes interprets network errors as authentication issues. If you refresh (and the network problem isn't persistent) it comes back | 17:39 |
fungi | the string "fetch resource" doesn't appear in the git-review source, so that exception is likely getting raised by one of its dependencies | 17:39 |
slittle | yep, that did it | 17:40 |
fungi | actually i can't find "Error submitting" in the git-review source either | 17:42 |
clarkb | frickler: and now pointer email has been sent | 17:49 |
*** dviroel is now known as dviroel|afk | 19:40 | |
opendevreview | Ian Wienand proposed zuul/zuul-jobs master: [wip] enable-kuburnetes: debugging 22.04 https://review.opendev.org/c/zuul/zuul-jobs/+/863810 | 20:21 |
opendevreview | Jay Faulkner proposed openstack/project-config master: Allow Ironic cores to toggle WIP state https://review.opendev.org/c/openstack/project-config/+/863931 | 20:42 |
opendevreview | Michael Johnson proposed openstack/project-config master: Allow Designate cores to toggle WIP state https://review.opendev.org/c/openstack/project-config/+/863932 | 20:50 |
opendevreview | Michael Johnson proposed openstack/project-config master: Allow Octavia cores to toggle WIP state https://review.opendev.org/c/openstack/project-config/+/863934 | 20:55 |
opendevreview | Jay Faulkner proposed openstack/project-config master: Allow Ironic cores to toggle WIP state https://review.opendev.org/c/openstack/project-config/+/863931 | 21:15 |
opendevreview | Michael Johnson proposed openstack/project-config master: Allow Designate cores to toggle WIP state https://review.opendev.org/c/openstack/project-config/+/863932 | 21:18 |
opendevreview | Michael Johnson proposed openstack/project-config master: Allow Designate cores to toggle WIP state https://review.opendev.org/c/openstack/project-config/+/863932 | 21:20 |
opendevreview | Ghanshyam proposed opendev/irc-meetings master: Update TC weekly meeting Day & time https://review.opendev.org/c/opendev/irc-meetings/+/863939 | 21:21 |
clarkb | infra-root heads up I'm working on testing the server rescue paths for regular disk and bfv nodes in vexxhost ca-ymq-1 | 21:23 |
clarkb | I think if that all works out I may just need to push a docs update to document the process for us? | 21:23 |
opendevreview | Michael Johnson proposed openstack/project-config master: Allow Octavia cores to toggle WIP state https://review.opendev.org/c/openstack/project-config/+/863934 | 21:24 |
opendevreview | Michael Johnson proposed openstack/project-config master: Allow Designate cores to toggle WIP state https://review.opendev.org/c/openstack/project-config/+/863932 | 21:24 |
clarkb | infra-root this is really interesting. I've rescued a normal disk instance and it booted into my test nodes root / using the rescue instance's kernel | 21:29 |
clarkb | I did not expect that | 21:29 |
clarkb | I did use our test images as the base image for the test node though. Maybe our boot by label is winning? | 21:29 |
opendevreview | Ian Wienand proposed zuul/zuul-jobs master: [wip] enable-kubernetes: check pod is actually running https://review.opendev.org/c/zuul/zuul-jobs/+/863810 | 21:29 |
opendevreview | Ian Wienand proposed zuul/zuul-jobs master: ensure-kubernetes: move testing into common path https://review.opendev.org/c/zuul/zuul-jobs/+/863940 | 21:29 |
clarkb | (I did that because it made the root disk content distinct enough to know if the rescue was working but there are other ways to check that) | 21:30 |
ianw | so basically that doesn't help us if the root disk init is borked right? | 21:31 |
clarkb | right I think this is undesirable behavior | 21:31 |
clarkb | but it may be that grub is finding our cloudimg labeled disk on the test nodes and it is winning? | 21:32 |
clarkb | I'm going to boot a new test instance based on the cloud provided image and see if the behavior changes | 21:32 |
ianw | that could be the case, if the rescue instance boots with the same kernel args as a regular instance i guess. if both disks are attached, i guess it would find the LABEL=cloudimg...whatever we call it | 21:33 |
ianw | but that wouldn't happen with control plane nodes, though? | 21:33 |
clarkb | yes both appear to be present | 21:34 |
clarkb | and it is finding the rescue images kernel | 21:34 |
clarkb | but that kernel is running with / mounted from what we are trying to rescue | 21:34 |
clarkb | and now I've unrescued and it booted its normal kernel again | 21:35 |
clarkb | (I'm glad I used different ubuntu versions to catch that) | 21:35 |
clarkb | using the regular cloud images produces the same result | 21:41 |
clarkb | the node reboots using the rescue image kernel but / is from the node being rescued | 21:41 |
clarkb | its weird that it can find the rescue image kernel but then somehow selects /dev/vdb1 to mount as / instead of /dev/vda1 | 21:42 |
clarkb | melwitt: mnaser__ ^ that behavior is really surprising to me. Do ya'll know if that is expected from the nova or cloud side of things? | 21:43 |
clarkb | I'm going to test bfv next to see if it does the same thing | 21:44 |
mnaser__ | uh thats weird | 21:45 |
mnaser__ | from nova's pov, it will boot the vm with the rescue image as vda and original as vdb | 21:45 |
mnaser__ | the interaction once it boots isn't controlled, but i suspect the issue is in the label=cloudimg | 21:45 |
clarkb | mnaser__: yup that is what I see in our testing. The weird thing is / is mounted from vdb not vda | 21:46 |
clarkb | mnaser__: ya it looks like both the cloud (vexxhost) provided image and our test node images built with dib set cloudimg-rootfs | 21:47 |
clarkb | and maybe we're seeing behavior where the initramfs is having to pick one of the two? | 21:48 |
clarkb | I wonder if nova needs to take an extra step here to force a device somehow | 21:48 |
melwitt | what version of nova is this? | 21:48 |
clarkb | maybe by attaching the vdb volume post boot | 21:49 |
clarkb | s/volume/device/ (it may not be a volume) | 21:49 |
melwitt | asking bc I wondered if https://specs.openstack.org/openstack/nova-specs/specs/ussuri/implemented/virt-rescue-stable-disk-devices.html could be related | 21:49 |
clarkb | that definitely seems related | 21:50 |
clarkb | melwitt: is that spec essentially proposing that clouds have rescue specific images that can have properties that help ensure the correct behavior? | 21:51 |
melwitt | clarkb: I'm still trying to parse the spec as well ... unfortunately don't know much about rescue. this spec was implemented in ussuri | 21:53 |
clarkb | melwitt: "The rescue root device will be added as the last device in the configuration, but will be marked as bootable for the BIOS, so it takes priority over the existing root device. This relies on KVM/QEMU supporting the “bootindex” parameter, which all supported versions do." that bit doesn't appear to happen here as we have /dev/vdb1 mounted. That implies to me the rescue | 21:54 |
clarkb | image was not added last (but I guess linux may not guarantee the ordering to be stable?) | 21:54 |
clarkb | I think what is happening here is both the rescue image and the "prod" image have label set to cloudimg-rootfs on their respective / then the kernel boot lines for grub say to mount / using that label and its finding one or the other | 21:55 |
clarkb | a fix for that behavior outside of nova may be to use a rescue specific image that mounts something other than label=cloudimg-rootfs | 21:55 |
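A diagnostic sketch of the ambiguity clarkb describes (commands would be run from the rescued node): when both the rescue disk and the original root filesystem carry the same `cloudimg-rootfs` label, a `root=LABEL=cloudimg-rootfs` kernel argument can resolve to either device.

```shell
# Show the kernel command line, which typically contains root=LABEL=...
cat /proc/cmdline
# List every block device with its filesystem label and mountpoint, to
# spot two devices sharing the cloudimg-rootfs label.
lsblk -o NAME,LABEL,MOUNTPOINT
# Print whichever device the label currently resolves to.
blkid -L cloudimg-rootfs
```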
clarkb | ok I've also now confirmed that you cannot rescue bfv nodes | 22:02 |
clarkb | both of these things seem like problems? I guess with a bfv node you can shut it down, detach the root disk, boot a new instance and then attach the volume for the other node? | 22:03 |
opendevreview | Merged opendev/system-config master: Reference bastion through prod_bastion group https://review.opendev.org/c/opendev/system-config/+/862845 | 22:03 |
clarkb | melwitt: ^ do you have any idea if that sort of process is what nova expects people to do for boot from volume nodes? | 22:03 |
melwitt | clarkb: is this cloud older than ussuri? bc both of those things appeared to have been addressed in ussuri https://specs.openstack.org/openstack/nova-specs/specs/ussuri/implemented/virt-bfv-instance-rescue.html | 22:04 |
melwitt | ok, for bfv the microversion must be passed to get the new behavior https://docs.openstack.org/nova/latest/user/rescue.html | 22:05 |
clarkb | melwitt: I don't know what version the cloud is. This is vexxhost's montreal public cloud region | 22:12 |
clarkb | ok it doesn't look like openstackclient exposes that to me. The only options are the image to rescue with and the password. It probably could pass in a microversion new enough to do that if the cloud supports it otherwise return an error? | 22:13 |
clarkb | oh in this case the error comes from the cloud. So maybe what the client should do is try the newer microversion if available otherwise fall back to default microversion and pass any errors through? | 22:14 |
melwitt | you passed the microversion like this openstack --os-compute-api-version 2.87 server rescue SERVER | 22:14 |
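Putting melwitt's suggestion together with the nova rescue docs linked above, a sketch of the full sequence (SERVER and RESCUE_IMAGE are placeholders; assumes the cloud's nova supports microversion 2.87, which enables the stable-device and boot-from-volume rescue behavior):

```shell
# Request a rescue with an explicit microversion and rescue image.
openstack --os-compute-api-version 2.87 \
    server rescue --image RESCUE_IMAGE SERVER
# ...perform repairs from the rescue environment...
# Then return the server to its normal boot disk.
openstack --os-compute-api-version 2.87 server unrescue SERVER
```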
melwitt | or are you using the old novaclient | 22:15 |
clarkb | no this should be latest openstackclient. I think the issue is I'm doing openstack server rescue --help and that doesn't say anything about microversions. | 22:15 |
clarkb | as an end user if --help on the command I want to run doesn't give me the help I need I think that is a flaw :) | 22:15 |
melwitt | openstackclient historically has defaulted to the lowest microversion, so unfortunately has to be specified like that currently | 22:16 |
clarkb | but also I shouldn't have to specify a microversion if a newer one is necessary to complete the request I've made | 22:16 |
clarkb | the tool should just know that and do it for me | 22:16 |
melwitt | clarkb: you are right, that is a flaw if the help doesn't have that info | 22:16 |
clarkb | does anyone know if you can do discovery through the openstack client? Looks like catalog show and list give me the default microversion info (2.1) | 22:19 |
melwitt | I don't think it's possible ... but I checked the doc and found you can at least see what range is available per service with 'openstack versions show' | 22:26 |
opendevreview | Ian Wienand proposed zuul/zuul-jobs master: [wip] enable-kubernetes: check pod is actually running https://review.opendev.org/c/zuul/zuul-jobs/+/863810 | 22:26 |
clarkb | I discovered too that you can fetch the root of the nova api endpoint from the catalog (just drop the version specific stuff) and it gives you similar info | 22:27 |
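The two discovery approaches mentioned above can be sketched as follows (the compute endpoint URL is a placeholder; the unversioned API root returns a JSON document listing supported versions and microversion ranges):

```shell
# Ask openstackclient for the version ranges each service advertises.
openstack versions show --service compute
# Or fetch the unversioned nova API root directly and pretty-print it.
curl -s https://compute.example.cloud/ | python3 -m json.tool
```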
*** dasm is now known as dasm|off | 22:27 | |
clarkb | I used version 2.88 since that is the newest supported | 22:29 |
clarkb | and no error now | 22:29 |
clarkb | however that then failed with an error; server show on the instance shows "cannot be rescued: Driver Error: Cannot access storage file" and an info line with paths and uids and stuff | 22:30 |
clarkb | unrescue also fails because the instance is in an error state | 22:31 |
clarkb | mnaser__: ^ should I try deleting the instance (60d798a1-f7f5-4c65-8711-7654040ca180) or is that something you might be interested in looking at and I should leave it be? | 22:32 |
mnaser__ | clarkb: if you can send me a paste of the error and you can wipe it after | 22:32 |
clarkb | the instance deleted, but the volume did not. I can't remember if --boot-from-volume XY implies delete on termination for the volume or not so this may be expected. | 22:38 |
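A hedged sketch relevant to the question above (volume and flavor names are placeholders): in my understanding `--boot-from-volume` leaves the created volume behind on instance deletion by default, whereas an explicit block device mapping in the format `<dev-name>=<id>:<type>:<size(GB)>:<delete-on-terminate>` can mark the root volume for deletion with the server.

```shell
# Boot from a pre-created volume and ask nova to delete it when the
# server is deleted (the trailing "true" is delete-on-terminate).
openstack server create --flavor m1.small \
    --block-device-mapping vda=my-boot-volume:volume::true \
    my-bfv-server
```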
*** cloudnull6 is now known as cloudnull | 22:38 | |
* clarkb manually deletes it | 22:38 | |
clarkb | I believe all the resources I created to test this stuff have been cleaned up now. And we learned some good info | 22:41 |
opendevreview | Ian Wienand proposed zuul/zuul-jobs master: [wip] enable-kubernetes: check pod is actually running https://review.opendev.org/c/zuul/zuul-jobs/+/863810 | 23:19 |
clarkb | looks like the snapd removal change landed at some point | 23:38 |
clarkb | I guess no news is good news on that one :) | 23:38 |
opendevreview | Ian Wienand proposed zuul/zuul-jobs master: [wip] enable-kubernetes: check pod is actually running https://review.opendev.org/c/zuul/zuul-jobs/+/863810 | 23:42 |
clarkb | I'm just about ready to send out the meeting agenda. Anything else to add to it? | 23:43 |
opendevreview | Ian Wienand proposed zuul/zuul-jobs master: [wip] enable-kubernetes: check pod is actually running https://review.opendev.org/c/zuul/zuul-jobs/+/863810 | 23:52 |