janders | question: is it possible to do "live" development on a booted IPA? If so, do I need to reload IPA service to make changes take effect? Trying to debug a couple things, it would be much quicker this way than if I keep coding locally, push the change, rebuild IPA and reboot the node into the new image... | 00:07 |
janders | ^ and importantly how do I reload IPA service without making the whole thing fall over? | 00:07 |
JayF | There used to be a `--standalone` flag for IPA, which made it not connect to the Ironic API, and before agent tokens, was useful for developing cleaning/deploy steps | 00:12 |
JayF | nowadays, I don't know what folks do for that kinda dev | 00:13 |
JayF | but restarting IPA service on the ramdisk is *never* going to work properly :( | 00:13 |
janders | JayF noted, thank you. Two follow-up questions: 1) Would kill -HUP be any good? 2) I suppose if I edit hardware.py, it likely won't have an effect on the running service, right? | 00:19 |
JayF | Nope :( | 00:19 |
JayF | You're trying to troubleshoot something in a hardware manager? | 00:20 |
janders | I'm trying to finish https://review.opendev.org/c/openstack/ironic-python-agent/+/818712 | 00:20 |
JayF | You can try to emulate standalone mode thru code edits; remove the callbacks to ironic, and checks for the agent token, then you should just be left with an IPA bringing up an API you can call (as if you were Ironic) | 00:20 |
janders | couple bits aren't working and if I could tweak the code on a live system it would make the development cycle much shorter | 00:21 |
janders | but it is likely basic stuff so trying to create a framework for this is likely harder than the problem itself | 00:21 |
janders | I was meaning to finish this patch a long time ago but downstream duties took priority | 00:22 |
JayF | the other option, just get on a box you can test it on | 00:22 |
JayF | load it into a pythonpath | 00:22 |
JayF | and call the method(s) in question from the python cli | 00:22 |
JayF | but that gets dangeresque when you're working on an app designed to mutilate your data | 00:22 |
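[Editor's note: JayF's suggestion above — put the code on a Python path and call the methods from the CLI — can be sketched as follows. A real session would point `sys.path` at an ironic-python-agent checkout and import `ironic_python_agent.hardware`; here a throwaway module stands in so the pattern is runnable anywhere.]

```python
import os
import sys
import tempfile

# Stand-in for an ironic-python-agent checkout: a temp dir with one module.
tree = tempfile.mkdtemp()
with open(os.path.join(tree, 'fake_hardware.py'), 'w') as f:
    f.write("def list_block_devices():\n"
            "    return ['/dev/sda', '/dev/nvme0n1']\n")

# Equivalent of `PYTHONPATH=/path/to/checkout python`.
sys.path.insert(0, tree)
import fake_hardware  # in real life: from ironic_python_agent import hardware

# Call the method under test directly, as if from the interactive CLI.
print(fake_hardware.list_block_devices())  # → ['/dev/sda', '/dev/nvme0n1']
```

As JayF notes, on real hardware this is only safe if the methods you poke at are read-only; anything that erases or partitions disks should stay on a disposable test box.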
janders | :D yeah | 00:23 |
janders | I can think of ways but again likely harder than the problem I am trying to fix | 00:23 |
janders | (if I booted this off a read-only NFS on a test node and tweaked the code on the NFS server, that would likely do what I want) | 00:24 |
janders | and the code won't harass ro devices obviously | 00:24 |
janders | but I think I better add some unit tests and that may fix my problems | 00:24 |
janders | The problem is a "TypeError: 'BlockDevice' object is not iterable" in https://paste.openstack.org/show/b24ZbE3rYFX2noPciv9Y/ | 00:26 |
janders | not sure whether it's in _list_erasable_devices or erase_devices_express, guessing the latter | 00:26 |
janders | will have a harder look at the code :) | 00:27 |
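[Editor's note: a minimal reproduction of the error discussed above. `BlockDevice` here is a stand-in for the IPA class of the same name, and the shape of the bug is a guess: a helper returning a single device where the caller expects an iterable of devices.]

```python
class BlockDevice:
    """Stand-in for ironic_python_agent.hardware.BlockDevice."""
    def __init__(self, name):
        self.name = name

def list_erasable_devices_buggy():
    return BlockDevice('/dev/sda')      # bug: bare object, not a list

def list_erasable_devices_fixed():
    return [BlockDevice('/dev/sda')]    # fix: always return a list

# Iterating the bare object raises the TypeError from the paste.
try:
    for dev in list_erasable_devices_buggy():
        pass
except TypeError as exc:
    print(exc)  # → 'BlockDevice' object is not iterable

# With the list wrapper, iteration works as the caller expects.
names = [dev.name for dev in list_erasable_devices_fixed()]
print(names)  # → ['/dev/sda']
```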
janders | biggest issue is I had a several-month break from looking into bits like this, so I'm a bit rusty | 00:27 |
janders | mostly working on RedFishy stuff these days | 00:27 |
TheJulia | janders: I've done it in the past, but mainly by short circuiting ironic so it never proceeded | 00:36 |
opendevreview | Jacob Anders proposed openstack/ironic-python-agent master: Improve efficiency of storage cleaning in mixed media envs https://review.opendev.org/c/openstack/ironic-python-agent/+/818712 | 01:08 |
opendevreview | Merged openstack/networking-baremetal master: Set agent_type in tests https://review.opendev.org/c/openstack/networking-baremetal/+/832697 | 01:28 |
opendevreview | Jacob Anders proposed openstack/ironic-python-agent master: Improve efficiency of storage cleaning in mixed media envs https://review.opendev.org/c/openstack/ironic-python-agent/+/818712 | 01:36 |
janders | JayF TheJulia looks like I fixed it, just had a successful test run. Thanks for the hints anyway. | 02:03 |
opendevreview | Merged openstack/networking-baremetal stable/yoga: Set agent_type in tests https://review.opendev.org/c/openstack/networking-baremetal/+/832858 | 06:35 |
opendevreview | Merged openstack/networking-baremetal stable/yoga: Update .gitreview for stable/yoga https://review.opendev.org/c/openstack/networking-baremetal/+/832572 | 07:04 |
opendevreview | Merged openstack/networking-baremetal stable/yoga: Update TOX_CONSTRAINTS_FILE for stable/yoga https://review.opendev.org/c/openstack/networking-baremetal/+/832573 | 07:04 |
arne_wiebalck | Good morning, Ironic! | 07:32 |
rpittau | good morning ironic! o/ | 07:57 |
opendevreview | Riccardo Pittau proposed openstack/ironic-python-agent master: Add non-voting dib CentOS Stream 9 job https://review.opendev.org/c/openstack/ironic-python-agent/+/832947 | 08:11 |
dtantsur | morning folks | 08:18 |
rpittau | hey dtantsur :) | 08:19 |
dtantsur | FYI folks, the glance change has merged for https://review.opendev.org/c/openstack/ironic/+/826930, ready for approval now | 08:21 |
janders | good morning arne_wiebalck rpittau dtantsur and Ironic o/ | 09:13 |
rpittau | hey janders :) | 09:14 |
janders | arne_wiebalck I got https://review.opendev.org/c/openstack/ironic-python-agent/+/818712 (hybrid NVMe/HDD cleaning) to a working state. There's still some finishing off required but I would be keen to know 1) if you are in a position to test it and 2) what you think about the general approach. I tested it in our lab and it seems to work well. Let me know and I will pass the config required to save you digging. | 09:16 |
janders | rpittau thank you for your review and suggestion regarding ^^. Will work on unit tests next and will add that log line. | 09:17 |
opendevreview | Verification of a change to openstack/networking-baremetal master failed: Add Python3 zed unit tests https://review.opendev.org/c/openstack/networking-baremetal/+/832575 | 09:26 |
hjensas | I'm seeing an issue where conductor moves to provision state "inspect failed" for no apparent reason. https://paste.opendev.org/ | 10:04 |
hjensas | I'm using virtual switch appliances, performance is very poor, but I've bumped timeouts to 7200 and that happens after "just" 30ish minutes | 10:06 |
hjensas | oh, proper paste link: https://paste.opendev.org/show/bxFTmX0ofs7cDgNeAtm2/ | 10:07 |
hjensas | These are my timeout settings: | 10:09 |
hjensas | crudini --set --existing /etc/ironic-inspector/inspector.conf DEFAULT timeout 7200 | 10:09 |
hjensas | crudini --set --existing /etc/ironic/ironic.conf conductor deploy_callback_timeout 7200 | 10:09 |
hjensas | crudini --set --existing /etc/ironic/ironic.conf pxe boot_retry_timeout 7200 | 10:09 |
hjensas | hm, there is an event failed: | 10:15 |
hjensas | Mar 09 21:33:17 openstack ironic-conductor[127396]: DEBUG ironic.common.states [None req-31a60bcb-2a09-4e25-bc9a-63d1406d91e7 None None] Exiting old state 'inspect wait' in response to event 'fail' {{(pid=127396) on_exit /opt/stack/ironic/ironic/common/states.py:325}} | 10:15 |
hjensas | oh, inspect_wait_timeout is probably what I need to bump. | 10:30 |
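[Editor's note: the default for `[conductor]inspect_wait_timeout` in ironic.conf is 1800 seconds, which lines up with the ~30-minute failures above. A hedged sketch of the missing bump, in the same crudini style as the settings already shown:]

```shell
# inspect_wait_timeout governs how long a node may sit in "inspect wait"
# before the conductor fails it; raise it to match the other 7200s timeouts.
crudini --set --existing /etc/ironic/ironic.conf conductor inspect_wait_timeout 7200
echo "inspect_wait_timeout bumped"
```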
janders | arne_wiebalck dtantsur rpittau would you have some time to give me a couple hints how to go about unit tests for https://review.opendev.org/c/openstack/ironic-python-agent/+/818712? | 10:35 |
janders | It would be awesome to have this in Yoga, but I think I need to move reasonably quickly right? | 10:35 |
dtantsur | right | 10:38 |
dtantsur | what's the problem with tests? | 10:38 |
dtantsur | hjensas: yep | 10:38 |
janders | dtantsur they do not yet exist | 10:39 |
janders | and looking at the existing tests I am not too sure where to start to get this done quickly | 10:39 |
janders | thinking what would be the simplest sufficient set | 10:40 |
janders | and which existing ones could it be based on | 10:40 |
janders | Unit tests aren't my strength but with test_hardware.py in IPA in particular there's a lot of mocks and other structures that need to be fabricated in exactly the right way | 10:41 |
dtantsur | janders: you pretty much need to mock _list_erasable_devices(), _nvme_erase() and destroy_disk_metadata() | 10:43 |
dtantsur | ah and _ata_erase() | 10:44 |
janders | do I need to make a list of block_devices with NVMes, HDDs, etc and feed it to the tests to see if the right cleaning mode is triggered for each type? | 10:45 |
janders | ( e.g. https://opendev.org/openstack/ironic-python-agent/src/branch/master/ironic_python_agent/tests/unit/test_hardware.py#L2107 ) | 10:47 |
dtantsur | janders: exactly | 10:49 |
dtantsur | I think we have plenty of examples in the existing tests | 10:49 |
dtantsur | (you probably only need a mock with a name though) | 10:49 |
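[Editor's note: the mocking pattern dtantsur describes can be sketched as below. The method names (`_list_erasable_devices`, `_nvme_erase`, `destroy_disk_metadata`, `erase_devices_express`) come from the discussion above; the `Manager` class is a self-contained stand-in for IPA's hardware manager so the test runs without ironic-python-agent installed, and its dispatch logic is an assumption, not the real implementation.]

```python
import unittest
from unittest import mock

class FakeBlockDevice:
    """Stand-in block device; per dtantsur, only the name matters here."""
    def __init__(self, name):
        self.name = name

class Manager:
    """Stand-in for the hardware manager under test."""
    def _list_erasable_devices(self):
        raise NotImplementedError

    def _nvme_erase(self, dev):
        raise NotImplementedError

    def destroy_disk_metadata(self, dev, node_uuid):
        raise NotImplementedError

    def erase_devices_express(self, node, ports):
        # Assumed dispatch: NVMe devices get an NVMe erase, everything
        # else falls back to metadata destruction.
        for dev in self._list_erasable_devices():
            if dev.name.startswith('/dev/nvme'):
                self._nvme_erase(dev)
            else:
                self.destroy_disk_metadata(dev, node['uuid'])

class TestEraseDevicesExpress(unittest.TestCase):
    @mock.patch.object(Manager, 'destroy_disk_metadata', autospec=True)
    @mock.patch.object(Manager, '_nvme_erase', autospec=True)
    @mock.patch.object(Manager, '_list_erasable_devices', autospec=True)
    def test_mixed_media(self, mock_list, mock_nvme, mock_meta):
        nvme = FakeBlockDevice('/dev/nvme0n1')
        hdd = FakeBlockDevice('/dev/sda')
        mock_list.return_value = [nvme, hdd]  # a list, not a bare device
        mgr = Manager()
        mgr.erase_devices_express({'uuid': 'abcd'}, [])
        mock_nvme.assert_called_once_with(mgr, nvme)
        mock_meta.assert_called_once_with(mgr, hdd, 'abcd')
```

The real tests in `test_hardware.py` follow the same shape: mock the helpers, feed a mixed device list, and assert the right erase path fired for each device type.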
dtantsur | iurygregory: hey, when are we planning the final Yoga releases? | 11:37 |
dtantsur | we got a bit of a delay with sprint 2, so it seems from https://releases.openstack.org/yoga/schedule.html that the final releases are this or next week? or? | 11:37 |
iurygregory | good morning ironic o/ | 11:42 |
iurygregory | hey dtantsur o/ | 11:42 |
iurygregory | Mar 21 - Mar 25 (R-1): Final RCs and intermediary releases | 11:43 |
iurygregory | this would be the deadline we have | 11:43 |
dtantsur | yeah, that's the very last deadline | 11:44 |
iurygregory | we can focus on reviewing things we want to be included in the cycle till middle of next week | 11:44 |
iurygregory | and try to cut stable/yoga | 11:44 |
dtantsur | ack, I think it's sensible | 11:44 |
iurygregory | I will be keeping an eye on ironic, inspector, ipa, bifrost | 11:44 |
dtantsur | I think I have quite a few bifrost patches still :) and a few ironic ones | 11:45 |
iurygregory | yeah =) | 11:45 |
iurygregory | today I will be reviewing things | 11:45 |
iurygregory | Everyone, please take some time and evaluate the proposals for our PTG slots http://lists.openstack.org/pipermail/openstack-discuss/2022-March/027647.html =) | 11:46 |
opendevreview | Jacob Anders proposed openstack/ironic-python-agent master: Improve efficiency of storage cleaning in mixed media envs https://review.opendev.org/c/openstack/ironic-python-agent/+/818712 | 11:57 |
mgoddard | morning, I have a question about Ironic restarts. How well does Ironic conductor cope with restarts when there are transient operations (deploy, clean, etc) these days? | 11:59 |
dtantsur | mgoddard: 'wait' operations are kept, '-ing' operations are aborted | 12:16 |
mgoddard | dtantsur: ack, thanks | 12:16 |
*** pmannidi is now known as pmannidi|Away | 12:17 |
opendevreview | Jacob Anders proposed openstack/ironic-python-agent master: Improve efficiency of storage cleaning in mixed media envs https://review.opendev.org/c/openstack/ironic-python-agent/+/818712 | 12:36 |
opendevreview | Jacob Anders proposed openstack/ironic-python-agent master: Improve efficiency of storage cleaning in mixed media envs https://review.opendev.org/c/openstack/ironic-python-agent/+/818712 | 13:25 |
opendevreview | Jacob Anders proposed openstack/ironic-python-agent master: Improve efficiency of storage cleaning in mixed media envs https://review.opendev.org/c/openstack/ironic-python-agent/+/818712 | 13:27 |
* TheJulia tries to wake up | 14:48 |
dtantsur | TheJulia: don't | 14:49 |
dtantsur | :) | 14:49 |
TheJulia | does this mean I should just go get a pillow and sleep at my desk? | 14:50 |
dtantsur | sounds good! | 14:51 |
TheJulia | dtantsur: does that mean all my outstanding patches will be reviewed today?! | 14:52 |
dtantsur | probably? you never know before you try | 14:53 |
TheJulia | touche | 14:53 |
TheJulia | oooh ipa patch has review feedback | 14:55 |
dtantsur | see? it works! | 15:02 |
opendevreview | Julia Kreger proposed openstack/ironic-python-agent master: Create fstab entry with appropriate label https://review.opendev.org/c/openstack/ironic-python-agent/+/831029 | 15:04 |
TheJulia | rloo: rpittau ^^^ review feedback addressed | 15:04 |
rloo | TheJulia: +2 :) | 15:09 |
opendevreview | Julia Kreger proposed openstack/ironic stable/queens: Remove legacy experimental jobs https://review.opendev.org/c/openstack/ironic/+/827713 | 15:12 |
dtantsur | iurygregory, TheJulia, FYI I'm writing a response to dansmith's email with my -2 on prolonging grenade and deprecation period to 1 year skip-release. | 15:14 |
TheJulia | dtantsur: There is a balance that can be achieved, but projects will have to be smarter about not breaking users. i.e. "oh, you're trying to do this, this should be that" sort of logic which makes the service a bit more graceful | 15:16 |
dtantsur | well, we have been doing it | 15:17 |
TheJulia | but not everyone has been doing it, and we could be a little better by carrying some migration stuff more than one cycle when it is in place | 15:17 |
dtantsur | well, the conversation is basically about switching to a one-year cadence, except that we also have half-year releases | 15:17 |
iurygregory | dtantsur, ack, I've added to my list to read the email today since it has the PTL tag | 15:17 |
dtantsur | which is the worst of both worlds | 15:18 |
iurygregory | wow | 15:18 |
iurygregory | O.o | 15:18 |
dtantsur | realistically, we *barely* keep the current CI working | 15:18 |
TheJulia | dtantsur: well, yeah. Nobody was willing to concede their position in the TC from their extremes | 15:19 |
TheJulia | so we somehow ended up in to that, which in a sense is *kind* of what we've already been doing | 15:19 |
dtantsur | well... I was not going to have 1 year deprecation e.g. for netboot | 15:19 |
dansmith | dtantsur: FWIW, aside from one nova broken sanity check, wallaby->yoga "just worked" with no changes to any of the projects that grenade runs by default | 15:19 |
TheJulia | for some reason, there is a whole set of people who only want to ever release every six months | 15:19 |
* TheJulia doesn't grok it | 15:19 |
dtantsur | dansmith: I won't be surprised if Ironic works too. but that's not something I want to commit to in the form of a voting job. | 15:20 |
dtantsur | we're very limited in resources, with half of this team only caring about ~ 1 year of lifetime | 15:20 |
TheJulia | (and that six months cycle has a complex loaded history which will make me depressed expressing) | 15:20 |
dansmith | dtantsur: so you were against 1yr cycles? | 15:20 |
dtantsur | dansmith: I'm not *that* much against 1yr cycles as I'm against a hybrid approach | 15:20 |
dtantsur | I may not even be against 1yr cycles at all if we can still produce our lightweight intermediate releases | 15:21 |
dansmith | dtantsur: of course you can, so I'm not sure why this is really any different :) | 15:21 |
TheJulia | dtantsur: ++ on limited resources, keeping CI working in older releases can be a huge lift when things change | 15:21 |
dansmith | keeping things working across a 1yr gap is pretty much the same regardless of if there's an intermediate release or two in between no? | 15:22 |
dtantsur | dansmith: the promise is very different. bugfix branches are maintained in a similar vein to EM branches: they're just a common place to share patches | 15:22 |
dtantsur | with e.g. no grenade and limited CI coverage | 15:22 |
dansmith | okay I still don't see how the hybrid approach is substantially more work | 15:22 |
TheJulia | If we skip major stable backports on the "upstream intermediates" then that actually reduces our burden after a year | 15:23 |
dtantsur | dansmith: much more CI, much longer lifetime | 15:23 |
dansmith | dtantsur: it's not a longer lifetime, at all | 15:23 |
dtantsur | unless normal grenade is gone (which is explicitly not the case) and the lifetime of "tock" releases is much shorter | 15:23 |
dtantsur | I'm explaining the difference with bugfix branches | 15:23 |
dtantsur | they come with basically no lifetime guarantee | 15:24 |
dtantsur | TheJulia: the current stable policy disallows us skipping backports (for good reasons) | 15:24 |
dtantsur | we do skip backports for bugfix branches - another example of how they are best-effort | 15:24 |
TheJulia | dtantsur: maybe it is time that policy be revisited? | 15:25 |
dtantsur | TheJulia: if we stop supporting upgrades - sure | 15:25 |
dtantsur | otherwise there is a very good reason for not skipping backports: to avoid regressions on upgrade | 15:25 |
TheJulia | upgrades from, or stopping over on that release? | 15:25 |
dtantsur | if you backport a change from CC to AA skipping BB, then there will be a regression on AA -> BB | 15:26 |
TheJulia | I believe the desire is to move more towards skipleveling | 15:26 |
dtantsur | and since we support upgrades to/from "tock" releases, it's a problem | 15:26 |
TheJulia | well, from tock to tock within bounds should be fine if the job is voting | 15:26 |
TheJulia | but beyond that tock remaining tocks drop off | 15:27 |
TheJulia | I can see your point there | 15:27 |
TheJulia | if it is kept rolling forever, yeah | 15:27 |
iurygregory | but we will also have grenade voting, so it would be Dmitry's concern if I understood? | 15:27 |
TheJulia | somewhere, at some point, the tocks need to die quickly | 15:27 |
dtantsur | iurygregory: yeah, 2 grenades now: normal and skip-release | 15:28 |
iurygregory | yeah =( | 15:28 |
TheJulia | well, last commit and last release | 15:28 |
TheJulia | last major release | 15:28 |
dansmith | the tocks only ever need to support upgrading to/from adjacent ticks, you can't skip from one tock to another tock | 15:28 |
TheJulia | on the openstack stable model | 15:28 |
dtantsur | dansmith: sure, I was explaining why skipping backports is a problem | 15:28 |
iurygregory | I think we can say we will have a bomb instead of grenades lol =X | 15:28 |
TheJulia | lets not talk about bombs | 15:29 |
dtantsur | as to killing tocks quickly... we already barely support N-3 | 15:29 |
TheJulia | enough people are worried about the current state of geopolitical affairs. | 15:30 |
TheJulia | we need a word for "bringer of sadness" | 15:30 |
TheJulia | perhaps something in german? | 15:30 |
TheJulia | Label every single job that | 15:30 |
dtantsur | Scheissursache? dunno, arne_wiebalck can invent a better one | 15:30 |
TheJulia | :) | 15:30 |
iurygregory | yeah | 15:31 |
TheJulia | Well, the burden now is we quite literally jump across every release and we explicitly expect people to do the same when upgrading. Reality is... people don't always (and the fact they don't explode is good now), but our testing doesn't represent or account for it, and we've been a bit aggressive in even trying to enable it to be a thing in the past | 15:31 |
dtantsur | honestly, the openshift approach to upgrades (upgrade often and in smaller chunks) is growing on me | 15:32 |
TheJulia | moving forward, a job, even non-voting can start to move us on that path as long as we eventually enable it to be voting | 15:32 |
TheJulia | and then remove it after so long | 15:32 |
dtantsur | TheJulia: this is a good argument for a 1 year cycle | 15:32 |
dtantsur | not sure why keep "tock" releases then | 15:32 |
TheJulia | yeah | 15:33 |
dtantsur | projects that are consumed often (ironic, swift?) can keep bugfix releases with whatever promises they define | 15:33 |
dansmith | dtantsur: I really don't understand -- you want 1yr cycles but you also want to do lots of intermediate ones? but don't want tocks? :) | 15:33 |
dtantsur | dansmith: I don't want to commit to too much. if we want 1yr cycles - fine. let's just do 1yr cycles. | 15:33 |
dansmith | you still need to test from the previous major release to all those intermediate ones no? | 15:33 |
TheJulia | The whole frustrating thing of this is really the coordinated release is more about problems and needs unrelated to the software's existence and testing | 15:33 |
dtantsur | dansmith: *I* do not. | 15:33 |
dtantsur | that's simply not our mode of operation | 15:34 |
dtantsur | (our = metal3/openshift) | 15:34 |
iurygregory | yeah | 15:34 |
iurygregory | we need the bugfix to be able to have a better integration with metal3/openshift | 15:35 |
dansmith | okay, well, this is definitely about committing to more, that's kinda the point... because it seems a subset of people want it | 15:35 |
dtantsur | I don't doubt it, but then the "subset of people" should contribute more | 15:35 |
iurygregory | I will try to push a patch with the new upgrade job to see how it goes also | 15:35 |
dansmith | so I understand the push back on it taking more resources, I just think this is the least change and least amount of extra work of the options we realistically have | 15:36 |
dtantsur | well.. we're struggling the way we are already | 15:36 |
dansmith | iurygregory: fwiw, writing that job helped me uncover an issue in nova, which also affected our (redhat) FFU, which I was able to get fixed upstream before it bit us later | 15:36 |
dtantsur | and what this suggests STILL doesn't help $SOME_COMPANY that is basing its product on Train | 15:36 |
dansmith | so from our (redhat) perspective it has already paid off, even without the skip-level aspect | 15:37 |
iurygregory | dansmith, nice! =) | 15:37 |
dtantsur | I wish Red Hat provided a 3rd party CI for projects it cares about | 15:38 |
dansmith | I'm quite un-proud nova was the only thing I found, and it was pretty trivial, but still :D | 15:38 |
dtantsur | I remember the fate of the multinode grenade on Ironic... | 15:39 |
TheJulia | oh.. that one | 15:39 |
dtantsur | so yeah, we did rolling upgrades, added a job.. then it kept breaking, we made it non-voting... now it's experimental | 15:39 |
dtantsur | I'm worried that the new job will suffer the same fate | 15:39 |
TheJulia | that one largely had problems and still does because of networking | 15:39 |
dtantsur | (and I'll *definitely* mark it non-voting if it starts causing troubles) | 15:39 |
dtantsur | true that. but the regular grenade has also been problematic | 15:39 |
TheJulia | one or two of the cloud providers has weird MTU restrictions which completely blows up our packets across the vxlan tunnel which is setup | 15:40 |
TheJulia | and given network booting, it is super sensitive to that | 15:40 |
dtantsur | and I'm very unimpressed about the perspective to expand the deprecation period | 15:40 |
TheJulia | and *boom* | 15:40 |
TheJulia | we actually had it super stable before that cloud went into place | 15:40 |
dtantsur | MTU is always a problem. I have a question about it that I ask in job interviews nowadays :) | 15:40 |
TheJulia | hehehe | 15:40 |
dtantsur | because I've recently had the DNS haiku, but with MTU | 15:41 |
TheJulia | Yesterday was spanning tree | 15:41 |
TheJulia | for the 32767th time | 15:41 |
iurygregory | now if it's not DNS we can say it's MTU? | 15:41 |
TheJulia | we deal with the physical, it can be spanning tree! | 15:41 |
iurygregory | I saw your tweet yesterday about it | 15:42 |
* TheJulia needs to wake up before the next meeting | 15:42 | |
TheJulia | ... in 20 minutes | 15:42 |
dtantsur | iurygregory: if it's a weird traceback on the server side, it can be MTU with a very high probability | 15:42 |
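[Editor's note: the quick path-MTU check implied by this exchange can be sketched with ping's don't-fragment flag. `example.com` is a placeholder host; the `-M do` syntax is Linux iputils.]

```shell
# On a 1500-byte MTU link, the largest unfragmented ICMP payload is
# 1500 - 20 (IPv4 header) - 8 (ICMP header) = 1472 bytes. If this ping
# fails while smaller payloads succeed, the path MTU is below 1500.
payload=$((1500 - 20 - 8))
echo "payload: $payload"
# ping -M do -s "$payload" -c 3 example.com
```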
TheJulia | or someone stepped on the thinnet cable | 15:52 |
TheJulia | the packets became attenuated and decided to meet the floor | 15:53 |
dtantsur | heh | 15:56 |
* dtantsur screams in yaml at kubernetes | 15:56 | |
dtantsur | you've probably heard me calling tripleo overcomplicated? forget it, I just didn't know kubernetes back then. | 15:56 |
iurygregory | wow | 15:57 |
TheJulia | For the longest time when I was growing up, we ran a serial cable and ran PPP from one end of the house to the other end... and then finally got some 10base2 gear and replaced the serial cable. We ended up having to buy cable covers to lay on the floor... and we *eventually* went to 10baset and 100baset because if you disturbed the thinnet cable, bad things would happen. | 15:58 |
TheJulia | "Browser load stopped because corgi is sitting on the cable" | 15:58 |
dtantsur | lovely :D | 15:58 |
TheJulia | actually, that was a collie then | 15:58 |
dtantsur | corgi-based firewall | 15:58 |
dtantsur | colliewall \o/ | 15:58 |
*** gmann is now known as gmann_afk | 16:13 | |
*** gmann_afk is now known as gmann | 16:29 | |
TheJulia | iurygregory: are you going to request another networking-baremetal release? | 17:08 |
arne_wiebalck | dtantsur: your German is improving rapidly I see :-D | 17:26 |
dtantsur | :D | 17:26 |
opendevreview | Verification of a change to openstack/networking-baremetal master failed: Add Python3 zed unit tests https://review.opendev.org/c/openstack/networking-baremetal/+/832575 | 17:28 |
rpittau | good night! o/ | 17:30 |
* TheJulia digs into logs to see why that failed | 17:31 | |
opendevreview | Merged openstack/ironic stable/queens: Remove legacy experimental jobs https://review.opendev.org/c/openstack/ironic/+/827713 | 17:53 |
opendevreview | Merged openstack/sushy stable/ussuri: Protect Connector against empty auth object https://review.opendev.org/c/openstack/sushy/+/832826 | 17:58 |
opendevreview | Merged openstack/sushy stable/train: Protect Connector against empty auth object https://review.opendev.org/c/openstack/sushy/+/832827 | 17:58 |
opendevreview | Merged openstack/sushy stable/xena: Fix session authentication issues https://review.opendev.org/c/openstack/sushy/+/832860 | 17:58 |
opendevreview | Merged openstack/sushy stable/yoga: Fix session authentication issues https://review.opendev.org/c/openstack/sushy/+/832859 | 17:58 |
opendevreview | Merged openstack/sushy stable/victoria: Fix session authentication issues https://review.opendev.org/c/openstack/sushy/+/832864 | 17:58 |
opendevreview | Merged openstack/sushy stable/wallaby: Fix session authentication issues https://review.opendev.org/c/openstack/sushy/+/832862 | 17:58 |
opendevreview | Merged openstack/sushy stable/train: Fix session authentication issues https://review.opendev.org/c/openstack/sushy/+/832867 | 17:59 |
opendevreview | Merged openstack/sushy stable/ussuri: Fix session authentication issues https://review.opendev.org/c/openstack/sushy/+/832866 | 17:59 |
TheJulia | \o/ | 18:01 |
arne_wiebalck | bye everyone o/ | 18:20 |
TheJulia | goodnight | 18:21 |
dtantsur | I'll go as well, see you | 18:26 |
TheJulia | goodnight as well | 18:31 |
opendevreview | Harald Jensås proposed openstack/networking-baremetal master: WIP - OpenConfig YANG and Netconf https://review.opendev.org/c/openstack/networking-baremetal/+/832268 | 19:31 |
iurygregory | TheJulia, the patches merged in stable/yoga already? too many emails today, sorry | 20:27 |
TheJulia | iurygregory: I think so, but there was a failure I ... think | 20:48 |
TheJulia | unit test failed | 20:49 |
iurygregory | woot | 20:51 |
iurygregory | let me check things to see | 20:52 |
TheJulia | yeah, so it was just a timeout | 20:52 |
TheJulia | I didn't see a cause | 20:52 |
TheJulia | rechecking it | 20:52 |
*** pmannidi|Away is now known as pmannidi | 21:38 | |
*** pmannidi is now known as pmannidi|Away | 21:43 |
Generated by irclog2html.py 2.17.3 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!