rpittau | good morning ironic! o/ | 08:10 |
---|---|---|
Continuity_ | Morning o/ | 09:13 |
*** Continuity_ is now known as Continuity | 09:13 | |
opendevreview | Riccardo Pittau proposed openstack/ironic master: Remove python 3.6 mock hack https://review.opendev.org/c/openstack/ironic/+/887023 | 09:30 |
iurygregory | good morning Ironic | 11:18 |
opendevreview | Iury Gregory Melo Ferreira proposed openstack/ironic master: RedfishFirmware Interface https://review.opendev.org/c/openstack/ironic/+/885425 | 11:20 |
Kirill_ | Hi, maybe someone can help: I'm working with the neutron client method show_port, but I have >10 ports. Is there any possibility to get info for all these ports in one request instead of calling show_port(port) several times? Thanks | 12:24 |
TheJulia | Kirill_: more than 10 ports?! | 13:05 |
Kirill_ | yep, I use list_ports to get trunk ports, then from "trunk_details" I get all the subport ids and need to get info for each subport. In that case I call show_port | 13:08 |
Kirill_ | right now I got an answer from neutron - that I still have to call show_port for each id( | 13:09 |
TheJulia | I thought it was 100 ports, but yeah, best to do specific port lookups if you know the data you need, just don't understand why you need to walk the ports | 13:13 |
Kirill_ | in nearest future it will be > 100 ports. | 13:19 |
TheJulia | I'm still waking up, but I don't understand why | 13:20 |
Kirill_ | I want to return to the user a list of BMs with trunk+subports | 13:20 |
opendevreview | Julia Kreger proposed openstack/ironic master: Fix db migration tests for sqlalchemy 2.0 https://review.opendev.org/c/openstack/ironic/+/887432 | 13:27 |
opendevreview | Julia Kreger proposed openstack/ironic master: Add job to test with SQLAlchemy master (2.x) https://review.opendev.org/c/openstack/ironic/+/886020 | 13:27 |
TheJulia | perhaps start with asking ironic for each node's vifs? | 13:28 |
TheJulia | there is no special flag afaik in neutron | 13:28 |
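The pattern Kirill_ describes could be sketched roughly like this. This is a hedged illustration, not production code: `get_subport_details` is a hypothetical helper, and `client` stands in for a python-neutronclient `Client` exposing `list_ports()`/`show_port()`; whether the deployed neutron accepts a multi-id server-side filter varies, which is why the per-port loop is what remains:

```python
def get_subport_details(client, trunk_ports):
    """Collect details for every subport behind a list of trunk ports.

    client is assumed to expose the python-neutronclient methods
    show_port()/list_ports(); trunk_ports is the result of list_ports()
    filtered down to ports that carry a trunk_details entry.
    """
    subport_ids = []
    for port in trunk_ports:
        details = port.get('trunk_details') or {}
        for sub in details.get('sub_ports', []):
            subport_ids.append(sub['port_id'])

    # One GET per subport -- the loop being discussed. If the deployed
    # neutron supported filtering by a list of ids in one request, a
    # single server-side query would be cheaper than N round trips.
    return [client.show_port(pid)['port'] for pid in subport_ids]
```

With >100 subports this loop is >100 API round trips, which is the cost being debated above.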
TheJulia | I guess test_walk_versions is still deadlocking sometimes | 13:54 |
TheJulia | I guess more reason to push it into its own job to allow us to continue working and then hopefully figure out exactly what is going on | 13:54 |
opendevreview | Julia Kreger proposed openstack/ironic master: DNM: Add more debugging to tie test class, test, state https://review.opendev.org/c/openstack/ironic/+/887168 | 14:08 |
iurygregory | ++ | 14:10 |
TheJulia | That might help, because we should see some debugging at where it halts | 14:13 |
TheJulia | yeah, we should merge the split off | 14:28 |
TheJulia | I looked at some of my other changes and some of them are still impacted in there | 14:29 |
JayF | fyi might be a couple minutes late getting the meeting started but I am here | 14:57 |
JayF | got back in time, good stuff | 15:00 |
JayF | #startmeeting ironic | 15:00 |
opendevmeet | Meeting started Mon Jul 3 15:00:22 2023 UTC and is due to finish in 60 minutes. The chair is JayF. Information about MeetBot at http://wiki.debian.org/MeetBot. | 15:00 |
opendevmeet | Useful Commands: #action #agreed #help #info #idea #link #topic #startvote. | 15:00 |
opendevmeet | The meeting name has been set to 'ironic' | 15:00 |
dtantsur | o/ | 15:00 |
JayF | I would anticipate an ill-attended, short meeting as tomorrow is a US federal holiday and today is a popular day to take off :D | 15:00 |
TheJulia | o/ | 15:00 |
TheJulia | I was going to take today off | 15:00 |
JayF | #topic Announcements/Reminder | 15:00 |
JayF | #note Standing reminder: review patches tagged #ironic-week-prio, and tag your patches for priority review | 15:01 |
JayF | #topic Review previous action items | 15:01 |
iurygregory | o/ | 15:01 |
dtantsur | I may add the next inspection patch to ironic-week-prio once I test it | 15:01 |
JayF | Just a reminder that rpittau has an action to moderate our meeting the next two weeks. Next week (starting a week from today) I will be out of office and out of country for a week, so don't expect me around :D | 15:01 |
JayF | #topic Review Ironic CI Status | 15:02 |
rpittau | o/ | 15:02 |
JayF | Where are we? | 15:02 |
rpittau | that's very philosophical | 15:02 |
dtantsur | Well, my patch has seen the first green for a long time. So not too bad at least? | 15:02 |
rpittau | JayF: I think we fixed/worked around most of the issues | 15:02 |
JayF | Applying extremely cautious optimism lol | 15:03 |
JayF | Are there any outstanding CI related patches we need to review or land? | 15:03 |
rpittau | mmm not on ironic AFAICS | 15:03 |
TheJulia | There is one for sqlalchemy 2.0, to fix migrations | 15:03 |
rpittau | ah yeah, that one | 15:03 |
TheJulia | and there is the mysql split out patch | 15:04 |
iurygregory | yeah | 15:04 |
TheJulia | I'd split it and possibly merge additional troubleshooting | 15:04 |
JayF | Both of those sound good to me, you wanna link them or just we'll find and land them afterwards in any event | 15:04 |
iurygregory | some of them we need in stable / bugfix branches | 15:04 |
rpittau | I rechecked the sqla one | 15:04 |
TheJulia | ++ | 15:05 |
JayF | ++ lets backport whatever is needed to stable branches; but I don't feel like that should be a rush until/unless we have patches to backport there tbh | 15:05 |
JayF | I trust us all to sanely prioritize | 15:05 |
JayF | Aight, going to move on | 15:05 |
JayF | #topic 2023.2 Workstream | 15:05 |
JayF | #link https://etherpad.opendev.org/p/IronicWorkstreams2023.2 | 15:05 |
JayF | I will note that many things seem to be pending review; I'll be taking time to review today | 15:05 |
JayF | Thanks for updating that dtantsur | 15:06 |
dtantsur | btw, have we had a chance to say welcome (back) to masghar? | 15:07 |
dtantsur | Mahnoor is helping me with the inspector merger work and will take over more tasks as we go | 15:07 |
iurygregory | I don't think we did | 15:07 |
JayF | masghar: welcome (back?) I'm not sure we've ever met but any friend of ironic+dtantsur gets adopted by me :D | 15:08 |
iurygregory | welcome masghar =) | 15:08 |
dtantsur | Mahnoor participated in outreachy, I think TheJulia was her mentor | 15:08 |
JayF | oh, wonderful! | 15:08 |
JayF | heck yeah | 15:08 |
JayF | There is another former member of our community coming back, for at least a small stint | 15:08 |
JayF | but I'll let them make the announcement to that larger group when time comes | 15:08 |
JayF | Moving on | 15:09 |
JayF | #topic Open Discussion | 15:09 |
JayF | I had a note here on PTL availability, I will not be here next week as noted before and am planning to miss the next two-ish meetings due to travel. | 15:09 |
JayF | Anything else for open discussion? | 15:09 |
iurygregory | I probably have something (sorry didn't add to the agenda) | 15:10 |
JayF | it's open discussion :D | 15:10 |
iurygregory | :D | 15:10 |
TheJulia | \o/ | 15:10 |
iurygregory | ok, some of you probably remember a problem we had related to multipath, and we added a lot of logic in IPA to be able to handle things | 15:10 |
opendevreview | Merged openstack/ironic master: Use jammy for base jobs https://review.opendev.org/c/openstack/ironic/+/869052 | 15:11 |
rpittau | \o/ | 15:11 |
* dtantsur hears multipath and runs away screaming | 15:11 | |
rpittau | I actually tried hard to forget about that mpath stuff | 15:11 |
dtantsur | no amount of alcohol can wash this out of memory | 15:12 |
iurygregory | we have an interesting bug downstream, where inspection is timing out (takes more than 30min), because the machine has a loooooot of disks and we check all of them I think | 15:12 |
rpittau | but I guess we'll hear more about it :/ | 15:12 |
iurygregory | +80 disks if I recall | 15:12 |
dtantsur | iurygregory: define "check" please. or is it unclear yet? | 15:12 |
* iurygregory looks for the tab with the information | 15:13 | |
zorun | is the issue about multipath in Linux + some NVMe disks? | 15:13 |
zorun | (hi there) | 15:13 |
TheJulia | who said mpath?!/ | 15:13 |
* TheJulia hides | 15:13 | |
* TheJulia builds a bunker | 15:13 | |
JayF | We should probably ensure the behavior is documented in a launchpad bug | 15:14 |
JayF | then go from there? | 15:14 |
JayF | sounds like one in a long line of "ridiculously large hardware causes edge case" bugs we've been squashing for a decade :D | 15:14 |
TheJulia | iurygregory: you're really going to need to be specific on what is being encountered | 15:14 |
TheJulia | because what JayF said :) | 15:15 |
JayF | Is there any further specifics on this or something else for open discussion? | 15:16 |
iurygregory | it's not an error, it's a timeout issue: IPA doesn't report all the info within 30 min, because they have a lot of disks and we do all the checks from _get_multipath_parent_device etc | 15:16 |
iurygregory | so it takes a lot of time and fails | 15:16 |
JayF | yeah, in that case I'd probably adjust timeouts to reflect the reality of that environment | 15:16 |
JayF | but we should likely have a way to turn off some of that if it's taking forever, too | 15:16 |
iurygregory | so I'm wondering if we have some ideas on how to avoid this taking a lot of time | 15:17 |
JayF | I suspect it's probably reproducible in a unit test; most of what takes a long time is probably in the python parsing, yeah? | 15:17 |
iurygregory | instead of just increasing timeout | 15:17 |
iurygregory | maybe the logic we added to not clean some devices can be used? like "I don't want IPA to check things on /dev/sda, /dev/sdb...." | 15:18 |
JayF | I suspect there may be a more straightforward fix | 15:18 |
JayF | but until there's a bug with details we can look at alongside the code we're just guessing :D | 15:19 |
* TheJulia is on a call so trying to digest | 15:19 | |
iurygregory | (currently I don't think we are using the feature to skip downstream...) | 15:19 |
rpittau | you really need to know your disks in advance very well, and that is not always trivial | 15:19 |
iurygregory | things are not trivial when they have like +80 disks :D | 15:19 |
JayF | lets continue this talk outside of the logged meeting? | 15:19 |
JayF | #endmeeting | 15:19 |
opendevmeet | Meeting ended Mon Jul 3 15:19:53 2023 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) | 15:19 |
opendevmeet | Minutes: https://meetings.opendev.org/meetings/ironic/2023/ironic.2023-07-03-15.00.html | 15:19 |
opendevmeet | Minutes (text): https://meetings.opendev.org/meetings/ironic/2023/ironic.2023-07-03-15.00.txt | 15:19 |
opendevmeet | Log: https://meetings.opendev.org/meetings/ironic/2023/ironic.2023-07-03-15.00.log.html | 15:19 |
iurygregory | yeah np | 15:20 |
rpittau | you should not have a direct correspondence between disks and mpath devices though | 15:21 |
TheJulia | well, really you do. | 15:21 |
TheJulia | /dev/mpath0 may be backed by /dev/sdx /dev/sda and /dev/sdn | 15:21 |
dtantsur | I'd still be curious to learn which exactly process takes so much time on each disk. | 15:22 |
TheJulia | +++++ | 15:22 |
TheJulia | they should all be super fast checks | 15:22 |
dtantsur | that's nearly half a minute per disk | 15:23 |
dtantsur | if we figure it out, we can try running $thing in parallel | 15:23 |
JayF | ++ | 15:23 |
iurygregory | dtantsur, derek found things and we have some info in the internal slack I think... | 15:23 |
JayF | I may be less available than usual today on IRC; going to be migrating my IRC bastion server at some point today | 15:24 |
opendevreview | Verification of a change to openstack/ironic-python-agent-builder master failed: Extend the DIB_CHECKSUM variable usage https://review.opendev.org/c/openstack/ironic-python-agent-builder/+/881299 | 15:24 |
dtantsur | well, we'll need to bring this info upstream anyway | 15:24 |
JayF | Upstream cannot participate in conversations without some redacted public info :D | 15:24 |
iurygregory | dtantsur, yeah | 15:24 |
iurygregory | I will try to extract the info from the thread | 15:24 |
rpittau | good night! o/ | 15:57 |
TheJulia | iurygregory: some solid way to identify to exclude, I guess... | 16:03 |
TheJulia | iurygregory: the bug you're chasing, does the customer have *any* multipath devices in that physical server? it sounds like it is just a storage node | 16:30 |
TheJulia | which makes me think it *shouldn't* | 16:30 |
TheJulia | so maybe the path is to enable a kernel command line to disable multipath checking?!? | 16:30 |
TheJulia | I'm *guessing* here, but it sounds like list_all_block_devices gets called, and because the ramdisk has multipath tools running, it still attempts to do the inverse resolution to provide an accurate list | 16:31 |
TheJulia | perhaps we upfront just run multipath -ll and cache that?! | 16:32 |
JayF | Is there any reason we should *enable* multipath support by default? | 16:32 |
JayF | if we added the flag, it seems like default off is the saner setting, given the rarity (is my experience wrong?) of that hardware config | 16:32 |
TheJulia | okay, we ask it to check the device and then pull the data, and on some hardware there will always be an entry there | 16:34 |
TheJulia | in enterprise environments, it is still sometimes a thing | 16:34 |
TheJulia | and we break pretty hard without checking | 16:35 |
TheJulia | and otherwise, they end up trying to clean the same lun 4-8 times | 16:35 |
JayF | I'm mentally modelling this to a single disk still setup in RAID mode on a controller | 16:35 |
JayF | multipath support being enabled but not really in full use so we have to be aware of it anyway (?) | 16:35 |
TheJulia | so... my laptop *used* to have multipath installed.... | 16:35 |
TheJulia | well, more like, if we don't know it, then we do bad things | 16:36 |
JayF | this is mainly showing my giant gaping blind spot for storage tech | 16:36 |
TheJulia | like... clean for a week | 16:36 |
JayF | I've literally never worked on non-local storage in a production environment ever, so I'm just asking questions to try and understand :D | 16:36 |
JayF | if it's a thing that's relatively common, it's a thing | 16:36 |
TheJulia | think of having a disk on the other side of the city, but you can take 4-8 different ways to get there | 16:37 |
TheJulia | and if you don't know it, you may just think they are 4-8 unique devices | 16:37 |
JayF | so the multipath "failure mode" is N os devices for N paths, at least in some cases | 16:37 |
TheJulia | so then you do 4-8 times the work driving to the exact same disk in the end because you can't tell them apart without checking | 16:37 |
TheJulia | yeah | 16:37 |
JayF | so we really have to check hard to ensure that's not happening because then we do N things per device | 16:37 |
TheJulia | *OR* | 16:37 |
JayF | which can be materially impacting to a drive to shred it so much (depending on config) | 16:38 |
TheJulia | "only 1 or 2 of 8 paths is available for use, all others blow up" | 16:38 |
JayF | even regardless of clock time elapsed | 16:38 |
TheJulia | "or cause the san to get VERY angry" | 16:38 |
* TheJulia has crashed SAN controllers by making them VERY angry | 16:38 | |
JayF | that makes a lot of sense | 16:38 |
JayF | we have to do the expensive annoying but rareish thing | 16:38 |
JayF | because of how spectacularly it breaks if we don't | 16:38 |
TheJulia | yup | 16:38 |
JayF | like, go set off some SANs to celebrate tomorrow levels of spectacular I'm sure | 16:39 |
JayF | 'is that a smoke and laser show?' 'nope, just made the EMC angry!' | 16:39 |
TheJulia | lol | 16:39 |
TheJulia | yup | 16:39 |
TheJulia | "Why is my sql database corrupt" "oh, the agent clobbered the wrong controller and the san tried to do the needful, but your database VM had IO requests which took 31 seconds due to direct io locking... sorry" | 16:40 |
TheJulia | I'm kind of with you, timeouts or a disable mpath for this node knob | 16:41 |
JayF | that's when the CTO is the fireworks | 16:41 |
TheJulia | indeed | 16:41 |
JayF | I honestly feel like timeouts are the real answer | 16:41 |
TheJulia | And things like DR/BC begin to get thrown about | 16:41 |
JayF | it's not unreasonable to say "you have extreme hardware, it takes longer to deal with" | 16:41 |
TheJulia | "to do it right" | 16:41 |
TheJulia | yeah | 16:41 |
JayF | are those 80 physical disks on the server? | 16:41 |
JayF | or are they 80 attached via a SAN/LUN/etc? | 16:41 |
JayF | the other thing this could be a sign of -> the need to support attached storage as more of a first-class concept, so Ironic can do intelligent things with it instead of just puppeting the nodes | 16:42 |
* TheJulia looks | 16:42 | |
JayF | I don't know the domain enough to know if that even makes sense, it just seems like an obvious question | 16:42 |
TheJulia | depends on the SAN actually | 16:43 |
JayF | this *is* composable hardware, even if it's such an old school form of it that we may not mentally model it that way | 16:43 |
TheJulia | since there is variety on if they are different luns or not | 16:43 |
TheJulia | oh of course, the customer redacted the device id info | 16:44 |
TheJulia | and the scsi id path data | 16:44 |
JayF | I'd hate for you to mount their iscsi disk from a physically different network, you rascal | 16:45 |
TheJulia | so yeah, 80 LUNs, 4 paths per LUN | 16:45 |
TheJulia | like... who does that | 16:45 |
TheJulia | so yeah, we check/scan each "device" so... basically end up checking like 320 disks which can take 30 seconds to double check | 16:47 |
JayF | that seems exceedingly reasonable | 16:47 |
JayF | 320 paths in 30 seconds would be asking it to do 10+ a second | 16:47 |
TheJulia | well, we do it one disk at a time | 16:48 |
TheJulia | because we want the latest data | 16:48 |
JayF | I would also think it's tough to assume that a parallel check would return valid data | 16:48 |
JayF | given all the "sans act weird" conversation we've had in the last 15 minutes | 16:48 |
TheJulia | yeah | 16:49 |
TheJulia | iurygregory: when you get a chance, lets talk, I've got a lunch pancake thing starting in 11 minutes | 16:49 |
iurygregory | TheJulia yeah it's a storage node (with ceph) | 16:59 |
iurygregory | I just finished my lunch XD | 17:00 |
TheJulia | we could split multipath -c out and multi-thread/block, and then do the data check | 17:01 |
TheJulia | if we limit it to like 10 concurrently, it shouldn't be horrible | 17:01 |
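The split-and-limit idea (probe devices concurrently but cap how many checks are in flight, e.g. 10) is essentially a bounded thread pool. A hedged sketch, not IPA's actual implementation: `check_devices` and `check_device` are hypothetical names, with `check_device` standing in for whatever per-device probe (e.g. a `multipath -c` call) is being run, and the caveat raised earlier still applies (parallel probes against a touchy SAN may not be safe in every environment):

```python
import concurrent.futures

def check_devices(devices, check_device, max_workers=10):
    """Probe each block device with at most max_workers checks in flight.

    check_device is a callable taking a device name and returning its
    probe result; a failing probe is recorded as the exception rather
    than aborting the whole sweep.
    """
    results = {}
    with concurrent.futures.ThreadPoolExecutor(
            max_workers=max_workers) as pool:
        futures = {pool.submit(check_device, dev): dev for dev in devices}
        for fut in concurrent.futures.as_completed(futures):
            dev = futures[fut]
            try:
                results[dev] = fut.result()
            except Exception as exc:
                # One bad path shouldn't stop inspection of the rest.
                results[dev] = exc
    return results
```

If each serial check takes tens of seconds and the probes are I/O-bound, a 10-wide pool cuts the ~320-device sweep roughly tenfold, which is the "shouldn't be horrible" estimate above.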
iurygregory | kinda makes sense to me | 17:02 |
iurygregory | first I think I will try to ask them to test increasing the timeout to see how it goes | 17:04 |
iurygregory | hopefully they will be unblocked and I won't need to rush into thinking about how to improve our checks | 17:05 |
TheJulia | Well, I think they are *very* much a not intended environment config | 17:09 |
TheJulia | Ceph storage nodes: the intent and performance design is local disks for the multithreaded IO, but use of a SAN just centralizes the SAN as the IO bottleneck | 17:10 |
iurygregory | I'm wondering if they are using a SAN to plug into the server or if it is just a server with a bunch of disks | 17:15 |
iurygregory | I can check some info tomorrow (the TAM from the bug is OOO) | 17:15 |
TheJulia | Yeah, they surely are to have this state | 17:17 |
TheJulia | And granted, SANs are highly optimized, but even then, still… | 17:17 |
iurygregory | yeahhh | 17:24 |
iurygregory | if someone has time to review https://review.opendev.org/c/openstack/ironic/+/887297 so we can have a separate job for mysql =) | 17:50 |
JayF | opening | 17:50 |
JayF | trying to frantically get python-ironicclient support for sharded done | 17:51 |
JayF | since b-2 is this week | 17:51 |
JayF | lol | 17:51 |
JayF | https://review.opendev.org/c/openstack/ironic/+/887343/3/ do we not want this anymore? | 17:57 |
iurygregory | I think it's already included in the squash that rpittau did | 18:00 |
* iurygregory double checks | 18:00 | |
iurygregory | https://review.opendev.org/c/openstack/ironic/+/887373 | 18:01 |
iurygregory | yup, it is | 18:01 |
TheJulia | Oh, I guess I should do parent_node | 18:07 |
opendevreview | Jay Faulkner proposed openstack/python-ironicclient master: Node sharding support https://review.opendev.org/c/openstack/python-ironicclient/+/887533 | 18:14 |
JayF | abandoned that patch then | 18:15 |
JayF | for the ironicclient change, I wasn't sure how much logic I should put in the client about permitted/unpermitted searches | 18:15 |
JayF | right now, you can only search for sharded=True/False (with no other attributes) | 18:16 |
JayF | we validate that in the API code; I was thinking duplicating that validation in the client was the wrong thing, given the existing code | 18:16 |
JayF | if that's wrong please say so (preferably in code review) | 18:16 |
JayF | I'm going to test this as soon as my bifrost test env is up then hopefully it's done | 18:16 |
TheJulia | oh... my... soooo hot | 18:19 |
TheJulia | yeah, I think that is fine just to keep it simple | 18:20 |
JayF | I gotta show off my new test environment setup that I did this weekend | 18:20 |
JayF | if you have time for an aside; go look at netboot.xyz | 18:20 |
JayF | someone hacked ipxe to basically turn it into a menu of internet-installable distribution/livecd/etc options | 18:21 |
JayF | so I can click a few buttons in virt manager, bridge the vm, pxe boot, and provision a machine in almost any distribution immediately | 18:21 |
JayF | which I mean, we're openstack, that shouldn't be super impressive but I finally did it for my homelab lol | 18:21 |
JayF | shoemaker's children have no shoes and all that | 18:21 |
JayF | TheJulia: fyi https://review.opendev.org/c/openstack/releases/+/887497 is the release I'm holding for any client stuff we need, it's due 7/6 aiui | 18:23 |
TheJulia | oh wonderful | 18:24 |
TheJulia | okay, so I guess I'm working today | 18:24 |
TheJulia | or.. maybe I don't stress on it | 18:24 |
JayF | I mean, I worry about it b/c the nova bits are coming for sharding | 18:25 |
JayF | for parent/child node it seems ... environment specific enough that someone getting a newer client wouldn't be awful | 18:25 |
JayF | but I trust you to make that call | 18:25 |
TheJulia | yeah, the whole kit and kaboodle for dpus/smartnics and all, I'm thinking more end of year mentally | 18:25 |
JayF | and I also suspect the change is super duper duper trivial for the python-ironicclient | 18:25 |
TheJulia | so many pieces to get to line up there and it is just going to move at its own speed | 18:25 |
JayF | and looks a lot like mine | 18:25 |
TheJulia | well, I also have to do value setting and whatnot | 18:26 |
JayF | does value setting not come for free? | 18:26 |
TheJulia | anyway, give me a few to look | 18:26 |
JayF | it looked like it came for free in the client | 18:26 |
JayF | if not my change is incomplete | 18:26 |
TheJulia | it might | 18:26 |
TheJulia | well, | 18:26 |
TheJulia | for get but on the node object | 18:26 |
* TheJulia will look in a few | 18:26 | |
JayF | TheJulia: Node.update() is basically just yolo-putting fields into a JSON patch | 18:27 |
JayF | so I'm 99.99% sure you get it free | 18:27 |
JayF | I'll find out shortly b/c my ironic just came up in my bifrost test to test this with | 18:28 |
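The "yolo-putting fields into a JSON patch" behavior amounts to turning attribute updates into RFC 6902 patch operations, one op per touched field. A rough illustration of the pattern only, not python-ironicclient's actual code: `fields_to_json_patch` is a hypothetical name, and treating `None` as a removal is an assumption made for the example:

```python
def fields_to_json_patch(updates):
    """Turn an {attribute: value} mapping into a JSON Patch document.

    Every field the caller set becomes a patch op, without any
    field-specific validation -- which is why a new attribute like
    'shard' can come along "for free". None is treated as a removal
    here; that choice is illustrative, not the client's actual rule.
    """
    patch = []
    for field, value in updates.items():
        if value is None:
            patch.append({'op': 'remove', 'path': '/%s' % field})
        else:
            patch.append({'op': 'replace',
                          'path': '/%s' % field,
                          'value': value})
    return patch
```

Because the op list is built generically from whatever fields are present, a server that understands `/shard` accepts the patch with no client-side changes beyond plumbing the flag through.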
TheJulia | something feels like something is missing | 18:33 |
TheJulia | heh | 18:33 |
TheJulia | okay | 18:33 |
TheJulia | okay, it is OSC stuff which might not be needed in your case | 18:35 |
JayF | What OSC stuff? | 18:38 |
TheJulia | oh, you need to add osc unset and set support | 18:38 |
TheJulia | for the shard value | 18:38 |
TheJulia | openstack command | 18:38 |
JayF | https://github.com/openstack/python-ironicclient/blob/master/ironicclient/osc/v1/baremetal_node.py#L1180 | 18:40 |
JayF | ack, will do | 18:40 |
JayF | I realized why this was so confusing to me before, but it's so clear now | 18:40 |
JayF | I was only in osc/ before and missed the other code | 18:40 |
JayF | this time I missed osc/ and was in the actual code | 18:40 |
JayF | whee | 18:40 |
* JayF not known for his attn to detail | 18:41 | |
opendevreview | Julia Kreger proposed openstack/python-ironicclient master: WIP: Parent_node support https://review.opendev.org/c/openstack/python-ironicclient/+/887535 | 18:45 |
TheJulia | nowhere near complete, but maybe it helps draw lines | 18:45 |
TheJulia | I'm going to go into town with the wife | 18:45 |
TheJulia | I'm not going to stress on parent_node right now, I just don't have the spoons on top of our home's A/C being dead | 18:47 |
TheJulia | (that was yesterday!) | 18:47 |
opendevreview | Jay Faulkner proposed openstack/python-ironicclient master: Node sharding support https://review.opendev.org/c/openstack/python-ironicclient/+/887533 | 19:53 |
JayF | TheJulia: ^ tested working \o/ | 19:53 |
JayF | thank you for the pointer | 19:53 |
opendevreview | Jay Faulkner proposed openstack/python-ironicclient master: Node sharding support https://review.opendev.org/c/openstack/python-ironicclient/+/887533 | 19:56 |
JayF | and now pep8 won't hate it either lol | 19:56 |
opendevreview | Verification of a change to openstack/ironic master failed: Unit tests: Isolate mysql test migrations https://review.opendev.org/c/openstack/ironic/+/887297 | 20:12 |
JayF | productive quick day so far today | 20:22 |
JayF | I always feel like I get 10x as much done after getting back from being sick | 20:22 |
opendevreview | Verification of a change to openstack/ironic master failed: Unit tests: Isolate mysql test migrations https://review.opendev.org/c/openstack/ironic/+/887297 | 20:36 |
JayF | stevebaker[m]: hang tight and I'll have that revised, tyvm -- this one has to be in before 7/6 so trying to make sure it's on a tight cycle | 21:05 |
stevebaker[m] | OK I'll rereview as soon as its revised | 21:06 |
opendevreview | Jay Faulkner proposed openstack/python-ironicclient master: Node sharding support https://review.opendev.org/c/openstack/python-ironicclient/+/887533 | 21:22 |
JayF | In general, we should look at libraries | 21:22 |
JayF | they get cut b-2 | 21:22 |
JayF | which is 7/6/2023 (3 days) | 21:22 |
stevebaker[m] | +2! | 21:43 |
JayF | \o/ thanks | 21:43 |
Generated by irclog2html.py 2.17.3 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!