Thursday, 2025-11-20

01:52 <cardoe> I wandered off. I'll grab logs tomorrow though
02:32 <TheJulia> okay, cool. I have another idea which may be worthwhile to consider, but logs first
08:01 <rpittau> good morning ironic! o/
10:16 *** sfinucan is now known as stephenfin
10:57 <rpittau> CI is fubar! \o/
10:57 <rpittau> new oslo.process version does not support no_fork in ServiceLauncher?
11:05 <rpittau> oh it never supported it oO
11:06 <rpittau> I think we should just switch to process launcher
11:06 *** BertrandLanson[m] is now known as blanson[m]
11:06 <rpittau> btw I'm talking about this http://a31ada860fc20a35932c-5da8dd525c228407ee4661a46790293d.ssl.cf5.rackcdn.com/openstack/999131c0941e4f1cae35ed71f9ab8b22/logs/ironic.log
11:08 <rpittau> s/oslo.process/oslo.service
11:09 <rpittau> ooook ServiceLauncher silently ignored unknown kwargs :/
11:09 <rpittau> until now
11:09 <rpittau> great!
11:14 <rpittau> writing a fix
11:18 <opendevreview> Riccardo Pittau proposed openstack/ironic master: Fix singleprocess launcher compatibility with oslo.service 4.4+  https://review.opendev.org/c/openstack/ironic/+/967821
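For context, the failure mode rpittau describes looks roughly like this (a minimal sketch; the call shapes are assumed, `CONF` and `my_service` are stand-ins, and the real fix is in the review above):

```python
from oslo_service import service

# Pre-4.4 oslo.service silently swallowed unknown kwargs, so a call site
# like this "worked" even though no_fork was never a supported option:
launcher = service.ServiceLauncher(CONF, no_fork=True)  # hypothetical call

# On oslo.service 4.4+ the same call fails at startup, roughly:
#   TypeError: __init__() got an unexpected keyword argument 'no_fork'
# The fix is simply to stop passing the unsupported option:
launcher = service.ServiceLauncher(CONF)
launcher.launch_service(my_service)
launcher.wait()
```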
12:37 *** mdfr8 is now known as mdfr
13:21 <rpittau> if any core is around, this is passing CI ^
13:24 <opendevreview> Riccardo Pittau proposed openstack/bifrost master: [WIP] Remove tinyipa support and switch to debian IPA  https://review.opendev.org/c/openstack/bifrost/+/964404
13:35 <opendevreview> nidhi proposed openstack/ironic master: Add Redfish LLDP data collection support  https://review.opendev.org/c/openstack/ironic/+/967841
13:38 <opendevreview> nidhi proposed openstack/ironic master: Add Redfish LLDP data collection support  https://review.opendev.org/c/openstack/ironic/+/967841
13:43 <opendevreview> nidhi proposed openstack/ironic master: Add Redfish LLDP data collection support to the Redfish inspection interface.  https://review.opendev.org/c/openstack/ironic/+/967841
14:24 <opendevreview> nidhi proposed openstack/ironic master: Add PCIe function fields to redfish inspection  https://review.opendev.org/c/openstack/ironic/+/963179
14:27 <opendevreview> Dmitry Tantsur proposed openstack/bifrost master: WIP add an OCI artifact registry  https://review.opendev.org/c/openstack/bifrost/+/961388
14:28 <opendevreview> nidhi proposed openstack/ironic master: Add PCIe function fields to redfish inspection  https://review.opendev.org/c/openstack/ironic/+/963179
14:29 <dtantsur> rpittau: I'd prefer to wait for TheJulia to check that changing ServiceLauncher to ProcessLauncher is fine
14:31 <TheJulia> oh, hmmmmm
14:32 <TheJulia> you can't process launch the vnc code
14:32 <TheJulia> it goes kaboom internally and won't work
14:33 <TheJulia> The only real option is to remove the no_fork option, I guess
14:33 <TheJulia> Then again, I could likely stage it up here in a little bit and give it a spin to see if the vnc stuff works, or not
14:34 <rpittau> TheJulia: ack
14:34 <rpittau> removing the no_fork option should work, it was ignored so far
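For readers following along, the distinction being debated, sketched against oslo.service's usual launch() helper (the worker count selects the launcher; `CONF` and `svc` are stand-ins):

```python
from oslo_service import service

# workers=1 -> ServiceLauncher: everything runs in the current process,
# so in-process components (e.g. the VNC proxy threads) stay reachable.
launcher = service.launch(CONF, svc, workers=1)

# workers>1 -> ProcessLauncher: the parent forks one child per worker;
# anything that must live alongside the parent's threads "goes kaboom".
launcher = service.launch(CONF, svc, workers=4)
launcher.wait()
```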
14:41 * TheJulia tears down and prepares to restack
14:48 <opendevreview> nidhi proposed openstack/ironic master: Add PCIe function fields to redfish inspection  https://review.opendev.org/c/openstack/ironic/+/963179
14:55 <TheJulia> Okay, should be pulling everything in fresh
15:08 <rpittau> TheJulia: btw the CI is passing with that in bifrost https://review.opendev.org/c/openstack/bifrost/+/964404
15:08 <rpittau> but I agree we should probably just remove the no_fork option
15:08 <rpittau> just let me know if you want me to update the patch
15:10 <dtantsur> I don't think that Bifrost is testing the VNC proxy
15:10 <rpittau> yeah, but at least ironic starts now :D
15:15 <TheJulia> rpittau: if you wouldn't mind just removing the no_fork option, that would be good. If I can get devstack to behave I can at least spin it up and test whether the proxy service operates in that case
15:15 <rpittau> TheJulia: sure, no problem! updating the patch now
15:20 <TheJulia> Looks like I'm finally re-stacking now
15:21 <opendevreview> Riccardo Pittau proposed openstack/ironic master: Fix singleprocess launcher compatibility with oslo.service 4.4+  https://review.opendev.org/c/openstack/ironic/+/967821
15:48 <clif> cardoe, TheJulia: do y'all have any perspective on this if statement in NeutronVIFPortIDMixin.vif_attach? It says neutron cannot have a host/instance connected to more than one physical_network at a time and enforces that requirement by raising an exception:
15:49 <clif> https://opendev.org/openstack/ironic/src/commit/e75c8a4483b437eb98f5cb8089c8809bedb526bf/ironic/drivers/modules/network/common.py#L623
15:49 <clif> does this still hold true in neutron? It would be a large hurdle for the intended trait-based networking operation otherwise
15:54 <TheJulia> So, I think it might be for individual vif creation and mapping, but not across all vifs, because you could have a hypervisor (or baremetal node for that matter) which bridges physical networks
15:59 <clif> reading the logic more carefully, it seems to check whether the vif being considered for attachment has more than one physical_network in common with the node's existing physical_networks
16:00 <clif> I guess I should probably take this into consideration when planning the network operations and emit an error or warning at plan time if possible
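A rough paraphrase of the check in question (reconstructed from the linked common.py; names like get_physnets_by_port_uuid and get_physnets_for_node appear in the code, but this is not verbatim):

```python
# In vif_attach: the physnets the VIF's network spans, per Neutron...
physnets = neutron.get_physnets_by_port_uuid(client, vif_id)
if len(physnets) > 1:
    # ...intersected with the physnets the node's ports are wired to.
    node_physnets = network.get_physnets_for_node(task)
    if len(node_physnets.intersection(physnets)) > 1:
        # NOTE(mgoddard): Neutron cannot currently handle hosts mapped
        # to multiple segments of the same routed network.
        raise exception.VifInvalidForAttach(...)
```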
16:01 <TheJulia> okay, no_fork doesn't work
16:01 <clif> it would stink to get halfway through the actions and then blow up because what was planned was not possible from the outset
16:02 <JayF> iurygregory: Do you know how far the idrac 10 fixes got backported and/or perhaps have a handy-dandy list of patches that need backporting (if not upstream then downstream :D)
16:02 <JayF> iurygregory: my folks are gonna have DRAC10 in a lab soon and will be trying to do vmedia on caracal, which is not gonna be a good time, so I'm trying to help smooth it over :)
16:02 <clif> but that NOTE by mgoddard makes the physical_network restriction more dire than what is directly implied by the code itself
16:03 <TheJulia> rpittau: your first change was good, the second revision with no_fork was bad.
16:04 <TheJulia> I've not tried to fire up the proxy, but the code does internally execute past where it would have failed, so I feel pretty good about the first revision you had.
16:04 <JayF> clif: given that comment came in with the original physical_network mapping implementation, my hunch is that it's more about "if more than one match, I don't know how to configure it" than anything else
16:04 <JayF> clif: which I think would not apply if TBN is doing scheduling instead of physnet matching
16:04 <TheJulia> JayF: ++
16:04 <clif> "Neutron cannot currently handle hosts ..." is concerning
16:04 <rpittau> TheJulia: I'll revert the revert!
16:05 <JayF> Neutron will never know :D
16:05 <clif> lol ok
16:05 <JayF> Don't tell them shhhhh
16:05 <JayF> ;)
16:05 <JayF> lol
16:05 <clif> fair
16:05 <TheJulia> so internally, we *do* this backfill of physical network data into neutron and I'm pretty sure it is not blowing up, but then again maybe nobody has actually tried different physical networks
16:05 <JayF> I think of it like a real restriction; you can't have a given interface on multiple networks
16:05 <TheJulia> hey hjensas!
16:05 <TheJulia> this discussion might interest you!
16:06 <JayF> and nothing in TBN should be trying to put a single port onto >1 network
16:06 <clif> yea that tracks
16:06 <clif> I will proceed as if everything is fine and then worry if something blows up
16:06 <JayF> I wonder if it goes the other way too
16:07 <JayF> where if I have physical_network=foo on multiple vifs/portgroups
16:07 <JayF> because in the real world that presents routing pain
16:07 <TheJulia> ports in neutron can end up on any physical network as long as the base network supports it; the physical network helps guide it to the supported network if applicable
16:07 <JayF> I don't think we should guard against it, per se, but there are several ways someone could misconfigure themselves in TBN
16:07 <JayF> and to some effect, that's on them :)
16:07 <TheJulia> in cases where overlays exist (like geneve, vxlan), there is no concept of a physical network
16:07 <TheJulia> (which... STINKS)
16:08 <TheJulia> ((but, I get why it's modeled that way))
16:09 <TheJulia> physical networks are more a provider network concept, fwiw
16:09 <JayF> (((ergh, ok)))
16:09 <JayF> (I joked to clif yesterday that "ergh, ok" was the official frustrated exclamation of OpenStack)
16:09 <clif> because I made it :)
16:09 <TheJulia> ((((We're likely some of the few people to be crazy enough to want to bridge overlays to physical networks....))))
16:10 <JayF> we're only [checks survey] 25% of all nova users
16:10 <JayF> so not many /s
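To make the provider-vs-overlay distinction concrete, a small openstacksdk sketch (cloud name and values are made up):

```python
import openstack

conn = openstack.connect(cloud='example')  # assumed clouds.yaml entry

# Provider network: pinned to a physical_network (a provider concept).
conn.network.create_network(
    name='prov-net',
    provider_network_type='vlan',
    provider_physical_network='physnet1',
    provider_segmentation_id=100)

# Overlay network: no physical_network at all; hypervisors tunnel to
# each other and, as TheJulia puts it, magic happens.
conn.network.create_network(
    name='overlay-net',
    provider_network_type='vxlan')
```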
16:10 <opendevreview> Riccardo Pittau proposed openstack/ironic master: Fix singleprocess launcher compatibility with oslo.service 4.4+  https://review.opendev.org/c/openstack/ironic/+/967821
16:16 <opendevreview> Riccardo Pittau proposed openstack/ironic master: Fix singleprocess launcher compatibility with oslo.service 4.4+  https://review.opendev.org/c/openstack/ironic/+/967821
16:30 <TheJulia> Although, it doesn't seem super happy about control-c :\
16:31 <TheJulia> The odds of vnc in that single-process case are a bit slim; it looks like the interrupt calls get overridden because of the way it's launched, and it never records to the main process
16:34 <rpittau> what's weird is that it was working before, when it was actually mapped to ProcessLauncher, so it should just work?
16:34 <cardoe> clif: but a machine shouldn't be on more than 1 physical_network at a time?
16:36 <clif> I probably don't understand the semantics of physical_network. Why not? Can't a machine have multiple physical ports that are connected to different switches/networks?
16:37 <TheJulia> rpittau: yeah, processlauncher should have launched its own subprocess; now it's an all-in-one binary with threads running concurrently
16:38 <TheJulia> tl;dr single-process + vnc is not an expected case by default and only works locally because I have it forced on.
16:38 <rpittau> ah well
16:38 <TheJulia> clif: as it relates to overlays or physical networks?
16:41 <clif> either one?
16:43 <TheJulia> overlays, because the physical network concept doesn't exist in them
16:43 <TheJulia> each hypervisor tunnels to each other
16:43 <TheJulia> and magic happens
16:44 <TheJulia> With physical networks, those are provider networks, and a network, once created and mapped to a physical network, can only be distributed via a singular physical network, even if there is overlap between the physical networks.
16:44 <TheJulia> it's an address mapping/logical mapping constraint in neutron.
16:45 <TheJulia> When we bind, we know the lower-level details as configured; the intermediate-level details, beyond the vlan provider network being attached to a network fabric, are all sort of handwavey in neutron because it relies upon site/operator-specific configuration.
16:52 <TheJulia> rpittau: oh, you know what might not work: systemd might not detect it as running. In eventlet and entirely single-threaded process models, it needs that call, which they had no plans on implementing, but you're leaning the service hard into a pure single process with no manager of sorts
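"That call" is presumably the systemd readiness notification; a minimal sketch of what oslo.service ships for Type=notify units:

```python
from oslo_service import systemd

# With a Type=notify unit, systemd treats the service as "activating"
# until READY=1 arrives on $NOTIFY_SOCKET; without this call the unit
# eventually times out even though the process is healthy.
systemd.notify_once()
```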
17:53 <cardoe> clif: so what you linked before is routed networks or L3VNIs, and that statement is absolutely correct. A host cannot be a part of multiple physical_networks at once in that case.
17:53 <cardoe> clif: https://cardoe.com/neutron/evpn-vxlan-network/admin/data-center-networks.html#physical-layout here's a poor man's picture
17:53 <JayF> HOWEVER those are not the only ways to get networks to Ironic today, right?
17:54 <cardoe> The code in the case of what you're talking about would be a server connected to both leaf switches in that picture at the same time
17:54 <cardoe> And that cannot happen
17:54 <cardoe> In the wise words of Trey Parker and Matt Stone, that would be french frying when you should have pizzaed.
17:56 <JayF> cardoe: the part that's not clear to me is if that is /a/ possible network architecture vs /the only/ possible network architecture
17:57 <cardoe> And I actually take back my statement about L3VNIs; I'll amend it to use Neutron's terminology... L2 segmented networks and L3VNIs
17:57 <cardoe> JayF: There are 3 possible network architectures.
17:57 <JayF> cardoe: a little worried we're going to get tunnel-visioned on a use case when the scheduling bits should (maybe?) be more flexible
17:57 <cardoe> Hard stop
17:58 <cardoe> Unless we wanna talk custom vendor things.
18:00 <JayF> My question is more whether that spot in ironic is the place for us to say "don't do that"
18:00 <JayF> that's what feels weird to me
18:01 <cardoe> So from a physical port binding perspective I think Ironic should just care about EVPN type 2. Hard stop.
18:01 <JayF> You're not exactly answering my question though; at vif attach time, when TBN code activates, is that the right place for that check to be?
18:01 <cardoe> But Neutron won't allow that, so we'll have to take into account EVPN type 5.
18:02 <cardoe> Yes, that's the right place for that check.
18:02 <cardoe> That check is 100% wrong.
18:02 <JayF> [blink]
18:02 <cardoe> https://bugs.launchpad.net/networking-generic-switch/+bug/2114451
18:03 <JayF> I feel like 90% of my questions get answered with a link to or description of that bug
18:03 <cardoe> I wish it wasn't the case.
18:03 <JayF> and for purposes of this question I'm trying to think about what *Ironic* should check
18:03 <JayF> and that all centers around neutron logic and modeling
18:03 <JayF> (right?)
18:04 <cardoe> Yeah. Help me get them to fix stuff.
18:04 <cardoe> So, easiest description...
18:05 <cardoe> You have an office with a lot of computers. So let's say you decide to use 192.168.10.0/24 for floor 1 and 192.168.20.0/24 for floor 2.
18:06 <cardoe> In Neutron terms, it's one office network, so it's 1 Neutron Network. Traffic can get between all machines.
18:06 <cardoe> It's 2 segments on that network. Hence that code clif linked walking the segments.
18:07 <cardoe> Ports of computers on floor 1 would have physical_network = floor 1, and on floor 2 would have physical_network = floor 2.
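cardoe's floors example, sketched with openstacksdk's routed-networks segments API (names, VLAN IDs, and CIDRs are made up):

```python
import openstack

conn = openstack.connect(cloud='example')  # assumed clouds.yaml entry

# One Neutron network for the whole office...
net = conn.network.create_network(name='office')

# ...with one segment per floor, each on its own physical_network.
for floor, vlan, cidr in [('floor1', 10, '192.168.10.0/24'),
                          ('floor2', 20, '192.168.20.0/24')]:
    seg = conn.network.create_segment(
        network_id=net.id, network_type='vlan',
        physical_network=floor, segmentation_id=vlan)
    conn.network.create_subnet(
        network_id=net.id, segment_id=seg.id,
        ip_version=4, cidr=cidr)
```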
18:07 <JayF> that check is in the context of a single vif though
18:07 <JayF> that code only runs with a single vif coming in
18:07 <cardoe> Yep.
18:07 <JayF> so if it shouldn't have >1 physical_network, isn't it neutron's job to enforce that?
18:08 <JayF> That's why I'm struggling here; it feels like if there's a check to happen, it shouldn't be happening in Ironic
18:08 <JayF> because that code is 100% just about mapping neutron ports to ironic port-likes
18:08 <cardoe> oh. sure, you can toss that check.
18:09 <JayF> and there's no reason to blow up if >1 physical network is returned by neutron, because we should still be able to whittle it down to one using TBN
18:09 <JayF> or else TBN itself blows it up
18:09 <JayF> all I'm trying to get to the heart of is what our responsibilities are at vif_attach time
18:09 <cardoe> If we fix https://bugs.launchpad.net/networking-generic-switch/+bug/2114451 then that check is unnecessary.
18:09 <JayF> and it seems to me that check is WELL OUTSIDE them
18:10 <JayF> I don't think that bug applies in all cases, but the check does
18:10 <cardoe> Well lemme continue.
18:10 <cardoe> So, simple setup of the floors, correct?
18:11 * JayF is trying hard to think about this as inputs/outputs to vif_attach without modeling the world :(
18:11 <JayF> yeah, N subnets in a single (v(x))LAN
18:11 <cardoe> But that's too simplified a version of the world.
18:11 <cardoe> That's not how the real world is.
18:12 <cardoe> You've really got 10 switches on floor 1 serving up that 192.168.10.0/24 block.
18:12 <JayF> I've literally run a LAN configured like this, btw
18:12 <JayF> except for data/voice subnets
18:12 <cardoe> I mean I would hope so
18:13 <cardoe> So in the older aggr model of the world you'd just stretch a VLAN across floor 1
18:13 <cardoe> But your overall traffic suffers
18:13 <JayF> yes, but that does still represent a physical network design that exists and is used in OpenStack cloud contexts
18:14 <cardoe> So you actually put each of your 10 switches on different VLANs which are serving up 192.168.10.0/24
18:14 <cardoe> In terms of Neutron, each of those 10 switches is still its own segment
18:14 <JayF> this is where you lose me in terms of the real world; this sounds bananas to me
18:14 <cardoe> That's how spine-leaf works
18:15 <cardoe> The MAC addresses are registered as being part of 1 leaf, so the spine traffic just gets directed where it needs to go, ensuring your overall throughput.
18:15 <TheJulia> oh gawd lots of talking
18:16 <cardoe> Your VLAN stretched across the entirety of floor 1 is really divided up.
18:16 <TheJulia> I just opened some wine for a beef stew I was starting, should I be pouring myself a glass?
18:17 <TheJulia> (and how do y'all get that far into a discussion on networking while I started a stew!)
18:17 * cardoe shrugs.
18:17 <cardoe> This is why I'm struggling with this whole issue.
18:18 <cardoe> Cause I can never describe it concisely and everyone tells me I'm crazy.
18:18 <TheJulia> crazy on what aspect?
18:18 <JayF> I think you're describing *a* possible design, not *the only* possible design
18:18 <JayF> and I'm really trying to keep a super generic hat on to help clif get past his specific TBN question
18:18 <TheJulia> we do a lot of what JayF notes
18:18 <TheJulia> ++
18:18 * TheJulia summons the glasses so she can actually read
18:19 <cardoe> JayF: I'm describing how VXLAN works.
18:19 <JayF> which at this point sounds like the answer is: that check is not valid in all cases, and even when it's valid it's not in the correct place, so we should remove it and the broken cases (which are seemingly already broken in some ways) stay broken until 2114451 works
18:19 <JayF> cardoe: I don't use vxlan.
18:19 <cardoe> $5 says you do.
18:20 <JayF> I don't have root on any machine with vxlan connectivity, $50000 says that's true ;)
18:20 <TheJulia> OR he *does* and his part of the network doesn't matter because they are doing the handoff
18:20 <JayF> yes
18:20 <JayF> yes yes yes
18:20 <JayF> it's not ironic's problem
18:20 <TheJulia> Bottom line, if one wants dynamic vxlan stuffs, that's a lot
18:20 <JayF> the vlans are pre-curated by the network team
18:20 <JayF> Ironic's job is to decide how many ports to bond together and what vlan to put them on
18:20 <TheJulia> if everyone is willing to pre-curate the awfulness of vlans, it's okay
18:21 * TheJulia has glasses, and pins her hair back to begin reading from the beginning
18:21 <cardoe> I mean, this issue that I'm describing right here is what causes Zuul jobs to go squirrelly
18:21 <cardoe> Cause Zuul uses vxlan to set up the network that the nodesets are part of.
18:22 <TheJulia> as an overlay to serve as an underlay for the devstack jobs
18:22 <cardoe> Yep
18:22 <cardoe> But this same "use the IP to find the segment" bug exists there.
18:22 <cardoe> At least in all the OpenStack Helm jobs.
18:23 <cardoe> What nodeset segment am I on? lemme use my IP address to look this up....
18:23 <JayF> Yeah, I think part of our comms disconnect is how far into the stack I have openstack worrying about
18:23 * TheJulia sees we went way off the rails putting topics into a blender
18:23 <JayF> like I said, the network team owns everything above the instance ports here, so any dynamic craziness is preconfigured as essentially business logic in ngs configs
18:25 <cardoe> Maybe. I suspect you'll have pushback.
18:25 <cardoe> Cause that's not efficient.
18:25 <cardoe> unless you're dealing with a very small number of networks
18:25 <JayF> We are.
18:25 <JayF> Small number of networks at extremely high throughput and low latency
18:25 <JayF> with eye-watering amounts of transit to each individual server
18:26 <clif> so if that check in vif_attach is wrong, can I safely ignore it and carry on? and maybe rip it out as part of ongoing TBN work?
18:26 <JayF> clif: I'd suggest ripping it out for now and expecting that to be a hot topic in code review :)
18:26 <clif> lol ok
18:27 <clif> when it comes to network stuff I know enough to be dangerous, but I'm not an expert at all
18:27 <cardoe> I've had to learn more than I ever cared to know.
18:27 <TheJulia> Can we have a meeting that is called "The Topic Frappe"?
18:27 <JayF> I know a TON about low-level networking on linux in 2005.
18:28 <JayF> Too bad time passes and everyone overlays 5000 things on top now
18:28 <cardoe> JayF: I promise you, if you're dealing with high-throughput stuff, this segment stuff is gonna matter.
18:29 <JayF> I don't care if it matters; I care about answering the specific question about that block of code for TBN
18:29 <JayF> I am not hungry enough to eat the whole (proverbial) elephant today, I just want a couple of bites :)
18:30 <TheJulia> At the level which you are/will/can integrate today.
18:30 <TheJulia> Which is separate from a desirable future state
18:31 <JayF> yeah, I can never fully understand the picture in enough detail to reason about it, so I try to zoom in and solve it a bite at a time
18:31 <JayF> TBN is one of those bites
18:32 <cardoe> So the issue really is that the implementation of get_physnets_by_port_uuid() is wrong
18:32 <cardoe> If https://bugs.launchpad.net/networking-generic-switch/+bug/2114451 were fixed
18:33 <cardoe> that function could only ever return 1
18:33 <cardoe> I've already submitted a patch to remove the check.
18:34 <cardoe> Well, how about this... +W https://review.opendev.org/c/openstack/ironic/+/964570
18:34 <cardoe> And that gets me one step closer to fixing it
18:35 <cardoe> https://review.opendev.org/c/openstack/ironic/+/952168 that's where I attempted to get rid of the check as much as possible
18:37 <TheJulia> excellent! I'll try to review soon. I'm having to chase down a what-if question for downstream stuff right now
18:38 <cardoe> Well, that last one is a -1 from me
18:38 <TheJulia> on your own change?
18:38 <cardoe> Yes.
18:39 <TheJulia> There was a reason, fair enough
18:39 <cardoe> I don't have that env setup anymore, but that was the pure NGS environment.
18:40 <cardoe> Neutron would plug the port in correctly to the server on a VLAN network (I wasn't doing VXLAN in that env... I mean I was, but it was like what JayF's describing: the network folks set it up for us and we just used the VLANs)
18:40 <cardoe> But then when you tore the server down, Neutron and Ironic get out their dart board and pick a random switch to tell NGS to disconnect from. And things go sideways.
18:43 <TheJulia> because switches are modeled in that as well? (which is valid, but I guess there was overlap in the switch selections?)
18:48 <cardoe> Because it calls unplug_port_from_segment
18:49 <cardoe> Which reads the VLAN from the segment and the switch to talk to from the port
18:49 <cardoe> And it walks the segments to clean up, and it has no clue what the relationship is between segments and switches.
18:50 <cardoe> Wait, sorry, other way around
18:50 <cardoe> it walks the switches and does list_of_segments[0]
18:51 <cardoe> So the check prevents you from attaching a vif that would have len(segments) > 1
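A hedged paraphrase of the teardown behavior cardoe describes (illustrative pseudocode, not the actual networking-generic-switch source; the helpers are hypothetical):

```python
# On teardown, the driver walks the switches cabled to the port but
# blindly takes the first segment -- it never recorded which segment
# the original attach actually used.
for switch in switches_for_port(port):      # hypothetical helper
    segment = list_of_segments[0]           # wrong when len(...) > 1
    unplug_port_from_segment(port, segment, switch)
```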
18:55 <TheJulia> that makes sense then
19:58 <TheJulia> (since you shouldn't... really.)
20:46 <cardoe> JayF: let's just put it another way... the point of that check is that at detachment time you won't know the physical_network that the attach happened with... that's why it's preventing that... https://review.opendev.org/c/openstack/ironic/+/964570 puts the physical_network into the port's binding_profile at attachment time, so then you know it at detachment time
20:47 <cardoe> So even if neutron hands us back a list, we can find the right one.
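The shape of that fix, sketched with openstacksdk (where the key lands in binding:profile is assumed from the chat; see the review above for the real change):

```python
# At attach time: record which physnet this binding actually used.
conn.network.update_port(
    vif_id, binding_profile={'physical_network': chosen_physnet})

# At detach time: recover it instead of guessing among segments.
port = conn.network.get_port(vif_id)
physnet = (port.binding_profile or {}).get('physical_network')
```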
21:30 <JayF> TheJulia: how would you feel about: if node.provision_state in _UNPROVISION_STATES or node.provision_state not in ironic_states.PROVISION_STATE_LIST (https://github.com/openstack/nova/blob/master/nova/virt/ironic/ironic_states.py#L178): do the undeploy
21:30 <JayF> TheJulia: that might be the backportable fix that would also solve us going forward
21:30 <JayF> without moving fully to a try/fail model
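JayF's proposed condition written out (a sketch; _UNPROVISION_STATES and PROVISION_STATE_LIST are the nova-side constants from the link above, and the surrounding helper is assumed):

```python
from nova.virt.ironic import ironic_states

def _should_undeploy(node):
    # Undeploy for the known teardown-worthy states, and also for any
    # state this copy of nova does not recognize (e.g. states added by
    # a newer Ironic), instead of silently skipping the undeploy.
    return (node.provision_state in _UNPROVISION_STATES
            or node.provision_state not in ironic_states.PROVISION_STATE_LIST)
```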
21:48 <TheJulia> .... maybe.... I need to click the link in a minute
21:53 <TheJulia> I *think* that would cover it
21:53 <TheJulia> because it would still trigger on the last
22:09 <JayF> I'm going to propose it
22:57 <TheJulia> I was thinking about what cardoe was indicating yesterday, no logs (hint hint). I think a thing we might want to consider is actually blocking for some updates and just letting a thread wait until it can have a lock
22:57 <TheJulia> which is bad, but... maybe it is needed? :\
22:57 <TheJulia> Alternatively, "deferred tasks"
23:09 <cardoe> I pasted, no?
23:09 <cardoe> I'll grab them again
23:09 <cardoe> uWSGI and ironic question...
23:09 <cardoe> OpenStack Helm sets the number of uWSGI processes equal to api_workers. Is that correct?
23:10 <JayF> I have an email from a DMTF member requesting feedback on the Redfish specification. I suspect I'm not the best person to give them full feedback. Is someone else interested in having a chat?
23:11 <cardoe> janders is probably your best person.
23:13 <cardoe> TheJulia: you wanted nova-compute-ironic?
23:15 <TheJulia> JayF: I might be willing, but do sort of concur janders might be a good candidate.
23:15 <TheJulia> cardoe: huh?!
23:15 <cardoe> the logs
23:15 <JayF> I'm mainly seeing if they want a chat or if they want a document; I'll rope TheJulia/janders in when it gets to rubber-hits-road
23:16 <TheJulia> OH, ironic-conductor, ironic-api. I'm 90% sure nova-compute did the right thing but I'd happily look at the logs too.
23:17 <TheJulia> JayF: I wouldn't want to overload them, but truthfully maybe some solidified community feedback from more than one person might be most impactful, or just go "hey, there are several people you might gain insight from..."
23:17 <JayF> My preferred model would be an etherpad, probably with split experiences labelled by person, with a sync chat with one or two of us (different companies ideally) to go over it
23:17 <TheJulia> since nova-compute was in the driver's seat, likely best to frame it that way
23:17 <TheJulia> ++
23:17 <TheJulia> Yeah, that is likely for the best
23:18 <TheJulia> It's not mraineri, is it?
23:20 <JayF> No.
23:23 <JayF> RFR for ironic driver fix: https://review.opendev.org/c/openstack/nova/+/967941
23:28 <cardoe> whelp... I clearly don't know how to copy and paste out of grafana... it looks like turds... https://gist.github.com/cardoe/b0aefe21b1fc7b81c38bed8dad8e14b2
23:28 <JayF> cardoe: we used to have a rule in the cluster @ Yahoo and the cluster @ RAX Cloud Monitoring which could detect tracebacks and assemble them together into one actual logline for the dash
23:28 <JayF> cardoe: it was very nice
23:29 <JayF> cardoe: (note: those places used Splunk and Graylog respectively, so I have no tech help to offer lol)
23:29 <cardoe> yeah, that's something we need to do.
23:30 <cardoe> In the prior setups it did that.
23:34 <cardoe> So, any thoughts on uWSGI and api_workers?
23:34 * JayF has zero information on that for ya
23:34 <cardoe> I already found a place where they diverge from what Neutron wants.
23:34 <TheJulia> There is tons of disagreement out there regarding workers
23:36 <TheJulia> ... Personally, you want enough workers to serve the requests. Technically, I think each worker should scale, but some others think it should be single-threaded off the worker. There is a launch cost to the worker, but most don't want to pay that over and over
23:41 <JayF> fwiw, as a follow-up to that ironic-driver fix, I might split the constants defining our states from the actual state-machine code into a separate file, and rework the nova-side driver to use a copied-over version of that file
23:51 <cardoe> then maybe as a follow-on put it into a separate package to make it versioned... maybe call it ironic-lib?
23:51 * cardoe sees himself out.
