Wednesday, 2024-09-11

01:30 *** mhen_ is now known as mhen
02:39 <opendevreview> Merged openstack/whitebox-tempest-plugin master: Verify vTPM creation after svc restart  https://review.opendev.org/c/openstack/whitebox-tempest-plugin/+/927306
07:29 <opendevreview> Martin Kopec proposed openstack/tempest master: Drop centos 8 stream jobs  https://review.opendev.org/c/openstack/tempest/+/923152
07:57 <opendevreview> Martin Kopec proposed openstack/tempest master: Parametrize target_dir for the timestamp  https://review.opendev.org/c/openstack/tempest/+/913757
08:03 <opendevreview> Martin Kopec proposed openstack/tempest master: Add releasenotes page for version 40.0.0  https://review.opendev.org/c/openstack/tempest/+/928886
09:33 <opendevreview> Merged openstack/tempest master: Fix AttributeError with 'SSHExecCommandFailed'  https://review.opendev.org/c/openstack/tempest/+/927424
11:19 <opendevreview> Martin Kopec proposed openstack/tempest master: Parametrize target_dir for the timestamp  https://review.opendev.org/c/openstack/tempest/+/913757
14:04 *** whoami-rajat_ is now known as whoami-rajat
14:36 <opendevreview> Takashi Kajinami proposed openstack/devstack master: Create s3 endpoints in swift  https://review.opendev.org/c/openstack/devstack/+/928926
14:39 <opendevreview> Takashi Kajinami proposed openstack/devstack master: Create s3 endpoints in swift  https://review.opendev.org/c/openstack/devstack/+/928926
14:40 <opendevreview> Takashi Kajinami proposed openstack/devstack master: Create s3 endpoints in swift  https://review.opendev.org/c/openstack/devstack/+/928926
14:46 <opendevreview> Takashi Kajinami proposed openstack/devstack master: Create s3 endpoints in swift  https://review.opendev.org/c/openstack/devstack/+/928926
14:46 <frickler> clarkb: fungi: another failure that looks related to raxflex, likely because there is no IPv6 available there https://zuul.opendev.org/t/openstack/build/7d06160f4a0d4ea180378da764f9b661
14:48 <frickler> was discussed in the neutron channel earlier, but I'm pretty sure it needs a fix in devstack. will take a closer look tomorrow unless one of you is faster
14:48 <opendevreview> Takashi Kajinami proposed openstack/devstack master: Create s3 endpoints in swift  https://review.opendev.org/c/openstack/devstack/+/928926
14:48 <fungi> don't we have any other providers without ipv6 routed?
14:52 <frickler> I don't think so. one could argue that nowadays nodes without IPv6 are broken and we should defer raxflex usage until this is fixed. but I still think it is a bug in devstack to have that assumption baked in
14:52 <clarkb> inmotion has no ipv6
14:52 <clarkb> ovh has it but I'm not sure if we configure it, because for the longest time the necessary info wasn't available in config drive
14:52 <clarkb> you had to get the details from the neutron/nova api directly and then configure things statically (no RAs either)
14:53 <clarkb> I think that may have changed transparently for us at some point but I haven't confirmed it
14:54 <JayF> Is it possible that Rackspace is force-disabling IPv6 at a kernel level?
14:54 <JayF> I know it was common practice to do that in some environments when I worked there
14:54 <clarkb> we control the kernel
14:54 <JayF> Good to know
14:55 <fungi> yeah. the kernel command line and kernel package are all part of the images nodepool/dib builds for us
14:55 <clarkb> I think we enable things by default, since a cloud that only does RAs and no config drive should work too
14:58 <opendevreview> Brian Haley proposed openstack/tempest master: Wait for instance ports to become ACTIVE  https://review.opendev.org/c/openstack/tempest/+/928471
15:03 <clarkb> a random stackoverflow answer says that debian bookworm complains when you edit the interface live (e.g. after it has been brought up)
15:08 <dtantsur> that would explain the randomness
15:09 <clarkb> this is an ubuntu jammy node though
15:10 <clarkb> rather than being ipv6 related (since I'm pretty sure inmotion at least is in the same boat), could it be the network device type?
15:10 <clarkb> similar to what we saw with ephemeral and swap devices being confused, perhaps we've got a different type of network device and that has different behaviors?
15:10 <opendevreview> Ihar Hrachyshka proposed openstack/devstack master: Dump sysctl in worlddump  https://review.opendev.org/c/openstack/devstack/+/928929
15:13 <clarkb> module: virtio_net is what is in use on that job and the mtu is small: 1442 https://zuul.opendev.org/t/openstack/build/7d06160f4a0d4ea180378da764f9b661/log/zuul-info/host-info.controller.yaml#386
15:13 <clarkb> could it be that you are trying to set a larger mtu than can transit that "physical" device?
15:13 <clarkb> trying to find an example from a kvm host on a different cloud next
15:15 <clarkb> from a random job https://zuul.opendev.org/t/openstack/build/a5d6e96a8ba74b2cbfdf43a8d298c8d8/log/zuul-info/host-info.primary.yaml#335-336 this is openmetal (sorry, I kept saying inmotion, it's too early to get names right, I meant openmetal) without ipv6 but the mtu is 1500
15:15 <clarkb> so that's my best guess at the moment, that the MTU is the problem
15:16 <clarkb> that's the same ubuntu jammy test node type using the same virtio_net kernel module in a different cloud, but also without ipv6
15:20 <JayF> that is a pretty good thought
15:21 <haleyb> clarkb: one thing (a bug) we have seen in neutron recently is that when the MTU is below 1280, IPv6 config fails, but this should be well above that
15:22 <clarkb> also note that both interfaces have link-local ipv6, so we haven't killed ipv6 in the kernel
15:22 <clarkb> haleyb: ya it's 1442, which should be plenty of headroom for a couple extra layers of vxlan nesting :)
15:23 <frickler> humm, having MTU < 1500 for tenant networks is also a bug IMNSHO
15:25 <clarkb> I would say lack of global ipv6 and smaller mtus are not ideal, but things should work regardless
15:25 <clarkb> I mean I don't have native ipv6 from my isp yet
15:25 <haleyb> i think you'll usually see 1450-ish in a VM unless you have a jumbo overlay, and that's been ok for years
15:25 <clarkb> and smaller mtus are common with dsl aiui
15:25 <JayF> those assumptions get less true with Ironic, I think. I've gotta look, but I think there's a bug around MTU path discovery in OVN or OVS? /me needs to look at notes to remember
15:26 <JayF> this is likely why Ironic breaks: I suspect we're manually setting an MTU somewhere
15:26 <clarkb> this is still good feedback that we can give rackspace, but I wouldn't say it's broken, just not ideal, and the jobs should handle it because you don't know what people will have on their laptop at home. It's perfectly valid for my local reproducer at home to have no ipv6 and a smaller mtu
15:26 <JayF> While we are figuring out how to make the Ironic job happy on flex, is there something we can do to reduce the impact to our CI in the meantime?
15:27 <clarkb> JayF: set the job to non-voting maybe?
15:27 <clarkb> you can't easily exclude a cloud from running your jobs (the only way you can approximate it here is to use the nested virt label, since raxflex doesn't participate in that but likely will soon)
15:28 <clarkb> and I don't think we should turn off a cloud until we have evidence it is doing something wrong, and I don't see any evidence of that yet
15:28 <dtantsur> That's not one job, it's many devstack jobs failing at random
15:28 <clarkb> dtantsur: are they all ironic devstack jobs? neutron is supposed to find the smallest mtu and calculate what its overlay should be
15:29 <dtantsur> I think so
15:29 <clarkb> (to address this specific issue, because this isn't the first time we've had cloud resources with small MTUs)
15:29 <opendevreview> James Parker proposed openstack/whitebox-tempest-plugin master: Update docstring for VirtQEMUdManager  https://review.opendev.org/c/openstack/whitebox-tempest-plugin/+/928930
15:29 <clarkb> (but also, again, it makes things work on laptops in a home environment where we don't have as much say on MTU size)
15:29 <dtantsur> we definitely have a ton of MTU logic in our devstack plugin (TheJulia may have more context on it)
15:30 <clarkb> also this is almost entirely a problem of our own design :) if openstack didn't use cloud overlays so aggressively then it would be less common to cut into the MTU size
15:30 <dtantsur> https://opendev.org/openstack/ironic/commit/cf074202e50426365c761326c8d2ccfcce4ad916 and https://opendev.org/openstack/ironic/commit/40e825ba93c8fb0bcc2d9ef0428b64a46286e0c0
15:30 <clarkb> something something nova-network
15:30 <dtantsur> :D
15:31 <clarkb> dtantsur: ok, we should look at the logs from the failing job to see if those variables are calculated properly
15:32 <clarkb> https://zuul.opendev.org/t/openstack/build/7d06160f4a0d4ea180378da764f9b661/log/job-output.txt#2557-2560
15:32 <clarkb> that value is smaller than 1280
15:33 <dtantsur> yeah, it's the ironic calculation https://zuul.opendev.org/t/openstack/build/7d06160f4a0d4ea180378da764f9b661/log/controller/logs/devstacklog.txt#656
15:33 <dtantsur> if only I remembered why we did that...
15:34 <clarkb> the comment says you are handling ipv6 tunnels, which use an overhead of 100 bytes vs vxlan's 50
15:34 <clarkb> I don't think we use ipv6 tunnels in CI unless ironic is doing that themselves (the default CI overlay networking is all vxlan because it works everywhere)
15:35 <clarkb> maybe local_mtu should try something like max(calculated_value, 1280) to see if the issue haleyb calls out is the problem?
15:35 <dtantsur> this is the point where I can only drop on the floor, cry, and hope that TheJulia is around
15:35 <dtantsur> networking is not my strength :(
15:36 <dtantsur> but I can try the max() logic, sure
15:37 <JayF> it'd be hilarious in an incredibly sad kinda way if we run outta room in the MTU
15:37 <JayF> dtantsur: I think clarkb and haleyb hit the nail on the head, I would +2 such a change (assuming it passed CI)
15:38 <dtantsur> let's see https://review.opendev.org/c/openstack/ironic/+/928931
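
(A minimal sketch of the clamp being discussed, in devstack-style bash. PUBLIC_BRIDGE_MTU and the 100-byte overhead come from the log above; the variable names and structure are illustrative only and are not the actual patch linked by dtantsur.)

    # subtract the assumed tunnel overhead, but never drop below the IPv6 minimum MTU
    overhead=100                                   # ipv6 tunnel overhead assumed by the plugin
    local_mtu=$(( PUBLIC_BRIDGE_MTU - overhead ))
    if [ "$local_mtu" -lt 1280 ]; then
        local_mtu=1280                             # 1280 is the minimum MTU IPv6 will accept
    fi
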
15:38 <JayF> I was reading thru scrollback and read the number we were setting and thought "that's too low for linux to accept" before I even saw the earlier comments
15:38 <JayF> we have the weirdest problems :)
15:38 <clarkb> PPPoE is the standard that often results in smaller MTUs in the home network environment (fwiw)
15:39 <clarkb> and is often used with dsl
15:39 * dtantsur does not miss PPPoE
15:40 <haleyb> with just ipv4 you can set mtu to 576 i think, but i did just double-check that setting it to 1279 and adding an IPv6 address fails with the EINVAL error
15:40 <clarkb> haleyb: thanks! that is probably the issue then
15:42 <haleyb> PUBLIC_BRIDGE_MTU=1272 - oh, yuck, yeah that's probably it
15:42 <haleyb> too bad the message from /sbin/ip isn't more helpful
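
(A quick way to see what haleyb describes, sketched with a dummy interface; the interface name and address are placeholders, and the failing step is the behavior reported above rather than something re-verified here.)

    # as root; requires the dummy kernel module
    ip link add dev dum0 type dummy
    ip link set dev dum0 mtu 1279          # below the IPv6 minimum of 1280
    ip addr add 2001:db8::1/64 dev dum0    # expected to be rejected (EINVAL) per the report above
    ip link del dev dum0                   # clean up
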
15:44 <clarkb> I'll just have to file that away into the esoteric-but-useful knowledge bin and hope i remember it next time
15:47 <haleyb> at least it has moved out of the neutron tribal knowledge bucket, i literally have my third patch up fixing this issue in neutron, we already changed the API to return a 40x if the mtu is too low for the address family
15:48 <JayF> I wouldn't assume because Ironic is in on it that you've expanded the family much ;)
15:48 <JayF> I think Ironic and Neutron have always been closer than many other projects since we share a lot of similar problems + the ngs / nb ironic projects
15:53 <haleyb> neutron + 1 > neutron is all i can hope for :)  if that "- 100" changed to something more specific, like IPv6 + Geneve, it would probably solve this, let me just double-check that number
15:53 <JayF> I think we're probably OK with going lower than needed now that we have a minimum to ensure no breaking
15:54 <JayF> I still think there's a piece at work here that Julia was mentioning, around needing an even lower MTU than makes sense in OVN use cases
15:56 <haleyb> so with OVN the minimum geneve header is 38 bytes, so adding IPv6 (40) makes it 78
15:58 <haleyb> for example, if you spun up a multi-node devstack with OVN, the IPv4 overlay puts tenant mtu at 1442 and the IPv6 overlay at 1422, which jives with what i've seen
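
(The arithmetic behind those numbers, written out using the figures quoted above; the overhead is measured against a 1500-byte physical MTU and excludes the Ethernet header.)

    GENEVE_OVERHEAD=38   # minimum OVN geneve + UDP overhead, per the figure above
    IPV4_HDR=20
    IPV6_HDR=40
    PHYS_MTU=1500
    echo "tenant MTU, IPv4 underlay: $(( PHYS_MTU - IPV4_HDR - GENEVE_OVERHEAD ))"   # 1442
    echo "tenant MTU, IPv6 underlay: $(( PHYS_MTU - IPV6_HDR - GENEVE_OVERHEAD ))"   # 1422
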
16:24 <TheJulia> dtantsur: huh what?!
16:24 <TheJulia> dtantsur: sorry, heads down in a gnarly backport down to train, what's up?
16:24 <dtantsur> TheJulia: I think we solved it, sorry for bothering
16:25 <dtantsur> tl;dr the MTU logic drove our MTU too low to be usable
16:25 <TheJulia> in what case?!?
16:25 <dtantsur> if my patch works, we're saved. if not, we're.. well.. not
16:25 <TheJulia> where?!
16:25 <TheJulia> okay
16:25 <TheJulia> if it doesn't, do we start a streaming drinking party?
16:25 * TheJulia is game, has outdoor TV and 4k camera and everything.
16:26 <dtantsur> I think it will be our only option :D
16:26 <TheJulia> Excellent!
16:26 <TheJulia> please summon me if it doesn't work, in the meantime I'm teleporting my brain back to the land of Train
16:26 <dtantsur> choochooo!
16:27 <TheJulia> chooooo chooooo (Sign when you've lost remaining sanity!)
16:43 <clarkb> dtantsur: haleyb JayF TheJulia so one thing to consider is that vxlan is 50 bytes. Ironic is doing overlays to simulate a network for testing purposes. You don't need to support every option available, including ipv6 + geneve. You just need one that works for CI. Why not use vxlan consistently, since it is small and has worked for years?
16:43 <clarkb> that said, if you can reduce the -100 math to -78 that may be good enough too
16:44 <clarkb> my point is more that, unlike a real cloud deployment that may want to support different tooling to accommodate different environments, we can be highly opinionated in what we use for our test-specific overlays, and one criterion may be to choose the most byte-efficient option to avoid problems like this
16:44 <JayF> I'm game to have this discussion, but we need Julia to have time to contextualize it and participate
16:44 <JayF> I believe some of this was added specifically for our OVN job, but I'm not sure
16:45 <TheJulia> If memory serves, when OVN is in the mix it gets encapsulated again and we need to artificially deflate the networking, because otherwise it assumes it has 1500 bytes at the wire when with virtual networking it does not
16:45 <haleyb> clarkb: ipv4 + vxlan is 50 bytes, ipv6 + vxlan is 70, ipv6 + geneve is 78, which is where i got that number from
16:46 <clarkb> haleyb: got it
16:46 <TheJulia> (i.e. OVN always assumes it has a bare interface, which is kind of bonkers, but I can see where they came from)
16:46 <clarkb> TheJulia: oh, you can't tie an OVN interface and a tunnel interface together with a bridge of some sort?
16:47 <clarkb> anyway I think it is worth considering using the smallest possible tunnel option when building a fake l2 network for multinode testing, and that would be vxlan aiui
16:47 <TheJulia> clarkb: you can, it just doesn't grok that the mtu is anything less
16:47 <clarkb> we don't actually need to support different tooling as long as the other tooling can run over the top of that opinionated overlay
16:48 <clarkb> TheJulia: right, you have to manually configure the MTUs lower because without an l3 interface/device there is nothing to respond with icmp fragmentation packets
16:48 <TheJulia> well, the bottom line is it doesn't know how to do the reduction in size, so we artificially knock the mtu down so the host configures itself, because ovn, at least when we looked, entirely lacked support to say "use a smaller mtu"
16:48 <TheJulia> which again, is bonkers
16:48 <clarkb> since we're joining many l2 connections together we have to manually configure the lowest mtu across all of them
16:48 <TheJulia> I have a list of bonkers things
16:49 <TheJulia> someplace...
16:49 <clarkb> right, that is an l2 vs l3 problem and it affects all the things, not just ovn
16:49 <TheJulia> yeah, but OVN prevents the ability to discover it out of the box
16:49 <clarkb> because fragmentation responses rely on icmp, which relies on having an ip address to send them from, which implies an l3 interface
16:49 <TheJulia> yup
16:50 <TheJulia> which OVN in some cases just doesn't *really* have
16:50 <clarkb> right, this is true with neutron and ovs or linux bridges too
16:50 <clarkb> and it just means we have to manually configure the appropriate min mtu value on all the devices
16:50 <TheJulia> well, in those cases, you do have the interfaces with the real bindings so it inherently just sort of works
16:50 <TheJulia> because your networking node, or at least attached namespaces, reply
16:50 <clarkb> not in the CI setup
16:50 <TheJulia> "oh, your mtu is too big!"
16:51 <clarkb> or with neutron just out of the box
16:51 <clarkb> we struggled with this for years until finally neutron implemented explicit management of mtus on all the devices iirc
16:51 <TheJulia> it requires explicit configuration
16:51 <TheJulia> yup
16:51 <TheJulia> so when configured, it got asserted and magical happiness and joyous sea shanties ensued
16:51 <clarkb> anyway, since we can control the CI-specific overlay setup we can do things like use vxlan over ipv4 as a rule
16:52 <clarkb> now I think it is fair to say maybe we should also allow for vxlan + ipv6, since people may want to run this locally as well
16:52 <clarkb> so we could do a 70-byte subtraction for each layer and then get back 30% for each layer, which should be enough headroom
16:52 <TheJulia> well, you can't control the ovn interaction side of it because it doesn't grok the lower mtu on what it asserts
16:52 <haleyb> please use IPv6 + Geneve or OVN won't work
16:53 <clarkb> haleyb: why can't you tunnel that over a vxlan tunnel?
16:53 * TheJulia steps away due to the corgi attempting to sign the bark chain
16:53 <clarkb> that was my earlier point, ovn and geneve shouldn't care if you give them an interface that just happens to be part of a bridge with a vxlan tunnel to another node
16:54 <clarkb> the current multinode network overlay setup for zuul jobs uses ipv4 + vxlan
16:54 <haleyb> clarkb: are you talking about native encapsulation? or is this about just shoving packets into an overlay?
16:54 <clarkb> that presents an interface to the jobs on each node in the job with an mtu of host mtu - 50. Then you run ovn or whatever else you want against that interface and potentially reduce the mtu further to handle the extra layer of nesting
16:54 <haleyb> if you have ipv4 + vxlan it will work I suppose, but there will be fragments
16:55 <clarkb> haleyb: I'm talking about multinode testing in zuul being given a fake l2 network to make the test setup look more like reality, since we can't provide them actual l2 networks like that from the cloud providers
16:55 <clarkb> haleyb: why would there be fragments? if you know the parent mtu is 1450, for example, and geneve + ipv6 needs another 78 bytes, you'd configure the innermost neutron overlays to use 1450 - 78 = 1372 byte mtus
16:56 <TheJulia> clarkb: my context https://github.com/openstack/ironic/blob/master/doc/source/admin/ovn-networking.rst#maximum-transmission-units
16:56 <clarkb> let me see if I can find the old documentation for this in devstack
16:56 <haleyb> ok, so a zuul overlay network
16:56 <TheJulia> I've not dug into the bug recently to see if there are any updates
16:56 <clarkb> haleyb: essentially yes, and my understanding is that ironic configures one of these because they are doing fake baremetal vms that should all live on the same "physical" network
16:57 <clarkb> essentially we've got two layers here. The outermost is approximating the physical cables between hosts in a virtual environment. Then you've got the inner tenant isolation cloud network overlays that run over that
16:57 <clarkb> we all get confused because the outer layer is actually an overlay layer too, because we don't have physical cables between hosts nor do we have fancy control of tenant networks in half the clouds
16:58 <JayF> the OVN bug Julia points at is, I think, a primary root cause of this being more nonsensical in Ironic
16:58 <clarkb> I don't think that is actually an ovn bug
16:58 <clarkb> it's just how icmp and mtu fragmentation work, and unfortunately it requires us to work around it
16:58 <haleyb> clarkb: ok, makes more sense now, thanks
16:58 <JayF> https://github.com/ovn-org/ovn/blob/main/TODO.rst disagrees clarkb
16:59 <JayF> at least they count handling fragmentation on outgoing packets as a todo
16:59 <JayF> I don't know enough about OVN to know how many layers are there; but it's directly in that project's todo doc
16:59 <clarkb> JayF: but you have to manually configure it anyway, is my point
16:59 <clarkb> yes, they may not implement the actual icmp fragmentation protocol, but that doesn't matter because you're going to have to manually configure things if you have any l2-only devices
16:59 <haleyb> that TODO might be a little out of date if you ask me, but there are some issues where we've not seen a packet-too-big where we expect it
16:59 <clarkb> haleyb: https://opendev.org/openstack/devstack-gate/src/commit/9cfd5cca0a3b1dbfe8f1fefd836942d20425f172/multinode_setup_info.txt here are the really old docs on this
17:00 <clarkb> JayF: in the case of neutron + ovs or linux bridge we have had the same issues in CI because you end up with like one l3 device for every 5 l2 devices
17:00 <clarkb> and the only way to make that reliable is to manually configure the lower mtu across the board and not rely on icmp and automatic fragmentation
17:00 <JayF> I think that's what we were trying to do
17:01 <JayF> we just set the limbo bar lower than the floor
17:01 <clarkb> yes. And my point is we can optimize it even further by using vxlan because it is lighter weight
17:01 <haleyb> https://github.com/openstack/neutron/blob/master/doc/source/ovn/gaps.rst - see the section on fragmentation/path mtu there, which can lead to tenant packet issues, but i digress into the weeds
17:01 <clarkb> we don't need to support ovn + geneve at that layer as long as ovn + geneve can overlay on top of the original overlay, so we end up with 50 bytes + 78 bytes overhead instead of 78 + 78
17:02 <clarkb> as a side note, discussing this stuff is about 100x easier with the ability to draw pictures. Text makes it difficult
17:03 <haleyb> yes, i would agree that ipv4+vxlan should be fine for your virtual overlay, then anything on top should fit in without going below 1280
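
(For concreteness, the "fake physical" layer being described could be built with plain iproute2 roughly as below; this is a generic two-node sketch, not the actual zuul/devstack-gate role, and the interface names and 203.0.113.x addresses are placeholders.)

    # on node A (mirror on node B with local/remote swapped)
    ip link add br-infra type bridge
    ip link add vx-infra type vxlan id 100 local 203.0.113.10 remote 203.0.113.11 dstport 4789 dev ens3
    ip link set vx-infra master br-infra
    ip link set dev br-infra mtu 1450     # host MTU 1500 minus 50 bytes of IPv4+VXLAN overhead
    ip link set vx-infra up
    ip link set br-infra up
    # anything layered on top (e.g. an ovn/geneve overlay) then budgets its own overhead against 1450
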
17:03 <JayF> yeah, I am having trouble tracking it, but TBH Ironic <> Neutron stuff is a weak point for me I need to beef up on anyway
17:05 <clarkb> historically I think the main thing people get confused about is that we typically end up with two distinct layers of networks in the CI jobs. The outer layer is a set of overlays that may or may not use the same overlay tech as the "workload" networks, but its job is to be there and ensure l2 access amongst the test nodes, approximating them all being physically connected
17:05 <clarkb> on the same switch
17:05 <clarkb> then we have the "workload" networking layer, which may or may not run over that "physical" layer, which is providing all of your tenant networks to your VMs or baremetal devices
17:06 <clarkb> the "physical" layer is test/CI specific and we don't need to support more than one tool/technology there.
17:06 <JayF> the place where it all turns to mush for me is actually in the deepest layers
17:07 <JayF> I've done a lot of networking from a "managing a linux-based load balancer/firewall" perspective but less from the "magic cloud-y switch" perspective :D
17:09 <clarkb> honestly a lot of the old school stuff maps over well. You have bridges as switches and veth pairs as cables. Most of the problems arise from having a bunch of different implementations for all of these things, which sometimes don't play nice together or with standard tooling like tcpdump
17:10 <clarkb> haleyb: can you tcpdump ovn interfaces, or is it like ovs where you have to set up a tap device to bridge between standard networking tooling and the special stuff?
17:13 <JayF> well, contributing to this is that I've literally never worked a place with an openstack cloud that used an upstream-style ironic+neutron networking solution (well, until $curJob, but I'm not really operationally involved at all in that cloud) :D
17:14 <haleyb> clarkb: well there is ovs-tcpdump which makes things a little easier
17:16 <clarkb> I guess that is an improvement. Personally I've always found it useful to do something like tcpdump -i any for 30 seconds and then read through the capture to see what is happening, and ovs in particular breaks that, which hurts because you can use that as a method to understand how things work
17:17 <JayF> I bet that is more helpful in the devstack case. Working on edge servers I often found the signal:noise in a tcpdump was too bad to be useful (especially when the customer won't accept 'your firewall is randomly sending us tcp resets, what are we supposed to do?!' as an answer to their support ticket :D )
17:18 <clarkb> JayF: typically I capture all the things, then you can use a tool like wireshark to filter and see different views like "what did dhcp do" or "was there any http traffic" and so on
17:19 <clarkb> because yes, just looking at a huge raw packet capture all at once is difficult
17:19 <clarkb> but if you have all that data then you can start filtering and piece together where things are moving across the different interfaces to, say, negotiate dhcp
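
(The capture-everything-then-filter workflow clarkb describes, as a generic example; the file name is arbitrary and the filters are just illustrations.)

    # capture on all interfaces for a while, then stop with ctrl-c
    tcpdump -i any -nn -w all.pcap

    # afterwards, slice the same capture however you need, e.g. DHCPv4 or HTTP traffic
    tcpdump -r all.pcap -nn 'udp port 67 or udp port 68'
    tcpdump -r all.pcap -nn 'tcp port 80'
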
17:19 <JayF> back when I was doing this as my day job (we're talking like, 2008-ish), I actually loved the microsoft tool for viewing pcaps, it was way faster than wireshark, at least on windows :D
17:20 <haleyb> what, you can't pick a needle out of a 10G pcap haystack with tcpdump? :-p
17:30 <opendevreview> Merged openstack/tempest master: Add releasenotes page for version 40.0.0  https://review.opendev.org/c/openstack/tempest/+/928886
17:37 <TheJulia> clarkb: so, the underlying issue is that the linux kernel routing code drops pmtu packets if it can't address them back to the source. that was free with OVS with the model of a network namespace being involved, where the routing code would come into effect
17:37 * TheJulia is barely following the thread
17:37 <clarkb> that may be new functionality in ovs then? We definitely had to manually and explicitly set mtus on all the things with ovs and also with linux bridge
17:38 <clarkb> because that would only happen if every interface was configured with an l3 address allowing it to icmp properly
17:39 <clarkb> I'm pretty sure that neutron manages all of this explicitly now as a result
17:42 <clarkb> I left a comment on dtantsur's change basically suggesting that instead of 100 bytes trimmed off you use 78 to accommodate geneve + ipv6, then that will get you more than 1280 bytes for the mtu in the current situation. Also suggested leaving a comment there that 1280 is the min for ipv6
17:42 <TheJulia> oh, if there is a mismatch and the parent layers dropped stuff, it could not just magically work, thus requiring explicit config to know
17:42 <TheJulia> the more layers, the more places stuff can get dropped at
18:45 <clarkb> I sent an email to rackspace about the issue just to keep them in the loop, and I think this is valuable feedback for public clouds when people hit issues, even if they can't fix it immediately or at all
18:51 <opendevreview> Goutham Pacha Ravi proposed openstack/devstack-plugin-ceph master: Skip tempest image format tests  https://review.opendev.org/c/openstack/devstack-plugin-ceph/+/928303
19:18 <opendevreview> Merged openstack/whitebox-tempest-plugin master: Update docstring for VirtQEMUdManager  https://review.opendev.org/c/openstack/whitebox-tempest-plugin/+/928930
20:42 <opendevreview> Sreelakshmi Menon Kovili proposed openstack/whitebox-tempest-plugin master: Discard the cpu-0 from dedicated set  https://review.opendev.org/c/openstack/whitebox-tempest-plugin/+/927641
