jrosser | so it looks like the proxy job was broken with https://review.opendev.org/c/openstack/openstack-ansible-os_keystone/+/889934 | 06:04 |
---|---|---|
jrosser | and unfortunately it does not run on that change | 06:04 |
jrosser | i suspect it's that we don't set `environment:` here https://opendev.org/openstack/openstack-ansible/src/branch/master/playbooks/os-keystone-install.yml#L43-L52 | 06:05 |
opendevreview | Jonathan Rosser proposed openstack/openstack-ansible master: Apply deployment env vars during keystone main_pre https://review.opendev.org/c/openstack/openstack-ansible/+/894004 | 06:09 |
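The fix jrosser describes — setting `environment:` on the keystone play so the deployment proxy variables apply — might look roughly like this (a sketch only; the play layout and the `deployment_environment_variables` name are assumptions based on the usual OSA convention, not the actual patch):

```yaml
- name: Configure keystone
  hosts: keystone_all
  # apply proxy/env vars from user config to every task in the play
  environment: "{{ deployment_environment_variables | default({}) }}"
  roles:
    - role: os_keystone
```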
jrosser | noonedeadpunk: did you follow the octavia discussion yesterday..... looks like my br-octavia/br-lbaas change is pretty incorrect | 07:33 |
jrosser | just different brokenness /o\ | 07:34 |
noonedeadpunk | ah, no, I was already out | 07:37 |
farbod | hello Farbod again | 07:40 |
jrosser | br-lbaas cannot be both from systemd-networkd and ovs at the same time | 07:40 |
jrosser | but would not exist from ovs until neutron role runs | 07:41 |
jrosser | and controller may never have ovs at all with separate network nodes | 07:41 |
farbod | First thing what IRC client do you suggest for linux ubuntu? | 07:42 |
jrosser | farbod: in the past i used irssi+screen, but eventually now i use irccloud for more convenience | 07:42 |
farbod | Thanks | 07:43 |
noonedeadpunk | jrosser: but systemd can create it as ovs bridge? | 07:43 |
jrosser | well it can | 07:43 |
jrosser | but like i say on my controllers there would not be an ovs at all | 07:43 |
noonedeadpunk | mhm | 07:44 |
jrosser | so we need some guidance/examples for different situations | 07:44 |
noonedeadpunk | Well, technically LXC can connect to OVS instead of LXB nicely, but that's a completely different topic | 07:45 |
jrosser | and not just OVS tbh - you need OVN running there too | 07:45 |
jrosser | otherwise the neutron network would not be actually wired to br-lbaas | 07:45 |
farbod | So yesterday i looked at the neutron documentation and OSA network architecture. I realized that there are two bridge networks for instances: br-vxlan and br-ex. as i understand, br-vxlan is the tunnel (overlay) network. but what exactly does this overlay network do? | 07:45 |
noonedeadpunk | ugh | 07:46 |
jrosser | farbod: in openstack there are two kinds of networks, ones which the openstack operator owns and are usually associated with something physical, like your internet connection or some company internal network | 07:47 |
jrosser | these are called provider networks | 07:47 |
jrosser | then there are neutron networks that users can create themselves as part of their project in openstack, these (generally but not always) are internal to the openstack deployment | 07:48 |
farbod | so you mean neutron uses br-vxlan for this kind of networking? I mean additional user-defined networks, subnets etc? | 07:49 |
jrosser | the self-service networks that users create can be implemented using some range of underlying vlan id on an interface, or they can be done with vxlan or geneve overlays | 07:49 |
jrosser | this is why we have a br-vxlan, it's a kind of placeholder interface for some IP address on the compute/network hosts to create endpoints for virtual networks built with vxlan/geneve | 07:50 |
jrosser | in a real deployment you can have an actual interface or bond for this | 07:50 |
farbod | as i understand br-vxlan is the neutron network for additional defined networks in the cluster and br-ex is for external access to the physical net? | 07:51 |
jrosser | thats right (basically) | 07:52 |
farbod | OK | 07:52 |
farbod | an another question | 07:52 |
jrosser | have you looked at things like vxlan before? | 07:52 |
farbod | not really | 07:53 |
jrosser | ok so br-vxlan is not actually a neutron network | 07:53 |
farbod | i think neutron uses br-vxlan for its networks | 07:53 |
jrosser | it is some interface (you can use a different one if you wish) where neutron creates the "tunnel endpoint" | 07:53 |
jrosser | the neutron network that a user creates is actually one of those tunnels, rather than br-vxlan itself | 07:54 |
jrosser | br-vxlan is the "transport" for all the different tunnels | 07:54 |
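For readers new to vxlan, one tunnel endpoint riding on the br-vxlan transport can be sketched as a systemd-networkd netdev (purely illustrative — the VNI, name and address here are made up, and neutron manages its own tunnel endpoints rather than using files like this):

```ini
# /etc/systemd/network/vxlan-100.netdev (illustrative only)
[NetDev]
Name=vxlan-100
Kind=vxlan

[VXLAN]
# the VNI distinguishes this tunnel from every other one sharing the transport
VNI=100
# an address carried on br-vxlan acts as the local tunnel endpoint
Local=172.29.240.11
# standard VXLAN UDP port
DestinationPort=4789
```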
farbod | oh nice | 07:54 |
farbod | so what is the connection between neutron and br-ex? | 07:54 |
jrosser | generally that is where you as the operator of openstack connect your external physical networks | 07:55 |
jrosser | there can be as many as you need | 07:55 |
farbod | so how to create a network and assign IPs from br-ex to instances? | 07:56 |
jrosser | so this is one of the design choices you have to make | 07:56 |
jrosser | in a multitenant openstack a user would be in a project, and in that project they could create their own neutron network with some vlan or overlay approach | 07:57 |
farbod | and another question. yesterday one of you guys provided me a config for br-ex with vlan and flat options: https://paste.opendev.org/show/b93fNbuLFMMR7jPkCDrH/ | 07:57 |
jrosser | then they would create a neutron router, and connect it to their network and the external network. the router would do NAT | 07:58 |
farbod | my question is that in vlan mode does openstack create a VLAN interface for each network and in flat it doesn't? | 07:58 |
farbod | jrosser: i get it | 07:59 |
jrosser | if you want traffic for an external IP to go directly to a VM you can then create a neutron floating IP taken from the range on br-ex | 07:59 |
jrosser | so that's a multitenant approach | 07:59 |
jrosser | however you can also allow VMs to connect straight to external networks, if you want that | 07:59 |
jrosser | pros and cons everywhere | 07:59 |
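The self-service flow described above can be sketched with the openstack CLI (all names and addresses are placeholders; "public" is assumed to be the external, br-ex-backed provider network):

```shell
# create a project network and subnet
openstack network create my-net
openstack subnet create my-subnet --network my-net --subnet-range 192.0.2.0/24

# a router connected to both sides does the NAT
openstack router create my-router
openstack router add subnet my-router my-subnet
openstack router set my-router --external-gateway public

# optionally, give one VM a directly reachable external address
openstack floating ip create public
openstack server add floating ip my-vm 203.0.113.10
```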
jrosser | in the vlan mode you need to give neutron some physical interface and tell it a range of vlan-id that it is allowed to allocate to users | 08:00 |
jrosser | in the OSA reference design thats usually br-vlan | 08:00 |
jrosser | regarding flat networks, that is how you describe to neutron a physical interface with untagged traffic on it, so like a 1:1 mapping | 08:01 |
farbod | you mean br-vlan is for floating IPs? | 08:02 |
jrosser | floating ips are on an external network | 08:02 |
jrosser | usually OSA has br-vlan carrying dynamically allocated tenant/project networks | 08:03 |
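The vlan/flat distinction discussed above ends up in the neutron ml2 and agent config; a hedged fragment (the physnet labels, the vlan range and the linuxbridge mapping are placeholders, not an OSA default):

```ini
[ml2_type_vlan]
# neutron may hand out tags 100-200 on the "vlan" physnet to projects
network_vlan_ranges = vlan:100:200

[ml2_type_flat]
# a flat physnet is a 1:1 mapping to an untagged interface
flat_networks = flat

[linux_bridge]
# how the linuxbridge agent ties physnets to host interfaces
physical_interface_mappings = vlan:br-vlan,flat:br-ex
```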
farbod | OK | 08:03 |
farbod | Thanks | 08:03 |
jrosser | it's worth understanding the concepts in neutron a bit | 08:04 |
farbod | i tried to learn from the documentation but there's not much info | 08:04 |
jrosser | my experience is that flat networks are a lot of trouble | 08:04 |
jrosser | because you have to define them at deploy time and it all gets written into config files and services restarted etc etc | 08:05 |
jrosser | if you use a vlan type then you define the physical interface mapping once in the config | 08:06 |
farbod | but what if i have only one interface and public ips are on that interface | 08:06 |
jrosser | then tell neutron through the openstack cli which tags to use | 08:06 |
jrosser | that is what everyone says to start with | 08:07 |
jrosser | i think that when you say you have only one interface, you mean the whole server only has one interface | 08:08 |
farbod | yes | 08:09 |
jrosser | so you're having to use vlans anyway? | 08:11 |
admin1 | my lxcbr0 disappeared after reboot | 08:11 |
admin1 | never seen :) | 08:11 |
farbod | yes i have vlans but limited to only 5 vlans | 08:12 |
admin1 | you can use 1 for api, 2 for storage 3 for east-west and 4 for neutron provider network and still have 1 more left :D | 08:13 |
farbod | nice | 08:14 |
farbod | and yesterday i had a problem with image uploading | 08:14 |
farbod | how to configure OSA to enable image uploading? | 08:15 |
noonedeadpunk | farbod: you mean from web uri? | 08:20 |
farbod | yes | 08:20 |
farbod | also when i try to create instances from dashboard i get this error: https://paste.opendev.org/show/bzmQ5XhNeCQLVVdETvb7/ | 08:21 |
noonedeadpunk | Tbh I would suggest just uploading the image from a local file for the beginning, and getting the cluster working with a "simple" setup. As I said, it needs interoperable image import configured. Also not all clients/tools/services support the import api, and you need to understand how to configure glance for import to work. | 08:23 |
farbod | 👍️ | 08:23 |
noonedeadpunk | regarding error - you need to check details of volume then, to see why specifically it failed | 08:23 |
noonedeadpunk | like `openstack volume show 44795349-1d1a-49c9-85e9-a8a81cf31330` | 08:26 |
jrosser | farbod: what choices did you make for storage? | 08:26 |
farbod | i think the problem is that i even didn't deploy a storage :D | 08:26 |
farbod | is this configuration OK? : user config: https://paste.opendev.org/show/baowUwNLc0ZdcWPrfmPM/ first infra network node network interfaces: https://paste.opendev.org/show/bcQZPFykWq0Xge3OUwgr/ second node network interfaces which is storage and compute: https://paste.opendev.org/show/byY4qFhZEWlgV29yRtpm/ | 08:36 |
noonedeadpunk | correct me if I'm wrong, but I don't think that LVM can act as remote storage? I guess you'd then need to have cinder-volume running on each compute | 08:46 |
noonedeadpunk | or maybe you can.... | 08:47 |
* noonedeadpunk never used that | 08:48 |
jrosser | i think thats possible but depends what you want to do | 08:50 |
jrosser | i saw some blog that ran cinder volume on each compute and made it so that it was "local" iscsi per compute | 08:51 |
jrosser | but it was complex to do | 08:51 |
admin1 | my lxcbr0 disappears on every reboot | 09:22 |
admin1 | running lxc-setup-hosts brings it back up | 09:22 |
admin1 | what could it be ? | 09:22 |
admin1 | something to do with netplan/ubuntu not playing nice with files inside /etc/network/interfaces.d/ | 09:26 |
noonedeadpunk | lxcbr0 is created with systemd-networkd. | 09:35 |
noonedeadpunk | in latest releases | 09:36 |
jrosser | did we change that? | 09:38 |
* jrosser just wondering if there is an upgrade gotcha there with some left over stuff | 09:38 |
noonedeadpunk | I think we did some time ago | 09:41 |
noonedeadpunk | https://opendev.org/openstack/openstack-ansible-lxc_hosts/commit/3d8e3690ba620d1724129f8ed1a6a040c5ccdac9 | 09:43 |
noonedeadpunk | so it was for Zed | 09:43 |
noonedeadpunk | but we should have handled the upgrade | 09:44 |
noonedeadpunk | admin1: so I would check status of systemd-networkd after the reboot and if it's enabled | 09:45 |
admin1 | its enabled after reboot | 09:51 |
jrosser | but not working? | 09:53 |
admin1 | https://gist.githubusercontent.com/a1git/1fc8bd3cb5104c643c8d24cffdbb74fa/raw/13d41dc6f5dab1681d435ddb6f460d841083b972/gistfile1.txt -- | 09:55 |
admin1 | hmm.. trying to get to it | 09:55 |
admin1 | only log i could find -> Sep 07 09:20:57 c1 lxc-system-manage[4054]: /usr/local/bin/lxc-system-manage: line 81: /proc/sys/net/ipv6/conf/lxcbr0/accept_dad: No such file or directory | 09:57 |
admin1 | this cluster only has a single controller .. so cannot reboot it again ( in prod ) | 09:59 |
noonedeadpunk | so basically you'd need to `ip link set lxcbr0 up` | 09:59 |
noonedeadpunk | as somehow it's not brought up on its own | 10:00 |
frickler | ipv6 disabled globally? | 10:00 |
admin1 | it sees all except lxcbr0 after reboot .. https://gist.githubusercontent.com/a1git/4c18a5ee1b23e65ea9ba0226fa831e90/raw/b28281cb2b406a971bfca45f73bc77531f887646/gistfile1.txt | 10:06 |
admin1 | this one is on 25.2.0 | 10:06 |
jrosser | it's not to do with the OSA version really | 10:06 |
jrosser | there will be something in the host/environment config that prevents it coming up | 10:07 |
jrosser | or ordering/dependency of some kind | 10:07 |
jrosser | i don't think we can fix anything without knowing exactly what is stopping it coming up | 10:07 |
noonedeadpunk | I actually never restarted prod controllers after upgrade to >=Zed | 10:09 |
noonedeadpunk | as we did it really recently | 10:09 |
admin1 | the only change between working and non-working is some ubuntu update, and it was rebooted | 10:09 |
admin1 | i am trying to find out what packages | 10:09 |
noonedeadpunk | I wonder if that could be absent IP on interface or smth like that | 10:10 |
noonedeadpunk | on lxcbr0 I mean | 10:11 |
admin1 | i will go through the playbooks of that tag to check what files it creates | 10:11 |
admin1 | and then match it | 10:11 |
noonedeadpunk | you should look in /etc/systemd/network | 10:12 |
noonedeadpunk | it's all there | 10:12 |
admin1 | that folder is blank :) | 10:14 |
noonedeadpunk | Ah, it's Yoga I guess... | 10:16 |
noonedeadpunk | then it's not systemd-networkd that manages the bridge yet | 10:17 |
noonedeadpunk | then it should be /etc/network/interfaces.d/lxc-net-bridge.cfg | 10:18 |
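On a Yoga-era host that file would be ifupdown-style config along these lines (a sketch — exact contents vary by OSA release, and the address is just the historical lxcbr0 default); if admin1's netplan suspicion above is right, netplan-only Ubuntu ignores it unless ifupdown is installed:

```
auto lxcbr0
iface lxcbr0 inet static
    bridge_ports none
    bridge_fd 0
    address 10.0.3.1
    netmask 255.255.255.0
```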
admin1 | right .. and i think some update causes it to ignore it now | 10:19 |
admin1 | as it was fine with reboots 2 weeks ago .. | 10:19 |
admin1 | i think i will move it up a tag to zed and give it a try :D | 10:21 |
noonedeadpunk | you just said it's production :D | 10:25 |
noonedeadpunk | you can also move a tag to antelope then - direct upgrades from Y to AA are supported just in case | 10:26 |
noonedeadpunk | Though, move it to HEAD of stable/2023.1 instead of 27.0.1 | 10:26 |
noonedeadpunk | We're about to release quite big bugfix release | 10:26 |
admin1 | i did one upgrade from 26 -> 27 , still trying to figure out why nova-console is broken | 10:27 |
admin1 | first it broke my custom domain mapping .. like id.domain.com instead of cloud.domain.com:5000 , and then i reverted that , still nova-console does not work | 10:27 |
admin1 | and terraform is also broken | 10:27 |
admin1 | 26 -> 27 => https://gist.githubusercontent.com/a1git/2efcfd956f342333070f04b7bc048f6f/raw/0bc97dbc4888fae7bd97505557f73a3eb2186480/gistfile1.txt | 10:28 |
noonedeadpunk | We indeed made quite big changes to haproxy setup in antelope | 10:28 |
admin1 | openstack cli does not fail on any command , horizon fails on nova-console ,, haproxy fails on custom domain .. terraform failure --still no clear idea why | 10:29 |
noonedeadpunk | so likely the custom domain overrides should be adjusted. Also depends on where you've defined them, as now individual services are scoped not with the haproxy group, but with the specific service | 10:30 |
noonedeadpunk | so to have override for glance haproxy service it should be done not in group_vars/haproxy but in group_vars/glance_all | 10:31 |
admin1 | it was on haproxy_horizon_service_overrides: in user_variables | 10:31 |
noonedeadpunk | (if it's not user_variables) | 10:31 |
noonedeadpunk | have no idea about terraform though - it's not even open source anymore :D | 10:32 |
admin1 | acl cloud_keystone hdr(host) -i id.domain.com ; use_backend keystone_service-back if cloud_keystone ; keystone_service_publicuri: https://id.domain.com | 10:32 |
admin1 | those were my overrides for keystone | 10:32 |
admin1 | and similar for the rest | 10:32 |
noonedeadpunk | but how/why that's horizon overrides... | 10:32 |
admin1 | using a wildcard for the domain ssl | 10:32 |
admin1 | haproxy_horizon_service_overrides: | 10:33 |
admin1 | haproxy_frontend_raw: | 10:33 |
noonedeadpunk | we also have haproxy maps support, that should make such changes way more trivial | 10:33 |
noonedeadpunk | ah | 10:33 |
admin1 | if you can point me to the right way for this to be done in 27, i can put it back | 10:33 |
noonedeadpunk | I think you might need to use the base service instead then | 10:33 |
noonedeadpunk | as we use a "special" service that listens on 80 and 443 | 10:34 |
noonedeadpunk | https://opendev.org/openstack/openstack-ansible/src/branch/master/inventory/group_vars/haproxy/haproxy.yml#L71-L91 | 10:34 |
noonedeadpunk | and you can use `haproxy_base_service_overrides` to chime in there | 10:35 |
noonedeadpunk | so try replacing `haproxy_horizon_service_overrides` with `haproxy_base_service_overrides` and same content might even work | 10:35 |
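Putting that suggestion together with the override content admin1 pasted, the user_variables sketch would be (hedged — the exact keys the 27.x haproxy role accepts should be checked against the release notes):

```yaml
haproxy_base_service_overrides:
  haproxy_frontend_raw:
    - acl cloud_keystone hdr(host) -i id.domain.com
    - use_backend keystone_service-back if cloud_keystone
keystone_service_publicuri: https://id.domain.com
```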
admin1 | i will give it a try | 10:35 |
admin1 | do i still need to do horizon_service_overrides ? | 10:36 |
jrosser | admin1: there is a bunch of release notes for this :) | 10:36 |
jrosser | *hopefully | 10:36 |
noonedeadpunk | also, you can indeed convert all that to a map file. Here's some haproxy doc explaining this: https://www.haproxy.com/blog/introduction-to-haproxy-maps | 10:37 |
noonedeadpunk | so you should be able to populate `haproxy_map_entries` | 10:37 |
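An haproxy map file itself is just key/value lines, e.g. hostname to backend (entries here are placeholders mirroring the overrides discussed above):

```
# /etc/haproxy/domain-to-backend.map (illustrative)
id.domain.com      keystone_service-back
cloud.domain.com   horizon-back
```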
admin1 | perfect exercise to test how advanced our LLM AI gods are ( chatgpt ) :D | 10:38 |
admin1 | i will give it my output in haproxy.cfg and ask it to map it | 10:39 |
admin1 | though this, i can live without | 10:39 |
admin1 | why terraform broke is more interesting .. | 10:39 |
noonedeadpunk | That I have no idea about frankly speaking | 10:39 |
admin1 | i have to MITM my own ssl in the middle to see its raw api calls .. coz even with trace, i was not able to see it | 10:40 |
noonedeadpunk | And since they've changed license to BSL - I even don't care about that | 10:40 |
admin1 | bsl will affect service providers more .. users not so much | 10:42 |
admin1 | this was on hackernews today https://threadreaderapp.com/thread/1696521808143683812.html | 10:42 |
noonedeadpunk | well.. it's kind of - bugs won't be fixed until there's a support ticket from a valuable customer | 10:43 |
noonedeadpunk | So I guess - feel free to submit one?:) | 10:45 |
noonedeadpunk | that thread reminds me of the ones that defended centos stream, hiding its sources, etc. | 10:47 |
noonedeadpunk | There are 31 occurrences of the word "support" in the thread. | 10:47 |
admin1 | i am not for or against it .. but just saying that if you have a customer that uses tf ( majority ) its best to upgrade in a test env and test it against tf as well | 10:47 |
noonedeadpunk | Which makes me think that the person who wrote it does not fully realize what open source is and why it's valuable | 10:47 |
noonedeadpunk | And it's also not OpenStack that is broken in this case, IMO. | 10:48 |
noonedeadpunk | So now it's kinda a situation where, in order to keep a really broken thing working, you need to stop upgrades and break what is fine | 10:49 |
noonedeadpunk | So imo, bsl is quite a deal for end users as well | 10:50 |
noonedeadpunk | as in the end users will need some feature that is not in tf and won't be in tf until hashi gets a support contract to implement it for them | 10:51 |
noonedeadpunk | but it's just my opinion.... | 10:52 |
opendevreview | Merged openstack/openstack-ansible-os_placement master: Add online_data_migrations for placement https://review.opendev.org/c/openstack/openstack-ansible-os_placement/+/892159 | 11:16 |
opendevreview | Merged openstack/openstack-ansible-os_neutron master: Check length of network_mappings https://review.opendev.org/c/openstack/openstack-ansible-os_neutron/+/893924 | 11:22 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_neutron stable/2023.1: Check length of network_mappings https://review.opendev.org/c/openstack/openstack-ansible-os_neutron/+/893951 | 11:25 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_neutron stable/zed: Check length of network_mappings https://review.opendev.org/c/openstack/openstack-ansible-os_neutron/+/893952 | 11:25 |
opendevreview | Merged openstack/openstack-ansible master: Apply deployment env vars during keystone main_pre https://review.opendev.org/c/openstack/openstack-ansible/+/894004 | 11:39 |
farbod | https://paste.opendev.org/show/bGCVrDXcZPEJK8YiaBKD/ | 12:08 |
farbod | hey guys | 12:08 |
farbod | whats the problem with this one? | 12:08 |
noonedeadpunk | I've never seen that. is it run as root? | 12:12 |
noonedeadpunk | also what playbook are you running? | 12:12 |
farbod | 2023.1 | 12:12 |
farbod | yes its root | 12:12 |
noonedeadpunk | 2023.1 is not a playbook( | 12:12 |
noonedeadpunk | it's version :p | 12:13 |
farbod | oh sorry | 12:13 |
farbod | setup_hosts | 12:13 |
noonedeadpunk | is everything fine with... disk space on infra1? | 12:14 |
farbod | yes | 12:15 |
noonedeadpunk | As I'm not sure what could be the reason of `[Errno 13] Permission denied: b'/var/lib/lxc/infra1_glance_container-c1ec6e05/` | 12:15 |
noonedeadpunk | Like it's some system error rather than a playbook or logic one | 12:15 |
noonedeadpunk | what's the task name that fails? | 12:16 |
farbod | give me some mins | 12:16 |
admin1 | farbod, paste your netplan, ip link, ip -4 a and brctl show output | 12:41 |
jamesdenton | mornin' | 12:59 |
noonedeadpunk | o/ | 13:13 |
noonedeadpunk | NeilHanlon: hey! has something happened to your infra again? | 13:13 |
noonedeadpunk | jamesdenton: there's another version of vpnaas templates fix available:) https://review.opendev.org/c/openstack/openstack-ansible-os_neutron/+/893856 | 13:14 |
noonedeadpunk | jrosser: sooo... what should we do with the octavia bridge now? Is it breaking CI for octavia? | 13:16 |
noonedeadpunk | just trying to understand if backport should be included in 27.1.0 or not | 13:16 |
NeilHanlon | noonedeadpunk: not that I know of! | 13:18 |
noonedeadpunk | I've just seen rocky failures like last weekend | 13:18 |
noonedeadpunk | https://zuul.opendev.org/t/openstack/build/015e754691b74e6fa7ef1c381f6f0923 | 13:18 |
noonedeadpunk | again with systemd | 13:19 |
NeilHanlon | hm | 13:19 |
noonedeadpunk | and here https://zuul.opendev.org/t/openstack/build/a7c9e54791cd4c9f8637b959062e3fe0 | 13:19 |
NeilHanlon | It's definitely possible that something happened, but I am not seeing any related events on our monitoring | 13:22 |
NeilHanlon | let me correlate this time. i did make a CDN change, but that should have been transparent. | 13:22 |
NeilHanlon | yea no those don't line up. it was 0300 UTC I made that change | 13:23 |
NeilHanlon | i'm checking w/ our releng team to see if they've seen anything. but i wonder if we caught a bad mirror in zuul | 13:23 |
NeilHanlon | i forget if we hardcode to dl.rockylinux.org | 13:24 |
jamesdenton | noonedeadpunk thanks, i had already abandoned the other one. Didn't see the typo :( | 13:26 |
jrosser | noonedeadpunk: i think perhaps we should ask jamesdenton what he thinks is a good idea for octavia | 13:27 |
jrosser | because there was brokenness for LXC before | 13:27 |
jamesdenton | noonedeadpunk the issue for Octavia is that the octavia lxc container needs to connect to the OVS bridge, which doesn't yet exist at the time of setup-hosts | 13:27 |
jrosser | and my change maybe just makes different problems | 13:27 |
jrosser | so there is not really good value in including it in 27.1.0 | 13:27 |
jamesdenton | and what makes it even more of a challenge is that if octavia is not deployed on the same host as the network bits, there's a chance ovs would never exist | 13:28 |
NeilHanlon | I want to also revisit asking infra if we have space to mirror rocky | 13:28 |
jamesdenton | I think this may be a situation where there's something different for CI and something else for production; IMO routed is the way. lbaas_mgmt is just a provider network that needs to be accessible by the control plane, but doesn't need L2 adjacency | 13:29 |
jamesdenton | and what further complicates the OVS side of it, is its not enough to just connect LXC -> OVS, you need to make sure the right OVS flow(s) get implemented to allow that connection to talk to VMs | 13:30 |
jrosser | because we don't make (guessing) neutron ports for the control plane hosts | 13:31 |
jamesdenton | more or less | 13:31 |
jrosser | wow what a mess | 13:31 |
jamesdenton | if you check out the devstack bits, they're creating a neutron port after the fact, and then using that resulting mac address for a veth or tap or dummy, can't recall exactly | 13:32 |
jamesdenton | which gets the job done for tempest | 13:32 |
jamesdenton | We've been doing routed for a while now in our deployments. There's one override needed IIRC to make it work, and the provider network gets set up early in the deployment process | 13:32 |
jrosser | do you need to do stuff to make the ovs flows be right? | 13:34 |
jamesdenton | not that i'm aware; the creation of the neutron 'port' allows the agents to setup the flows accordingly (i believe) - but i need to look closer at the devstack | 13:38 |
jrosser | i expect there is a similar situation for ironic | 13:40 |
jrosser | well not sure actually on that | 13:41 |
jamesdenton | in my experience, the conductor needs to hit ipmi and that is routable | 13:44 |
jamesdenton | but PXE is a different story and "depends" | 13:44 |
jrosser | and cloud-init needs to do its thing as well somehow | 13:46 |
jrosser | ironic people were a little surprised we allow cleaning/provisioning/inspection networks to be the same | 13:48 |
jamesdenton | Well, without neutron integration i don't see how you don't do it that way | 13:49 |
johnsom | I agree, in production deployments, using a routing approach for the lb-mgmt-net is a good strategy | 14:11 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Bump ansible-core to 2.15.3 and ansible-lint https://review.opendev.org/c/openstack/openstack-ansible/+/892371 | 15:01 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_adjutant master: Fix linters and metadata https://review.opendev.org/c/openstack/openstack-ansible-os_adjutant/+/888469 | 15:01 |
Generated by irclog2html.py 2.17.3 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!