EugenMayer4 | sean-k-mooney that sounds like way too much effort for that. I'm fairly familiar with OPNsense, but deploying it (manually, no tf support) just for that purpose seems over the top. I assumed a DMZ would be something Neutron could deal with without bigger issues. | 07:04 |
sean-k-mooney[m] | not really, that is not something they support | 07:06 |
sean-k-mooney[m] | your only way to do this in neutron is security groups | 07:06 |
sean-k-mooney[m] | or the firewall-as-a-service project, which as I said is no longer actively developed | 07:06 |
sean-k-mooney[m] | so if security groups don't work for your use case then you need to create something yourself. I assume you have tried security groups and they were not enough | 07:08 |
sean-k-mooney[m] | security groups can match on ports/addresses and on ingress vs egress | 07:09 |
sean-k-mooney[m] | but if you want to enforce that policy regardless of how the VMs are booted on the DMZ, you can't really do that at the network level in neutron out of the box | 07:11 |
sean-k-mooney[m] | you could try https://docs.openstack.org/neutron/latest/admin/fwaas.html but as I said I'm not sure that is even maintained anymore; you should talk to the neutron team about it first because the project was being discontinued at one point | 07:16 |
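For reference, a hedged sketch of what that router-level filtering looks like through the openstacksdk Python bindings, assuming the fwaas_v2 service plugin is actually enabled on the deployment. The cloud name, router name, and intranet CIDR below are hypothetical, and given the project's uncertain maintenance status this is illustrative only:

```python
import openstack

# Connect using credentials from clouds.yaml (cloud name is hypothetical).
conn = openstack.connect(cloud="mycloud")

# Allow TCP from the (hypothetical) intranet CIDR; FWaaS v2 policies
# implicitly deny anything not matched by a rule.
rule = conn.network.create_firewall_rule(
    name="allow-intranet-ingress",
    action="allow",
    protocol="tcp",
    source_ip_address="10.10.0.0/16",
)
policy = conn.network.create_firewall_policy(
    name="dmz-ingress-policy",
    firewall_rules=[rule.id],
)

# Attach the policy to the DMZ router's ports (router name is hypothetical;
# find_router returns None if it does not exist, so check in real code).
router = conn.network.find_router("dmz-router")
conn.network.create_firewall_group(
    name="dmz-fw",
    ingress_firewall_policy_id=policy.id,
    ports=[p.id for p in conn.network.ports(device_id=router.id)],
)
```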
opendevreview | melanie witt proposed openstack/nova master: Reproducer for bug 2003991 unshelving offloaded instance https://review.opendev.org/c/openstack/nova/+/872470 | 09:53 |
opendevreview | melanie witt proposed openstack/nova master: Enforce quota usage from placement when unshelving https://review.opendev.org/c/openstack/nova/+/872471 | 09:53 |
EugenMayer4 | sean-k-mooney[m] thank you for clarifying. The scope of security groups is hard to grasp. I tried security groups on the VM, but I cannot limit the ingress to the intranet LAN while opening the egress to the internet via 0.0.0.0/0 | 12:06 |
EugenMayer4 | also, I assume that port-based (router port) SGs with OVS are entirely broken | 12:06 |
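For what it's worth, that exact shape (ingress restricted to the intranet, egress left open) is expressible with plain security groups on the VM port. A minimal sketch with the openstacksdk Python bindings, where the cloud name and intranet CIDR are hypothetical:

```python
import openstack

conn = openstack.connect(cloud="mycloud")  # cloud name is hypothetical

# A freshly created security group already carries allow-all egress rules
# and no ingress rules, so only the intranet ingress needs adding.
sg = conn.network.create_security_group(
    name="dmz-vm",
    description="ingress from intranet only, egress anywhere",
)
conn.network.create_security_group_rule(
    security_group_id=sg.id,
    direction="ingress",
    ethertype="IPv4",
    remote_ip_prefix="10.10.0.0/16",  # hypothetical intranet CIDR
)
# Egress to 0.0.0.0/0 is covered by the default egress rules; nothing to add.
```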
sean-k-mooney[m] | not broken; security groups were only ever intended for VM ports | 12:07 |
sean-k-mooney[m] | the firewall-as-a-service API was for router-based firewalling. | 12:07 |
EugenMayer4 | I understand that (now), after digging into things. But that is not how they are exposed, described and "advertised" | 12:07 |
sean-k-mooney[m] | with that said, you get different behavior with iptables and openflow in some cases | 12:08 |
sean-k-mooney[m] | the openflow firewall is more permissive in terms of what packets are allowed | 12:09 |
EugenMayer4 | yes, the iptables vs openflow stuff is entirely on me. We use OVS, and I'm actually only okay-ish with iptables in general; I understand the concepts and know how to debug and isolate. Openflow rather blows my mind | 12:09 |
EugenMayer4 | So that one is very much on me. I seem to struggle a lot with netns/openflow and all the tools one needs (and a lot of more complex concepts) | 12:09 |
sean-k-mooney[m] | I learned openflow and OVS at the same time I was learning linux networking, so I generally understand the OVS side better | 12:10 |
sean-k-mooney[m] | if you were using the iptables firewall driver you might be able to open egress to the world and limit ingress | 12:11 |
sean-k-mooney[m] | the way the connection-tracking stuff works is different between the two | 12:11 |
EugenMayer4 | well yes, I could limit the other side, but closing down the ingress on all the other targets seems overkill. But surely doable, somewhat | 12:12 |
sean-k-mooney[m] | https://docs.openstack.org/neutron/latest/admin/config-ovsfwdriver.html#differences-between-ovs-and-iptables-firewall-drivers | 12:13 |
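The two drivers that page compares are selected per L2 agent. A sketch of the relevant config section, assuming a stock ML2/OVS layout (the file path may differ per distribution):

```ini
# /etc/neutron/plugins/ml2/openvswitch_agent.ini
[securitygroup]
# "openvswitch" = native OpenFlow firewall; "iptables_hybrid" = the older
# iptables-on-a-linux-bridge driver with stricter conntrack behavior.
firewall_driver = openvswitch
```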
EugenMayer4 | I'm used to limiting what networks can do, since that really helps reduce the extra cost. I mean, networks in openstack, beside the segmentation, have no real use otherwise if you cannot really control the flow, right? I mean, you could also opt for one huge /16 network or whatever and then use ingress rules for limitations (somewhat) | 12:13 |
sean-k-mooney[m] | the neutron model is all based around the ports, not the networks | 12:14 |
sean-k-mooney[m] | and doing qos/firewalling on the ports | 12:15 |
EugenMayer4 | interesting, reading the OVS part it would mean: if I have an egress rule for 0.0.0.0/0 and one for 10.10.5.5/32, and I want to talk to 10.10.5.5/32, it might block it (but that does not make sense) | 12:15 |
sean-k-mooney[m] | partly because for most of its life it only had control at the endpoints | 12:15 |
EugenMayer4 | yes, maybe I have to adopt that more. I usually tend to do more with networks | 12:15 |
sean-k-mooney[m] | i.e. linux bridge and OVS really could not influence the core network at all | 12:15 |
EugenMayer4 | So you would, potentially, instead of controlling the egress of my DMZ network, rather control the ingress of the VMs in the intranet network | 12:16 |
sean-k-mooney[m] | so there were those that wanted to do that, and they created the firewall-as-a-service and service function chaining projects to try to do it | 12:16 |
EugenMayer4 | that would be more the neutron way, right? | 12:16 |
sean-k-mooney[m] | the problem is they didn't keep maintaining them | 12:16 |
sean-k-mooney[m] | controlling ingress to the VMs is the normal way, yes | 12:17 |
sean-k-mooney[m] | security groups by default block all traffic | 12:17 |
sean-k-mooney[m] | and you are expected to only open the port to the clients that need access | 12:17 |
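One way to express "only the clients that need access" without hard-coding addresses is a remote-group rule, which filters on security-group membership rather than CIDRs. A hedged sketch with openstacksdk, where the cloud name and both group names are hypothetical:

```python
import openstack

conn = openstack.connect(cloud="mycloud")  # hypothetical cloud name

# Both groups are assumed to already exist (hypothetical names).
clients = conn.network.find_security_group("intranet-clients")
servers = conn.network.find_security_group("dmz-servers")

# Allow TCP 443 into the server group only from ports that carry the
# client group: membership-based instead of address-based filtering.
conn.network.create_security_group_rule(
    security_group_id=servers.id,
    direction="ingress",
    ethertype="IPv4",
    protocol="tcp",
    port_range_min=443,
    port_range_max=443,
    remote_group_id=clients.id,
)
```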
EugenMayer4 | Yeah, I think the openstack hype peaked in 2018 and has fallen off quite a bit. Also looked for some literature recently; people basically stopped publishing in 2018 | 12:17 |
sean-k-mooney[m] | I thought the hype died before that, but ok :) | 12:18 |
EugenMayer4 | Well, I'm not the one to define that; I came in here very much after that. I think we have been running openstack since Jan 2021 | 12:19 |
sean-k-mooney[m] | the problem, especially on the neutron side, is that none of the network vendors wanted to work on and maintain the core | 12:19 |
sean-k-mooney[m] | they wanted to integrate their network stack so they could sell you that | 12:19 |
EugenMayer4 | well I think the "vendors" wanted to make a difference in the services they offer, and to have that, they stopped sharing in order to be "different and better" than the competitor vendor | 12:19 |
EugenMayer4 | looking at what is really missing in openstack, and looking at who is using openstack at big scale, I see that most of the things I miss have been solved on-platform by that vendor | 12:20 |
sean-k-mooney[m] | right, but since neutron allows vendor extensions in the API, cisco or juniper would just add a vendor extension instead of implementing a common shared API | 12:20 |
EugenMayer4 | yeah, I see that. | 12:20 |
sean-k-mooney[m] | there is less of a problem with that in cinder | 12:21 |
sean-k-mooney[m] | they do not allow arbitrary API extensions and use microversions like nova and keystone | 12:21 |
sean-k-mooney[m] | so there is driver-to-driver variance | 12:22 |
sean-k-mooney[m] | but the API is much more uniform | 12:22 |
EugenMayer4 | I would also say that in terms of API, openstack seems to have been designed by sysops people opting for microservices while forgetting / not caring about all the downsides and complications. There are so many loose ends; nobody is responsible for a task. Creating a backup with glance of a machine in nova ends up being a task nobody has control over. One component starts it, the other "does something but has no clue who and what it belongs to" .. and the funny thing is, if anything errors, it does not know how and what to recover or even where to report the error | 12:23 |
EugenMayer4 | I think the hypervisor/network/encapsulation part of openstack is really solid; that's where a lot of the sysops people could do a lot of good stuff. But the API and software stack, the architecture, seem to have been neglected, and as an "end user" that really bothers me | 12:24 |
EugenMayer4 | A war story: from time to time, with no specific pattern, one of our VMs is just shut down. In the VM audit log I see an anonymous user (so it will be something systemic) telling the VM to shut down (cleanly). That's it. No trails, no nothing. So something told something to do something somewhere - and I cannot even track the "next layer". | 12:29 |
EugenMayer4 | Or I have a hard anti-affinity server group. 3 VMs, 3 computes. First deployment, happy times, they land on one compute each. From time to time, openstack decides to reschedule them, and one VM lands on the same compute as another, so one compute has none. This has happened about 3 times now, and finally I will basically do the stupid thing and tie each of them to one specific compute. What is responsible for the re-scheduling, IDK. | 12:29 |
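For reference, the hard anti-affinity setup described above looks roughly like this with openstacksdk; a sketch where the cloud name, group name, and the image/flavor/network IDs are placeholders. Note the policy is enforced by the scheduler at placement time, and depending on the operation and release, later moves (e.g. forced live migrations) may not re-check it, which could explain reshuffling like this:

```python
import openstack

conn = openstack.connect(cloud="mycloud")  # hypothetical cloud name

# Hard anti-affinity: scheduling fails rather than co-locating members.
# (Newer compute microversions express this as a single `policy` attribute.)
group = conn.compute.create_server_group(
    name="my-cluster", policies=["anti-affinity"]
)

for i in range(3):
    conn.compute.create_server(
        name=f"node-{i}",
        image_id="IMAGE_UUID",            # placeholder
        flavor_id="FLAVOR_UUID",          # placeholder
        networks=[{"uuid": "NETWORK_UUID"}],  # placeholder
        scheduler_hints={"group": group.id},  # ties the server to the group
    )
```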
EugenMayer4 | Enough of the ranting (if it was any). Thanks a lot for your insight (as always!) | 12:29 |
opendevreview | melanie witt proposed openstack/nova master: Reproducer for bug 2003991 unshelving offloaded instance https://review.opendev.org/c/openstack/nova/+/872470 | 13:19 |
opendevreview | melanie witt proposed openstack/nova master: Enforce quota usage from placement when unshelving https://review.opendev.org/c/openstack/nova/+/872471 | 13:19 |