admin1 | \o | 10:27 |
admin1 | looking for pointers to do this in the latest osa .. old method was => haproxy_horizon_service_overrides: haproxy_frontend_raw: - acl cloud_keystone hdr(host) -i id.domain.com .. - use_backend keystone_service-back if cloud_keystone | 10:27 |
noonedeadpunk | haproxy_horizon_service_overrides is still valid variable? | 10:29 |
noonedeadpunk | So not sure what you mean here | 10:30 |
noonedeadpunk | what has changed is that you'd need to run os-horizon-install.yml --tags haproxy-service-config | 10:30 |
noonedeadpunk | instead of haproxy-install.yml to propagate these settings | 10:30 |
admin1 | grep -ri haproxy_horizon_service_overrides has no hits on /etc/ansible or /opt/openstack-ansible/ .. so | 10:32 |
admin1 | noonedeadpunk, sample -> https://pastebin.com/raw/rxPWAgGQ | 10:35 |
noonedeadpunk | well, it's obviously there: https://opendev.org/openstack/openstack-ansible/src/branch/master/inventory/group_vars/horizon_all/haproxy_service.yml#L47 | 10:36 |
admin1 | now grep gets a hit .. it did not just a few mins ago :( | 10:38 |
noonedeadpunk | I didn't put anything to your env, I promise :D | 10:38 |
admin1 | :) | 10:39 |
noonedeadpunk | what changed is the scope of the var - it should not be in the haproxy_all group, and you need to run the service playbook instead of haproxy to apply changes | 10:39 |
admin1 | it used to be so easy to ssh to util containers before .. now ssh is removed and I have to first go to the controller and then shell in .. | 10:43 |
noonedeadpunk | Well. I wasn't thinking it's a usecase tbh, as container IPs are kinda... were not supposed to be used like this... But what I think we can do - is create some kind of playbook that will add SSH to containers | 10:46 |
noonedeadpunk | (or role) | 10:46 |
admin1 | it's an apt install openssh-server for now .. so not a biggie | 10:46 |
admin1 | but one extra step to do | 10:46 |
noonedeadpunk | Actually | 10:46 |
noonedeadpunk | We have a variable for that :) | 10:46 |
noonedeadpunk | `openstack_host_extra_distro_packages` | 10:47 |
noonedeadpunk | If you define it in group_vars/utility_all - you will get package installed only there | 10:47 |
admin1 | aah .. for me it would be curl, wget and vim :) | 10:48 |
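A minimal sketch of what noonedeadpunk describes, assuming the usual /etc/openstack_deploy layout; the file path and the exact package list are illustrative, combining openssh-server with admin1's extras:

```yaml
# /etc/openstack_deploy/group_vars/utility_all.yml (hypothetical location)
# Extra distro packages installed only on members of the utility_all group.
openstack_host_extra_distro_packages:
  - openssh-server
  - curl
  - wget
  - vim
```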
noonedeadpunk | But there's still question of key distribution | 10:48 |
noonedeadpunk | But also you can say that the utility host is localhost and get rid of the containers for utility | 10:48 |
admin1 | noonedeadpunk, https://pastebin.com/raw/rxPWAgGQ -- this updated the id in keystone .. but i see no entries in haproxy | 10:53 |
jrosser | I wonder what happens here https://zuul.opendev.org/t/openstack/build/9d1b1dc7776b42378157a18969c7ffb2/log/logs/host/nova-compute.service.journal-00-02-15.log.txt#7446 | 11:04 |
jrosser | seems like it returns 200 from neutron https://zuul.opendev.org/t/openstack/build/9d1b1dc7776b42378157a18969c7ffb2/log/logs/openstack/aio1_neutron_server_container-92ca237a/neutron-server.service.journal-00-02-15.log.txt#17093 | 11:04 |
noonedeadpunk | 408 - huh | 11:13 |
noonedeadpunk | admin1: so, what playbooks did you run after changing it? | 11:14 |
noonedeadpunk | and in what file you've defined `haproxy_horizon_service_overrides`? | 11:14 |
admin1 | user_variables.yml | 11:27 |
admin1 | i ran keystone playbook, haproxy playbook and the horizon playbook | 11:27 |
noonedeadpunk | that should work for sure | 11:27 |
noonedeadpunk | hm, let me try this then... | 11:28 |
admin1 | the endpoint points to https://id.domain.com in the endpoint list now (for keystone) .. but there is no entry in haproxy yet | 11:28 |
admin1 | this will help move all my endpoints to https:// and not have them on any specific ports -- better for customers behind restrictive firewalls | 11:29 |
noonedeadpunk | admin1: oh, I know | 11:32 |
noonedeadpunk | I know | 11:33 |
noonedeadpunk | frontend for horizon is not in horizon vars.... | 11:33 |
admin1 | i ran haproxy playbook also .. | 11:34 |
noonedeadpunk | admin1: try replacing haproxy_horizon_service_overrides with haproxy_base_service_overrides | 11:34 |
admin1 | ok | 11:34 |
admin1 | one moment | 11:34 |
noonedeadpunk | and then yes - haproxy playbook | 11:34 |
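Putting noonedeadpunk's suggestion together with the acl/use_backend lines admin1 quoted at the top, the override would look roughly like this in user_variables.yml — a sketch only, not a verified config (the pastebin contents aren't reproduced here, and at this point in the log the routing still wasn't working):

```yaml
# user_variables.yml - route id.domain.com through the shared base frontend
# to the keystone backend; hostname and backend name follow admin1's example.
haproxy_base_service_overrides:
  haproxy_frontend_raw:
    - acl cloud_keystone hdr(host) -i id.domain.com
    - use_backend keystone_service-back if cloud_keystone
```

followed by a run of the haproxy playbook (`openstack-ansible haproxy-install.yml`) to apply it.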
admin1 | it added the acl, but I do not see any bind for id.admin0.net | 11:38 |
admin1 | so it still redirects to horizon | 11:39 |
admin1 | hmm.. checking how it should be .. | 11:40 |
gokhan | hello folks, I have an offline environment: there is no internet, DNS or firewall. all IP CIDRs and VLANs are defined on the switch. I created a provider network and subnet without DNS nameservers. after that I created a VM on the provider network (it is defined on the switch) but it doesn't get an IP address. nova and neutron metadata services are working and there is no error log. what can be the reason? | 11:42 |
admin1 | check tcpdump . is it lb or ovs or ovn | 11:43 |
admin1 | are the dhcp agents plugged in correctly to the right vlan | 11:43 |
noonedeadpunk | are dhcp agents even enabled - that's the question I guess | 11:43 |
gokhan | it is ovs | 11:43 |
noonedeadpunk | as if the network was created with --no-dhcp you might need to use config_drives only | 11:43 |
noonedeadpunk | or have an l3 router attached to the network | 11:44 |
noonedeadpunk | (neutron router) | 11:44 |
gokhan | dhcp agents are enabled | 11:45 |
gokhan | there is no router | 11:45 |
gokhan | I am creating with provider network | 11:45 |
gokhan | from nodes I can't ping dhcp agents | 11:46 |
gokhan | can dhcp agent ping between themselves | 11:46 |
admin1 | noonedeadpunk, https://pastebin.com/raw/pD5AMxe6 cloud.domain.net and id.domain.net point to the same IP it's listening on, the pem has a wildcard for *.domain | 11:46 |
admin1 | gokhan, check where the dhcp agent is .. in which node .. is it seeing the proper vlan tag .. is the tag enabled in the switch for tagged traffic ? | 11:47 |
admin1 | id.domain still goes to horizon .. i stopped and started haproxy | 11:48 |
noonedeadpunk | gokhan: I guess what matters is to ping the dhcps from the VM | 11:50 |
noonedeadpunk | or VM from DHCP namespaces | 11:50 |
noonedeadpunk | And then you indeed can check for traffic on the compute node, on the bridge or tap interface | 11:51 |
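A sketch of those two checks, assuming an OVS deployment with DHCP agents; the network ID, VM IP and tap interface name are placeholders:

```shell
# On a network node: ping the VM from inside its DHCP namespace
ip netns list | grep qdhcp
ip netns exec qdhcp-<network-id> ping <vm-ip>

# On the compute node: watch the VM's tap interface for DHCP and ICMP traffic
tcpdump -nei tap<xxxxxxxx-xx> 'icmp or udp port 67 or udp port 68'
```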
gokhan | I will check it now | 11:53 |
noonedeadpunk | and after all you can try spawning a VM with a config drive and at least access it through the console, given you provide a root password as extra user config | 11:54 |
noonedeadpunk | as then the VM should get a static network configuration IIRC. Or well, at least execute cloud-init | 11:55 |
admin1 | gokhan, you can also use cirros to check with tcpdump, set the IP up manually and troubleshoot | 11:58 |
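A sketch combining the two suggestions (a cirros image plus a config drive); the image, flavor, network and user-data file names are all hypothetical:

```shell
# Boot a test VM with a config drive so the instance can be initialised without
# reaching the metadata service over the network; the user-data file could set a
# password for console access (illustrative only).
openstack server create \
  --image cirros \
  --flavor m1.tiny \
  --network provider-net \
  --config-drive True \
  --user-data ./set-password.yaml \
  test-configdrive-vm
```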
gokhan | admin1, yes now I created cirros image and I will set ip manually | 11:59 |
gokhan | admin1, is it a problem if network gateway is not defined on switch ? | 12:02 |
gokhan | I assigned an IP manually to the VM and pinged from the VM to one of the dhcp agent IPs. I listened with tcpdump on the tap interface but there is no icmp traffic on it. | 12:33 |
kleini | https://docs.openstack.org/nova/2023.1/admin/aggregates.html#tenant-isolation-with-placement does somebody use that? This worked very well for me for a single project. It broke now, since I put a second project onto the host aggregate. The issue is that all other projects' VMs are also scheduled onto the host aggregate. I want to have the host aggregate exclusively for two projects. What am I overlooking? | 13:38 |
noonedeadpunk | Oh, I know :) | 13:38 |
noonedeadpunk | kleini: so you need 2 things to work together - pre-filtering (with placement) and post-filtering with old time-proven AggregateMultiTenancyIsolation filter | 13:39 |
noonedeadpunk | And this is not designed to work together right now | 13:40 |
noonedeadpunk | But it works somehow :D | 13:40 |
noonedeadpunk | One way - this patch: https://review.opendev.org/c/openstack/nova/+/896512 But it used to break other use-cases | 13:41 |
noonedeadpunk | What I have right now, is weird hack around | 13:41 |
kleini | Why did it work for a single project and break when I added a second one? | 13:41 |
noonedeadpunk | Yeah, that's exactly the thing..... That format is incompatible between pre/post filtering | 13:42 |
kleini | Even the pre-filtering seemed to have worked. | 13:42 |
kleini | So the issue is that adding 1 and 2 to filter_tenant_id? | 13:42 |
noonedeadpunk | So you need to have following properties: `filter_tenant_id=tenant1,tenant2 filter_tenant_id_1=tenant1 filter_tenant_id_2=tenant2` | 13:43 |
noonedeadpunk | pre-filter does not split on commas | 13:43 |
noonedeadpunk | post filter does not understand suffixes | 13:43 |
kleini | and that's all? | 13:43 |
noonedeadpunk | so you configure the post filter with filter_tenant_id (it breaks the pre-filter, but in a way where there's never a match, so no biggie) | 13:44 |
noonedeadpunk | and pre filter with multiple suffixed options | 13:44 |
noonedeadpunk | yes | 13:44 |
noonedeadpunk | As the pre-filter works on destination selection for those who are in the list | 13:45 |
noonedeadpunk | and the post filter rejects ones who are not in the list, iirc | 13:46 |
noonedeadpunk | hope this helps | 13:46 |
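A sketch of setting those properties on the aggregate; the aggregate name and project IDs are placeholders:

```shell
# The post-filter (AggregateMultiTenancyIsolation) reads the comma-separated
# filter_tenant_id; the placement pre-filter reads the suffixed filter_tenant_id_N keys.
openstack aggregate set \
  --property filter_tenant_id=tenant1,tenant2 \
  --property filter_tenant_id_1=tenant1 \
  --property filter_tenant_id_2=tenant2 \
  my-isolated-aggregate
```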
admin1 | noonedeadpunk, when you have time, please try to test just 1 endpoint id.domain.com with keystone | 13:49 |
admin1 | via the overrides i am trying to do | 13:49 |
kleini | So, pre-filtering schedules VMs of these two projects to the host aggregate, but the post-filtering failed to keep other projects' VMs off the host aggregate, right? | 13:50 |
admin1 | gokhan, it's a combo of security groups (iptables + ebtables) and the vlan/vxlan/tunnel | 13:52 |
admin1 | if you create 2 vms on the same network, can they ping each other | 13:52 |
admin1 | try to create 4 vms .. using affinity .. one pair with affinity and one with anti-affinity, so that 2 are on the same node and 2 are on different nodes | 13:52 |
admin1 | and then check if they can ping each other on the same node vs different nodes | 13:53 |
admin1 | for the moment, try to allow all icmp in the rules | 13:53 |
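A sketch of that test; the security group, server group, image, flavor and network names are placeholders:

```shell
# Temporarily allow all ICMP in the project's default security group
openstack security group rule create --protocol icmp default

# One server group that packs VMs onto the same host, one that spreads them
openstack server group create --policy affinity same-host
openstack server group create --policy anti-affinity diff-host

# Boot a pair of VMs into each group (repeat per group) and ping between them
openstack server create --image cirros --flavor m1.tiny --network provider-net \
  --hint group=<server-group-uuid> vm-a1
```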
noonedeadpunk | kleini: well, it all depends on how you set these. as any combination is possible | 14:05 |
spatel | jrosser I have a blog out on rados + keystone integration - https://satishdotpatel.github.io/ceph-rados-gateway-integration-with-openstack-keystone/ | 14:06 |
kleini | noonedeadpunk, I will check source code of Nova and Placement and try to figure out, how to workaround the issue. | 14:08 |
noonedeadpunk | but `filter_tenant_id=tenant1,tenant2 filter_tenant_id_1=tenant1 filter_tenant_id_2=tenant2` should really work | 14:08 |
noonedeadpunk | At least it does for me | 14:08 |
noonedeadpunk | it's just that for filter_tenant_id you should have a comma-separated list, which contains no more than 6 projects, and then also each project individually through a suffixed param | 14:09 |
noonedeadpunk | and it works on antelope for us | 14:09 |
noonedeadpunk | admin1: ok... So what specifically should I check? If the config is applied? | 14:10 |
admin1 | well. curl id.domain.com should return keystone json | 14:10 |
admin1 | for me, it returns horizon | 14:10 |
noonedeadpunk | ok | 14:10 |
noonedeadpunk | ugh, it's a really broad topic though | 14:11 |
kleini | noonedeadpunk, thank you very much. I expect it to work for us, too. Only currently it does not seem to work. Maybe we need some rest first. | 14:11 |
noonedeadpunk | and what behaviour do you see? That other projects can spawn instances on this aggregate, but the specified tenants only spawn on these hosts? | 14:13 |
kleini | yes, exactly that | 14:13 |
admin1 | do you guys know a good blog on how policies work .. for example, how do I deny admin the rights to delete a project, while the delete could be done via a different group arbitrarily called "super-admins" | 14:13 |
kleini | not wanted VMs of other VMs on the aggregate | 14:14 |
noonedeadpunk | ok, then it's post-filter | 14:14 |
kleini | of other tenants/projects | 14:14 |
kleini | and post-filter is in Nova AggregateMultiTenancyIsolationFilter, right? | 14:14 |
noonedeadpunk | Do you have `AggregateMultiTenancyIsolation` in the list of enabled filters to start with? | 14:14 |
admin1 | spatel, all working fine ? object storage ? | 14:14 |
spatel | yes all good | 14:15 |
noonedeadpunk | admin1: no, but that's not complicated. In theory | 14:15 |
kleini | noonedeadpunk, again, thank you very much. I need to have a rest. My sentences already get more and more mistakes. | 14:15 |
spatel | one problem I'm having is the storage policy stuff, which I'm not able to define in the horizon GUI | 14:15 |
spatel | I can create a container from the command line... but not from horizon, because of the storage policy stuff, which is a new feature in openstack | 14:16 |
noonedeadpunk | admin1: google keystone policies, find page like https://docs.openstack.org/keystone/latest/configuration/policy.html - look for `delete_project`. You see it's `rule:admin_required` | 14:16 |
spatel | maybe I need to ask someone on the horizon team | 14:16 |
kleini | and I have everything configured as described here: https://docs.openstack.org/nova/2023.1/admin/aggregates.html#tenant-isolation-with-placement. except filter_tenant_id=tenant1,tenant2 was missing | 14:16 |
noonedeadpunk | ok, yes, that is the vital part for the post-filter to work, as it doesn't check for suffixes | 14:17 |
noonedeadpunk | admin1: and you can either define admin_required differently (it's also on that page) or replace with smth like `role:super_admin` | 14:17 |
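A sketch of such a policy override; the `super_admin` role name is hypothetical, and in an OSA deployment this would normally be delivered through the os_keystone role's policy overrides mechanism rather than edited in place:

```yaml
# keystone policy.yaml override (sketch): move project deletion from the
# default rule:admin_required to a hypothetical dedicated role.
"identity:delete_project": "role:super_admin"
```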
opendevreview | Andrew Bonney proposed openstack/openstack-ansible master: WIP: [doc] Update distribution upgrades document for 2023.1/jammy https://review.opendev.org/c/openstack/openstack-ansible/+/906832 | 14:31 |
noonedeadpunk | andrewbonney: just in case, I had another round of improvements here: https://review.opendev.org/c/openstack/openstack-ansible/+/906750 | 14:36 |
noonedeadpunk | The only thing I'm not convinced about is keystone, as it feels that by default running keystone on the primary node will rotate fernet prematurely | 14:36 |
andrewbonney | I spotted it :) It'll be a while before I have any other tweaks ready to merge, so happy for that to go in first and I'll fix any merge conflicts | 14:36 |
noonedeadpunk | leaving all issued/cached tokens invalid | 14:36 |
noonedeadpunk | We got hit by that with rgw and users who tried to re-use tokens and cache them locally | 14:37 |
andrewbonney | Last time we did the upgrade I don't think limit worked for keystone, but I did want to re-check that this time around | 14:37 |
noonedeadpunk | not to mention the need to wipe memcached.... | 14:37 |
andrewbonney | Is there a reason we rotate keys every time the keystone role runs? If not it would be nice to be able to skip that | 14:37 |
noonedeadpunk | Yeah, limit probably does not... But maybe when "primary" container is empty - there's a way to say that it's another one which is primary | 14:38 |
noonedeadpunk | to use it as a sync source for the runtime | 14:38 |
noonedeadpunk | I *guess* it might not be each time | 14:38 |
noonedeadpunk | but since they're absent on "primary".... | 14:38 |
noonedeadpunk | haven't looked there so it's all just assumptions | 14:38 |
andrewbonney | Ok | 14:38 |
noonedeadpunk | but that hit us really badly.... | 14:39 |
noonedeadpunk | I guess this just results in no file and triggers fernet being re-issued: https://opendev.org/openstack/openstack-ansible-os_keystone/src/branch/master/tasks/keystone_credential_create.yml#L16-L19 | 14:40 |
andrewbonney | I'll definitely look at this more when I get to our keystone nodes next week | 14:40 |
noonedeadpunk | yeah: https://opendev.org/openstack/openstack-ansible-os_keystone/src/branch/master/tasks/keystone_credential_create.yml#L82-L83 | 14:41 |
andrewbonney | Did you hit any issues with repo servers? I've just done one of those and think the gluster role needs some tweaks for re-adding existing peers | 14:41 |
noonedeadpunk | but dunno how to workaround that, as it just depends on `_keystone_is_first_play_host` | 14:41 |
noonedeadpunk | um, no, but we don't have gluster - we just mount cephfs there | 14:42 |
andrewbonney | Ah ok | 14:42 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible master: [doc] Slighly simplify primary node redeployment https://review.opendev.org/c/openstack/openstack-ansible/+/906750 | 14:44 |
noonedeadpunk | admin1: sorry, I need to leave early today, so I'm not able to check on your question now.... Or maybe when I return later... | 14:49 |
admin1 | sure | 14:49 |
noonedeadpunk | Like grabbing a couple of beers on the way back.... | 14:49 |
admin1 | :) happy friday | 14:53 |
noonedeadpunk | yeah, you too :) | 14:54 |
noonedeadpunk | will ping you if find anything | 14:54 |
spatel | What do you think about Micron 5400 SSD for ceph ? | 19:51 |
mgariepy | i mostly do sas ssd instead of sata ones if i don't have money for nvme ;) | 19:57 |
opendevreview | Merged openstack/openstack-ansible master: [doc] Reffer need of haproxy backend configuration in upgrade guide https://review.opendev.org/c/openstack/openstack-ansible/+/906360 | 23:20 |