opendevreview | Merged openstack/ansible-role-qdrouterd master: Fix linters and metadata https://review.opendev.org/c/openstack/ansible-role-qdrouterd/+/888232 | 06:49 |
opendevreview | Merged openstack/ansible-role-systemd_networkd master: Fix linters and metadata https://review.opendev.org/c/openstack/ansible-role-systemd_networkd/+/888226 | 08:26 |
opendevreview | Merged openstack/openstack-ansible-haproxy_server master: Add possibility to override haproxy_ssl_path https://review.opendev.org/c/openstack/openstack-ansible-haproxy_server/+/888498 | 08:34 |
opendevreview | Merged openstack/ansible-role-systemd_mount master: Fix linters and metadata https://review.opendev.org/c/openstack/ansible-role-systemd_mount/+/888225 | 08:36 |
opendevreview | Merged openstack/openstack-ansible-haproxy_server master: Fix generating certificate SANs https://review.opendev.org/c/openstack/openstack-ansible-haproxy_server/+/887572 | 08:41 |
opendevreview | Merged openstack/ansible-role-systemd_service master: Fix linters and metadata https://review.opendev.org/c/openstack/ansible-role-systemd_service/+/888223 | 08:59 |
opendevreview | Merged openstack/openstack-ansible master: Gather facts before including common-playbooks https://review.opendev.org/c/openstack/openstack-ansible/+/888149 | 09:34 |
opendevreview | Merged openstack/openstack-ansible master: Remove Ubuntu 20.04 support https://review.opendev.org/c/openstack/openstack-ansible/+/886517 | 09:34 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_neutron master: Fix linters and metadata https://review.opendev.org/c/openstack/openstack-ansible-os_neutron/+/888729 | 09:46 |
opendevreview | Merged openstack/openstack-ansible-os_glance master: Apply tags to systemd_service include https://review.opendev.org/c/openstack/openstack-ansible-os_glance/+/888458 | 09:51 |
opendevreview | Merged openstack/ansible-role-uwsgi master: Fix linters and metadata https://review.opendev.org/c/openstack/ansible-role-uwsgi/+/888224 | 11:40 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-haproxy_server master: Fix linters issue and metadata https://review.opendev.org/c/openstack/openstack-ansible-haproxy_server/+/888143 | 11:43 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-haproxy_server master: Do not use notify inside handlers https://review.opendev.org/c/openstack/openstack-ansible-haproxy_server/+/888762 | 11:44 |
Tadios | noonedeadpunk jamesdenton anskiy, After yesterday's suggestions and recommendations, I was able to fix my networking issues, thank you very much. Now my instances are able to communicate with the outside, but I have a few more questions. | 12:03 |
Tadios | 1. I still can't create an admin network from the horizon dashboard. Admin > Network > Networks > Create Network gives "Danger: An error occurred. Please try again later." and returns 500 Internal Server Error in devtools. I don't know where to find the log files for this error; I tried /var/apache2/error.log but nothing. | 12:03 |
admin1 | Tadios, try from the cli ? | 12:04 |
Tadios | admin1: i tried from cli and it works fine. | 12:04 |
noonedeadpunk | Tadios: it should be in journald for apache2 unit | 12:04 |
Tadios | okay let me check | 12:05 |
noonedeadpunk | like `journalctl -f -u apache2` | 12:05 |
Tadios | i should try this from inside horizon container right? | 12:06 |
noonedeadpunk | yup. you can do it from outside as well, but you'd need to point journalctl at the correct journal path | 12:07 |
noonedeadpunk | inside /var/log/journal | 12:07 |
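A minimal sketch of the two ways to read that journal mentioned above; the container name and journal path are example values only, adjust them to your deployment:

```sh
# From inside the horizon container: follow the apache2 unit's journal
journalctl -f -u apache2

# From the physical host: attach to the container first
# (the container name is an example; list yours with `lxc-ls`)
lxc-attach -n controller1_horizon_container-abcdef12 -- journalctl -u apache2

# Or point journalctl at the container's journal directory directly
journalctl --directory=/var/lib/lxc/controller1_horizon_container-abcdef12/rootfs/var/log/journal -u apache2
```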
Tadios | it says "Undefined provider network types are found: ['v', 'l', 'a', 'n', ',', 'l', 'o', 'c', 'a', 'l', ',', 'g', 'e', 'n', 'e', 'v', 'e']" when trying to create a network | 12:09 |
Tadios | https://paste.opendev.org/show/820717/ | 12:10 |
noonedeadpunk | ah | 12:11 |
Tadios | does it have something to do with this specification in my user_variables.yml: `neutron_ml2_drivers_type: "vlan,local,geneve"` | 12:11 |
noonedeadpunk | Tadios: that looks like quite valid bug actually | 12:13 |
Tadios | oh really? why is it happening though? | 12:14 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_horizon master: Fix wrong neutron_ml2_drivers_type https://review.opendev.org/c/openstack/openstack-ansible-os_horizon/+/888917 | 12:19 |
noonedeadpunk | Tadios: this should fix it ^ | 12:19 |
noonedeadpunk | but yes, given that you've defined neutron_ml2_drivers_type to its default value, removing this variable and re-running os-horizon-install.yml should fix the issue | 12:20 |
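The per-character error above is the classic symptom of a comma-separated string being iterated as if it were a list; a tiny illustration (plain Python, not the actual horizon code):

```python
# Illustration only: iterating a string yields single characters,
# which matches the ['v', 'l', 'a', 'n', ...] in the error above.
drivers = "vlan,local,geneve"

print(list(drivers))       # ['v', 'l', 'a', 'n', ',', 'l', 'o', ...]
print(drivers.split(","))  # ['vlan', 'local', 'geneve'] -- the intended list
```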
Tadios | amazing! let me do that. | 12:21 |
noonedeadpunk | (you should be able to add --tags horizon-config to reduce time of running the playbook) | 12:23 |
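Putting the two suggestions together, a sketch of the re-run (assuming the usual /opt/openstack-ansible checkout):

```sh
# After removing neutron_ml2_drivers_type from user_variables.yml
# (it only restated the defaults), re-apply the horizon configuration:
cd /opt/openstack-ansible/playbooks
openstack-ansible os-horizon-install.yml --tags horizon-config
```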
Tadios | oh really, that would be handy. Great, now my horizon problem is also solved; I can create admin networks from the web interface. | 12:28 |
Tadios | and my second question: my VMs couldn't ping the gateway before, and I don't know what fixed the issue, but it is working now. Could it be the horizon fix or something else? | 12:33 |
noonedeadpunk | nah, it's not related to horizon for sure | 12:48 |
Tadios | oh my bad, it was Security Groups. and last question | 12:55 |
Tadios | Do we need to specify the container_type: "veth" in the provider_network: section of the openstack_user_config or is it optional? It is listed as required in the documentation. Also, what about container_interface? I asked this because I don't see these options on the configuration jamesdenton shared https://paste.opendev.org/show/bLkYnCApAH4vXAULykQk/ | 12:55 |
noonedeadpunk | I'm not 100% sure, but I'd say yes | 13:00 |
noonedeadpunk | to be frank - I've never experimented enough with that | 13:00 |
noonedeadpunk | as "it works"™ | 13:01 |
noonedeadpunk | Except, I've used `container_type: phys` to pass an interface inside a container | 13:02 |
Tadios | noonedeadpunk: okay great, thank you for your time, as always. | 13:03 |
noonedeadpunk | https://docs.openstack.org/openstack-ansible/latest/reference/configuration/extra-networks.html#using-sr-iov-interfaces-in-containers | 13:03 |
NeilHanlon | would guess veth is required for plumbing the pseudowires | 13:03 |
noonedeadpunk | I think that `container_type` would be passed to lxc config and then it's up to lxc defaults | 13:03 |
Tadios | NeilHanlon: okay good to know. | 13:04 |
noonedeadpunk | `container_interface` is needed only for groups that have containers in fact. Like it is needed for neutron-server, but it is not for nova-compute, for instance | 13:04 |
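For reference, a provider_networks entry as it typically appears in openstack_user_config.yml; the bridge, interface and queue names below are example values, not anything specific to this deployment:

```yaml
# openstack_user_config.yml (sketch) -- container_interface only matters for
# group_binds that actually contain LXC containers (e.g. neutron-server);
# bare-metal groups such as nova_compute do not need it.
- network:
    container_bridge: "br-mgmt"
    container_type: "veth"        # veth is the usual choice; phys/macvlan are also possible
    container_interface: "eth1"
    ip_from_q: "container"
    type: "raw"
    group_binds:
      - all_containers
      - hosts
    is_container_address: true
```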
Tadios | oh okay. | 13:05 |
NeilHanlon | you can check the lxc docs for what the available options are | 13:08 |
NeilHanlon | I've only ever used veth, phys, and macvlan, but others are supported | 13:08 |
noonedeadpunk | ah, yes, macvlan was used as an option for octavia for some users as well | 13:09 |
noonedeadpunk | hamburgler: I can't recall if that was you who asked about 27.0.1 or not, but - it just went live | 13:10 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_adjutant master: Fix linters and metadata https://review.opendev.org/c/openstack/openstack-ansible-os_adjutant/+/888469 | 13:11 |
Tadios | so let's say I have three nodes, node1, 2 and 3. If I use all three of them to run my control plane services and my compute service, is that going to be an okay highly available system, or is there a problem with this design? | 13:25 |
admin1 | depends | 13:25 |
admin1 | test or production ? | 13:25 |
NeilHanlon | admin1++ | 13:25 |
Tadios | let's say company internal production | 13:26 |
admin1 | if production, then no .. | 13:26 |
NeilHanlon | Tadios: at the end of the day you're going to need to deal with resource contention between the things running ON your cloud and the things RUNNING your cloud. One has to have a higher priority.. and hyper-convergence of everything as you describe is... an advanced topic | 13:26 |
admin1 | one acceptable way is to only run ssh on your server, then create the necessary bridges for traffic and then create 2x virtual machines, one for the controller and one for the hypervisor | 13:27 |
admin1 | then you can use them | 13:27 |
mgariepy | anyone see weird network traffic, like public network leaking to mgmt network ? | 13:27 |
admin1 | mgariepy, it depends on the vlan and routing | 13:28 |
mgariepy | i see ARP req/res for the API IP (which is on vlan XXX) passing on the mgmt vlan (which is vlan YYY); no routing between them, and the traffic is on the same L2, if i force `ping -I vlanYYY api_ip` | 13:30 |
mgariepy | only for IPs/VLANs that are on the controller | 13:30 |
mgariepy | admin1, not l3 in this case. | 13:30 |
NeilHanlon | my guess is that you're leaking routes in the default table between those two vlans on the controller, mgariepy | 13:35 |
NeilHanlon | fib routes, i mean | 13:36 |
mgariepy | is it leaked via something like rp_filter = 0 ? | 13:36 |
NeilHanlon | how are vlanX/vlanY set up? single interface w/ 802.1q on top? | 13:41 |
NeilHanlon | basically it seems like vlanX and vlanY share a bridge, and your host is flooding traffic between them, acting like a router and proxying arp requests | 13:41 |
mgariepy | the 2 are on the controllers. bond > vlanX and bond > vlanY > bridge | 13:41 |
NeilHanlon | oh, hm | 13:42 |
mgariepy | the IP in question is on vlanX, and on the bridge for the other network. | 13:42 |
mgariepy | the leak is the vlanX IP passing, for some reason, on vlanY. | 13:42 |
mgariepy | great for performance.. but.. meh haha | 13:43 |
NeilHanlon | what does `bridge vlan show` show, out of curiosity? | 13:44 |
mgariepy | 1 PVID Egress Untagged | 13:45 |
mgariepy | all of them | 13:45 |
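One thing worth ruling out here (an assumption, not a confirmed diagnosis): the default Linux "ARP flux" behaviour, where the kernel answers ARP for any local address on any interface, can look exactly like one VLAN's IP leaking onto another. A quick check/restriction sketch:

```sh
# Show the current ARP/reverse-path behaviour on the controller
sysctl net.ipv4.conf.all.arp_ignore \
       net.ipv4.conf.all.arp_announce \
       net.ipv4.conf.all.rp_filter

# Only reply to ARP when the target IP is configured on the receiving interface,
# and prefer that interface's own address when sending ARP requests:
sysctl -w net.ipv4.conf.all.arp_ignore=1
sysctl -w net.ipv4.conf.all.arp_announce=2
```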
hamburgler | noonedeadpunk: yes was me :) thanks very much! | 13:48 |
NeilHanlon | off topic: https://youtu.be/uq6BJCakbtA | 13:51 |
NeilHanlon | mgariepy: let me poke around in my lab and see what we can do | 13:54 |
NeilHanlon | s/we/I/ | 13:54 |
mgariepy | great thanks :D | 13:55 |
mgariepy | i think it's because of the lxc iptables rules. | 14:13 |
Tadios | NeilHanlon admin1 : okay so, here is the case: we have about 4 Dell PowerEdge 730 servers at the office and I'm tasked with deploying a private cloud on them for internal services. I'm confused about which would be a good way to set up OpenStack to utilize the hardware resources and OpenStack services | 14:23 |
alizer | Hi, We are trying to deploy openstack-ansible, but I'm getting stuck on the task "os_keystone : Wait for service to be up" when running the setup-openstack.yml playbook. | 15:07 |
alizer | It tries to connect over http, but haproxy is set up with SSL for port 5000, which means it fails. I've made sure to install the CA on the deploy host and get no SSL errors when using curl towards the same URL (with https instead of http). | 15:07 |
alizer | I've also set "openstack_service_publicuri_proto: https" in user_variables.yml. | 15:07 |
alizer | What else can I do to ensure that it uses SSL? | 15:07 |
admin1 | Tadios all servers have equal resources ? | 15:10 |
admin1 | server1 => ssh, lxd .. create a deploy container .. install kvm and create a virtual controller, add some space and enable nfs for cinder/ceph, use the other 3 as computes | 15:11 |
admin1 | alizer, can u paste the user_variable configs ? | 15:11 |
alizer | user_variables.yml: https://paste.openstack.org/show/bCs0exmz905QPqYMzXMp/ | 15:15 |
anskiy | alizer: that variable is for external endpoint, for internal you set `openstack_service_internaluri_proto: https` | 15:16 |
Tadios | admin1 for the most part yes, and they are also beefy | 15:17 |
anskiy | alizer: here is the task: https://opendev.org/openstack/openstack-ansible-os_keystone/src/branch/master/tasks/keystone_service_bootstrap.yml#L18, and here is how this variable is defined: https://opendev.org/openstack/openstack-ansible/src/branch/master/inventory/group_vars/all/keystone.yml#L35 | 15:19 |
spatel | I have created kolla multi-node lab using LXD and now going to try openstack-ansible lab - https://satishdotpatel.github.io/build-multinode-kolla-lab-using-lxd/ | 15:28 |
spatel | It's super easy to use LXD for this kind of lab and it gives you a production feel with isolated components | 15:29 |
alizer | Thanks anskiy, I totally missed that part of it. Seems it still fails as it still tries to check over http, but now it at least checks over https also. The change also affected the pip part of the playbook as it tried to connect to the repo over SSL, which was not set up by the playbook. I edited the haproxy configuration to use SSL for the repo and could continue. Here is the output: | 15:34 |
alizer | https://paste.oderland.com/?55edc8b014d254d3#2VNs3tdDQzMF3TxUMQmjPV59cxtPf6pndmN6BLkwuEsG | 15:34 |
anskiy | alizer: ah, my bad. It seems that it fails on checking `keystone_service_adminuri`, so you need `openstack_service_adminuri_proto: https` too :) | 15:40 |
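So the three protocol overrides end up looking like this in user_variables.yml (a sketch assuming TLS is terminated on haproxy for the public, internal and admin endpoints alike):

```yaml
# user_variables.yml
openstack_service_publicuri_proto: https
openstack_service_internaluri_proto: https
openstack_service_adminuri_proto: https
```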
jrosser | alizer: anskiy this looks to test the internal endpoint https://opendev.org/openstack/openstack-ansible-os_keystone/src/branch/master/tasks/keystone_service_bootstrap.yml#L18 | 15:41 |
jrosser | but the error in the paste is for failing to connect to `http://openstack-poc.geic.se:5000` | 15:41 |
jrosser | which feels like an external endpoint URL | 15:41 |
anskiy | yeah, the proper place is here now: https://opendev.org/openstack/openstack-ansible-os_keystone/src/branch/master/tasks/main.yml#L216 | 15:42 |
anskiy | these two tasks are called `Wait for service to be up` and `Wait for services to be up` (note the "s" on the end of the second) | 15:43 |
jrosser | right - this still feels like internal/external endpoint confusion in openstack_user_config.yml | 15:43 |
alizer | We are currently using the same internal and external endpoint (this is a POC). It sounds like you think that is a bad idea for a production install. I'm guessing that the external endpoint only needs to expose a more limited number of API methods compared to the internal one. | 15:44 |
jrosser | oh if by "the same" you mean the same IP then you can't do that | 15:44 |
jrosser | the external endpoint is the one your users use | 15:45 |
jrosser | the internal one is used by the internal components of the cloud | 15:46 |
jrosser | if they end up on the same IP then the deployment is broken, as you can't bind the same ip/port twice | 15:47 |
alizer | yeah, internal_lb_vip_address and external_lb_vip_address are currently both set to the same domain (pointed to the same IP). I've not yet run into something failing due to that, but we can certainly change them to be 2 different IPs and domains. I'm guessing that the previous playbooks need to be run again after that change. | 15:51 |
alizer | or, I guess you could say that my initial problem might have been related to this ;) | 15:51 |
jrosser | the internal endpoint should be an IP chosen from the mgmt network | 15:55 |
jrosser | and you should make sure that it is included in 'used_ips' so that it is not accidentally allocated to something else by the ansible inventory | 15:56 |
jrosser | there are good examples of all this in the etc/ directory of the openstack-ansible repo | 15:57 |
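A minimal sketch of the relevant openstack_user_config.yml pieces; the addresses and the domain are example values only:

```yaml
# openstack_user_config.yml (sketch)
cidr_networks:
  container: 172.29.236.0/22

used_ips:
  # Reserve the internal VIP (and any other statically assigned addresses)
  # so the dynamic inventory never hands it out to a container.
  - "172.29.236.1,172.29.236.50"

global_overrides:
  # Internal VIP: an address on the mgmt network, used by the cloud's own services.
  internal_lb_vip_address: 172.29.236.9
  # External VIP: the name/address your users reach -- must not be the same IP.
  external_lb_vip_address: openstack-poc.example.com
```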
opendevreview | Merged openstack/openstack-ansible-lxc_hosts master: Cleanup old OS support https://review.opendev.org/c/openstack/openstack-ansible-lxc_hosts/+/886597 | 18:09 |
opendevreview | Merged openstack/openstack-ansible-plugins master: Fix linters and metadata https://review.opendev.org/c/openstack/openstack-ansible-plugins/+/888684 | 18:12 |