farbod | Hi, i get this error for mount on infra repo: https://paste.opendev.org/show/bWzrGQqEnKzT3KFDRi9Y/ | 06:38 |
farbod | i am deploying on cloud vms and networking seems ok. I tried multiple times and get the same error | 06:38 |
farbod | and here are the logs: https://paste.opendev.org/show/bpSMO7sV9Ypid97w61DJ/ | 06:56 |
farbod | any idea? | 06:56 |
farbod | here are more logs for var/www : https://paste.opendev.org/show/bwKeDT1hgDzVqHkWdO5v/ | 07:51 |
noonedeadpunk | hey | 08:27 |
noonedeadpunk | farbod: looks like gluster cluster is not really healthy based on the output | 08:28 |
farbod | i checked the logs but didn't find anything useful. Any suggestions? Also i tried reinstalling and deleting lxc containers but it didn't work | 08:31 |
noonedeadpunk | hm, I can recall some patch to docs regarding gluster, but can't find it now | 08:31 |
noonedeadpunk | farbod: you deleted containers with removing all data? | 08:32 |
farbod | yes i did | 08:33 |
noonedeadpunk | ugh, I don't have gluster anywhere on my deployments to check some commands for debug.... | 08:33 |
noonedeadpunk | and you tried to destroy repo_containers? | 08:35 |
farbod | yes | 08:35 |
noonedeadpunk | through lxc-containers-destroy playbook? Just verifying:) | 08:36 |
jrosser | i think that this is going to be troublesome | 08:36 |
farbod | noonedeadpunk: yes | 08:37 |
noonedeadpunk | (I think it should try to remove also all bind-mounted paths) | 08:37 |
farbod | how? | 08:37 |
farbod | Want me to provide configs? | 08:38 |
jrosser | farbod: this is not about the config | 08:38 |
jrosser | it is more that the deployment really expects the environment to be properly clean when you run the playbooks | 08:38 |
jrosser | you might still have /openstack/glusterd/<stuff> left from before, which will certainly make trouble for you if you are trying to do a completely new deployment on the same host | 08:39 |
noonedeadpunk | yeah, but that should be dropped with lxc-containers-destroy... | 08:40 |
noonedeadpunk | (at least I'd expect it to be) | 08:40 |
jrosser | even the data on the host /openstack/ directory? | 08:40 |
noonedeadpunk | I think if you're answering YES on questions | 08:40 |
noonedeadpunk | https://opendev.org/openstack/openstack-ansible/src/branch/master/playbooks/containers-lxc-destroy.yml#L66-L74 | 08:41 |
jrosser | looks like that would not cover `pool/openstack on /var/lib/glusterd type zfs (rw,xattr,posixacl)` | 08:41 |
noonedeadpunk | oh well | 08:42 |
noonedeadpunk | https://opendev.org/openstack/openstack-ansible/src/branch/master/inventory/group_vars/repo_all.yml#L22 | 08:42 |
noonedeadpunk | it's not covering indeed. | 08:42 |
noonedeadpunk | probably it should? | 08:42 |
jrosser | probably..... | 08:43 |
noonedeadpunk | farbod: yeah, so try to drop containers, rm -rf /openstack/glusterd and create them again? | 08:44 |
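A minimal sketch of that clean-up, assuming the playbook names from the OSA master branch linked above; answer YES to both prompts so container data is removed too:

```bash
# On the deploy host, from /opt/openstack-ansible/playbooks:
openstack-ansible containers-lxc-destroy.yml --limit repo_all   # answer YES twice

# On EACH controller host, wipe the bind-mount source that the
# playbook does not clean up:
rm -rf /openstack/glusterd

# Back on the deploy host, re-create the containers and re-deploy the repo:
openstack-ansible containers-lxc-create.yml --limit repo_all
openstack-ansible repo-install.yml
```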
farbod | ok | 08:45 |
farbod | ok | 08:45 |
noonedeadpunk | or at least worth documenting that | 08:45 |
noonedeadpunk | really wonder what the expected behaviour should be in this case | 08:46 |
noonedeadpunk | Like - gluster is kinda container data | 08:46 |
jrosser | imho this is going to then fail during re-deployment of the repo container in a HA deployment | 08:51 |
jrosser | as the gluster UUID of the node will change and need to be removed from all the other cluster members before continuing | 08:51 |
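For context, a hedged sketch of the manual step jrosser describes, run from a surviving cluster member (the hostnames are illustrative); replicated volumes additionally need their bricks healed or replaced afterwards:

```bash
gluster peer status                                 # the rebuilt node shows up with a stale UUID
gluster peer detach infra1-repo-container-old force # drop the dead member from the pool
gluster peer probe infra1-repo-container-new        # re-add the re-deployed node
```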
jrosser | i think that andrewbonney spent quite some time here looking to fix this case when we did our Focal->Jammy upgrades which amount to the same thing (loss of gluster data when re-pxe the hosts) | 08:53 |
noonedeadpunk | But I guess you then just destroy containers without removing a data? | 09:03 |
farbod | I encountered another problem: https://paste.opendev.org/show/bPEppMD7tkGs8iQGLuZP/ | 09:09 |
farbod | what should i check? | 09:09 |
noonedeadpunk | farbod: are you sure everything is fine with networking? | 09:10 |
noonedeadpunk | I guess you really should check on gluster state and why it's not operational | 09:10 |
farbod | i have ping from deployment host to infras and containers. | 09:11 |
noonedeadpunk | what's `gluster volume status`, `gluster peer status`, `gluster pool list`? | 09:12 |
noonedeadpunk | farbod: well, ping doesn't mean you're not filtering 24007/24008 TCP/UDP ports | 09:13 |
noonedeadpunk | *doesn't show | 09:13 |
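Spelled out, a quick sketch of those checks from inside one repo container (peer hostnames taken from the pastes below):

```bash
gluster peer status
gluster pool list
gluster volume status

# ping alone doesn't prove the gluster ports are reachable; probe 24007
# (management) and 24008 TCP towards the other members (add -u for UDP):
nc -zv infra2-repo-container-fc3b079d 24007
nc -zv infra3-repo-container-d9264ea4 24007
```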
farbod | https://paste.opendev.org/show/bXTbukfbMi9CysLis5bv/ | 09:14 |
farbod | `gluster volume status` doesn't execute | 09:15 |
farbod | root@infra1-repo-container-ddba2e05:~# gluster volume status | 09:16 |
farbod | Staging failed on infra2-repo-container-fc3b079d. Please check log file for details. | 09:16 |
farbod | Staging failed on infra3-repo-container-d9264ea4. Please check log file for details. | 09:16 |
noonedeadpunk | I really have quite limited experience with gluster as we're just using cephfs instead of it | 09:24 |
farbod | i am using ceph as my backend storage. is there a problem? | 09:24 |
noonedeadpunk | no, it's not related | 09:25 |
noonedeadpunk | it's more that openstack-ansible installs glusterfs by default, but it can be pretty much any shared FS instead | 09:25 |
noonedeadpunk | like NFS or CephFS | 09:26 |
noonedeadpunk | and you can disable installation of glusterfs in favor of some different fs | 09:26 |
noonedeadpunk | (which we did as we had cephfs anyway) | 09:26 |
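To illustrate the shape of that approach (a sketch only; the variable name below is hypothetical, so check the repo_server role defaults for the real knob, and the shared filesystem has to be mounted at the repo path separately):

```bash
cat >> /etc/openstack_deploy/user_variables.yml <<'EOF'
# hypothetical variable name: skip the default glusterfs deployment for the
# repo containers and rely on an externally-managed shared FS (CephFS/NFS)
repo_server_enable_glusterfs: false
EOF
```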
noonedeadpunk | so, and what's in logs? | 09:27 |
farbod | which logs? glusterd? | 09:27 |
farbod | https://paste.opendev.org/show/bjJaEC9BMKm8j0nqkvhZ/ | 09:28 |
noonedeadpunk | so... and this time you dropped all 3 repo containers at the same time and wiped /openstack/glusterd on each controller? | 09:31 |
farbod | yes | 09:31 |
noonedeadpunk | huh | 09:32 |
noonedeadpunk | it's weird kinda, as apparently it expects some metadata that's not there..... | 09:33 |
noonedeadpunk | I guess I'd need to try to reproduce flushing repo container to help you out... | 09:33 |
jrosser | isn't that missing data because /openstack/glusterd got wiped? | 09:56 |
jrosser | and that's pretty much the expected behaviour when re-creating repo containers in this situation, i.e. re-initialise everything in /var/lib/glusterd | 09:57 |
jrosser | farbod: it looks like you have perhaps trouble with gluster locks? https://docs.gluster.org/en/main/Troubleshooting/troubleshooting-filelocks/ | 10:08 |
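From that guide, roughly: take a statedump to see which locks are held, then clear them. The volume name below is illustrative, and the clear-locks syntax follows the gluster docs:

```bash
gluster volume statedump gfs-repo                     # dumps state files under /var/run/gluster/
grep -E 'inodelk|entrylk' /var/run/gluster/*.dump.*   # look for held/blocked locks

# clear a granted lock on a specific path:
gluster volume clear-locks gfs-repo /path/to/file kind granted inode 0,0-0
```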
noonedeadpunk | jrosser: but I thought that re-running role should get gluster configured fully? | 10:20 |
jrosser | i can only guess that there is some left over state somewhere | 10:20 |
jrosser | the playbooks deploy from clean host -> working host, and as far as i can see farbod is trying to deploy onto a partially cleaned / recycled host | 10:21 |
jrosser | as i say andrewbonney already put a large effort into trying to automate the case where you totally redeploy one host, and it is not really possible | 10:22 |
jrosser | manual intervention is required | 10:22 |
farbod | jrosser: what should i do for locks? | 10:25 |
jrosser | i can only suggest using the glusterfs documentation and following their debug steps | 10:25 |
jrosser | the openstack-ansible playbooks are not able to fix some partially broken gluster setup, they just deploy it from a known clean state | 10:26 |
noonedeadpunk | jrosser: yeah, but we should be able to clean up state for all hosts, losing the data, relatively easily? | 10:29 |
noonedeadpunk | as I guess that's the intention here | 10:29 |
jrosser | yes that should be possible | 10:30 |
jrosser | i can try an infra AIO | 10:30 |
jrosser | farbod: what operating system do you use? | 10:31 |
farbod | ubuntu 22.04 | 10:31 |
jrosser | farbod: it seems to be OK in my test here https://paste.opendev.org/show/brhn8zbVktnw6npEil3Y/ | 11:29 |
opendevreview | Jonathan Rosser proposed openstack/openstack-ansible-os_neutron stable/2023.2: Restart OVN on certificate changes https://review.opendev.org/c/openstack/openstack-ansible-os_neutron/+/914013 | 11:36 |
opendevreview | Jonathan Rosser proposed openstack/openstack-ansible-os_neutron stable/2023.1: Restart OVN on certificate changes https://review.opendev.org/c/openstack/openstack-ansible-os_neutron/+/914014 | 11:37 |
opendevreview | Jonathan Rosser proposed openstack/openstack-ansible-os_horizon stable/2023.2: Do not change mode of files recursively https://review.opendev.org/c/openstack/openstack-ansible-os_horizon/+/914015 | 11:38 |
opendevreview | Jonathan Rosser proposed openstack/openstack-ansible-os_horizon stable/2023.1: Do not change mode of files recursively https://review.opendev.org/c/openstack/openstack-ansible-os_horizon/+/914016 | 11:38 |
opendevreview | Jonathan Rosser proposed openstack/openstack-ansible-os_horizon stable/zed: Do not change mode of files recursively https://review.opendev.org/c/openstack/openstack-ansible-os_horizon/+/914017 | 11:39 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_cinder master: Add service policies defenition https://review.opendev.org/c/openstack/openstack-ansible-os_cinder/+/914086 | 13:08 |
noonedeadpunk | That is going to be quite big topic I assume.... | 13:59 |
noonedeadpunk | I need some help deciding on logic of how policies should be defined actually.... | 14:25 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-plugins master: Leave only unique policies for __mq_policies https://review.opendev.org/c/openstack/openstack-ansible-plugins/+/914092 | 14:26 |
noonedeadpunk | Like - I'm not sure if service policies should override or extend default policies... | 14:27 |
noonedeadpunk | and where to define them, as the current state feels very confusing | 14:27 |
noonedeadpunk | so we have `oslomsg_rpc_policies` defined in group_vars: https://opendev.org/openstack/openstack-ansible/src/branch/master/inventory/group_vars/all/oslo-messaging.yml#L21 | 14:27 |
noonedeadpunk | And then we kinda merge things here: https://opendev.org/openstack/openstack-ansible-plugins/src/branch/master/roles/mq_setup/tasks/main.yml#L21 | 14:28 |
noonedeadpunk | but we kinda don't have _oslomsg_rpc_policies defined anywhere now, and it was assumed that we pass it during role include | 14:29 |
noonedeadpunk | and basically the question: https://review.opendev.org/c/openstack/openstack-ansible-os_cinder/+/914086 | 14:30 |
noonedeadpunk | so I proposed doing `_oslomsg_rpc_policies: "{{ cinder_oslomsg_rpc_policies }}"` | 14:30 |
noonedeadpunk | but thinking about it - it's kinda obscure, as cinder_oslomsg_rpc_policies is an empty list, but that does not mean the vhost won't have any policies | 14:31 |
noonedeadpunk | as it will have the defaults defined in oslomsg_rpc_policies | 14:31 |
noonedeadpunk | So thinking about it I was guessing, if we should set `cinder_oslomsg_rpc_policies: "{{ oslomsg_rpc_policies }}"` by default... | 14:32 |
noonedeadpunk | but then we should name it somehow differently or remove list merging from the role, and neither is perfect | 14:32 |
noonedeadpunk | or leave it this obscure way, and just promote it as the ability to merge/partially override defaults, which is quite neat... | 14:33 |
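A toy demonstration of the two semantics being weighed, runnable ad-hoc (the variable names are illustrative, not the OSA ones); note how `default(..., true)` makes an empty service list fall back to the global defaults, which is exactly the "obscure" empty-list case above:

```bash
# extend: the service policies are unioned with the defaults
ansible localhost -m debug \
  -e '{"global_policies":["ha-all"],"service_policies":["ttl"]}' \
  -a "msg={{global_policies+service_policies}}"

# override: the service list replaces the defaults, but an empty (falsy)
# list falls back to them
ansible localhost -m debug \
  -e '{"global_policies":["ha-all"],"service_policies":[]}' \
  -a "msg={{service_policies|default(global_policies,true)}}"
```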
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_cinder master: Add variable to globally control notifications enablement https://review.opendev.org/c/openstack/openstack-ansible-os_cinder/+/914100 | 15:18 |
ThiagoCMC | jrosser, just a quick update... I managed to deploy OSA AIO (2023.2) with Ceph (stable-8) last week but, IBM changed it again, removing a lot of other things from `ceph-ansible`, so, it's failing again lol | 15:56 |
ThiagoCMC | Have you tried it again? Or busy with other stuff...? | 15:57 |
jrosser | ThiagoCMC: well i did say before that the thing to do was concentrate on deploying Reef with stable-7.0 and find the vars needed for that | 15:57 |
jrosser | there is no point using stable-8.0 for anything because it is not a released version yet and will keep changing | 15:58 |
ThiagoCMC | Sure, I did that too, it works if OSA does not PIN ceph_community_pin.pref and ceph_client_pin.pref, since they need to come from Ubuntu's UCA. That's why I went to stable-8 now... I'm aiming at the next releases of all pieces anyway... It's okay! I'll keep trying. | 16:01 |
jrosser | ThiagoCMC: but what do you want for the next OSA release? | 16:02 |
jrosser | it would be really great if you were able to make that deploy reef properly..... | 16:03 |
jrosser | feels like 99% of the understanding is ready | 16:03 |
jrosser | if you make the changes to the OSA vars to default to reef, and to move the pins for the repos from quincy to reef then we can have that in the next release of OSA | 16:05 |
noonedeadpunk | ++ | 16:08 |
noonedeadpunk | pinning doesn't sound like a real blocker - we should be able to patch that nicely | 16:09 |
jrosser | indeed it should be easy | 16:09 |
jrosser | this is kind of a nice low-hanging-fruit patch, if you already worked out something that's good | 16:09 |
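The shape of that patch, as a hedged sketch; `ceph_stable_release` is the ceph-ansible variable, while the apt pin preference files mentioned earlier would need their quincy references moved to reef inside the OSA roles themselves:

```bash
cat >> /etc/openstack_deploy/user_variables.yml <<'EOF'
ceph_stable_release: reef   # the default was still quincy at the time of this log
EOF
```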
jrosser | ceph-ansible stable8.0 is kind of a distraction to all this really | 16:11 |
ThiagoCMC | I hope to deploy OpenStack Caracal and Ceph Reef with next OSA and Ceph Ansible (hopefully with stable-8.0) on Ubuntu 24.04. | 16:11 |
ThiagoCMC | However, if you folks think it's best to stick with Ceph Ansible stable-7.0, I can focus on it a bit more, and make a quick step-by-step guide. | 16:13 |
jrosser | ceph-ansible stable-8.0 is not released and will continue to change | 16:14 |
jrosser | if you're going to spend time on it please submit patches to OSA to switch from quincy to reef using the stable-7.0 branch | 16:15 |
jrosser | you know also that OSA C won't be supported/tested at all on 24.04 | 16:16 |
ThiagoCMC | Okay. BTW, I joined Ceph Slack too, I'll keep in touch with IBM folks, in case we need more help. | 16:16 |
ThiagoCMC | Hmmm... Do you know which limitations are likely to exist on OSA C with 24.04? | 16:17 |
jrosser | that's a bit of a chicken/egg situation until we have a CI image for it | 16:17 |
ThiagoCMC | Got it =P | 16:17 |
jrosser | and also none of the actual openstack projects (nova / cinder etc) have done any testing on 24.04, and those release for Caracal next week | 16:19 |
ThiagoCMC | Right, but Ubuntu 24.04 plans to include Caracal on it... | 16:21 |
ThiagoCMC | https://discourse.ubuntu.com/t/noble-numbat-release-notes/39890 | 16:21 |
ThiagoCMC | Perhaps even Ceph 19! lol | 16:21 |
jrosser | And they will provide you support on that of course | 16:22 |
ThiagoCMC | For 5 years =P | 16:23 |
jrosser | I'm just saying that Canonical are prepared to provide packages for Caracal on 24.04, and provide you support on an openstack/python version that the upstream openstack does not support | 16:23 |
ThiagoCMC | Good point... | 16:23 |
jrosser | so if you deploy OSA Caracal on 24.04, find it broken due to python 3.12, and ask nova team for help, you probably won't get it | 16:24 |
ThiagoCMC | Thank you for pointing this out! | 16:24 |
jrosser | here is where that supported version stuff gets defined https://governance.openstack.org/tc/reference/runtimes/2024.1.html | 16:25 |
ThiagoCMC | Cool! Well, Python 3.11 is still available on 24.04 (dev) | 16:27 |
noonedeadpunk | ThiagoCMC: available or default? | 16:40 |
ThiagoCMC | `/usr/bin/python3: symbolic link to python3.12` =P | 16:42 |
noonedeadpunk | Well, I've heard that py3.12 is breaking openstack quite heavily in multiple aspects | 16:42 |
noonedeadpunk | there was eventlet topic as one example | 16:43 |
ThiagoCMC | Yeah, I wasn't thinking about this until today, TBH. | 16:43 |
noonedeadpunk | and then quite some things related to setuptools iirc | 16:44 |
noonedeadpunk | and then projects stopped providing uwsgi scripts due to that, which was discussed to be tracked as a community goal | 16:44 |
noonedeadpunk | I'm not really sure how Canonical is handling that actually | 16:45 |
ThiagoCMC | It seems wise to build a new Cloud with 22.04/Bobcat/Reef instead of going crazy with 24.04 at this time. Perhaps Caracal and Ceph 19 will be available in UCA for 22.04, then there'll be more time to figure things out with 24.04, ceph ansible, etc... | 16:46 |
noonedeadpunk | `This is not mandatory testing in the 2024.2 cycle, and there is no guarantee that the OpenStack 2024.2 release will support Python 3.12.` | 16:46 |
noonedeadpunk | https://governance.openstack.org/tc/reference/runtimes/2024.2.html#python | 16:46 |
noonedeadpunk | So actually, OpenStack might not be ready for 3.12 even with Dalmatian | 16:46 |
ThiagoCMC | Nice link! | 16:47 |
noonedeadpunk | So really no idea how canonical are going to do that.... | 16:47 |
ThiagoCMC | Snaps... lol | 16:48 |
noonedeadpunk | lol | 16:51 |
noonedeadpunk | you need to pack code anyway | 16:51 |
noonedeadpunk | and code is not there | 16:51 |
noonedeadpunk | so if they fix all issues downstream - well... | 16:51 |
noonedeadpunk | (and not contribute them back) | 16:51 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_cinder master: Add variable to globally control notifications enablement https://review.opendev.org/c/openstack/openstack-ansible-os_cinder/+/914100 | 18:03 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_cinder master: Implement variables to address oslo.messaging improvements https://review.opendev.org/c/openstack/openstack-ansible-os_cinder/+/914143 | 18:03 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_cinder master: Add variable to globally control notifications enablement https://review.opendev.org/c/openstack/openstack-ansible-os_cinder/+/914100 | 18:15 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_cinder master: Implement variables to address oslo.messaging improvements https://review.opendev.org/c/openstack/openstack-ansible-os_cinder/+/914143 | 18:15 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-plugins master: Rename _oslomsg_configure_* to _oslomsg_*_configure https://review.opendev.org/c/openstack/openstack-ansible-plugins/+/914144 | 18:23 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_cinder master: Add variable to globally control notifications enablement https://review.opendev.org/c/openstack/openstack-ansible-os_cinder/+/914100 | 18:23 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_cinder master: Implement variables to address oslo.messaging improvements https://review.opendev.org/c/openstack/openstack-ansible-os_cinder/+/914143 | 18:24 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-plugins master: Leave only unique policies for __mq_policies https://review.opendev.org/c/openstack/openstack-ansible-plugins/+/914092 | 18:25 |
noonedeadpunk | it would be really nice if someone could have a quick look over this topic ^ and leave some comments before I proceed with more services | 18:25 |
noonedeadpunk | as I'm not 100% sure about some things I've mentioned earlier, but we kinda need that for 2024.1 I guess... | 18:26 |
noonedeadpunk | but that should be done with switching to quorum queues for sure... | 18:29 |
noonedeadpunk | as all these are breaking changes and would be good to be done just on a fresh vhost, which is what we're doing with the quorum migration | 18:30 |
opendevreview | James Denton proposed openstack/openstack-ansible-os_skyline master: A new override, `skyline_client_max_body_size`, has been introduced to support large image uploads via the Skyline dashboard. The default value of 1100M supports upstream Ubuntu and Rocky Linux images, but can be increased to support larger images or decreased to encourage the use of the CLI. https://review.opendev.org/c/openstack/openstack-ansible-os_skyline/ | 18:57 |
opendevreview | James Denton proposed openstack/openstack-ansible-os_skyline master: Support large uploads via Skyline https://review.opendev.org/c/openstack/openstack-ansible-os_skyline/+/914149 | 18:58 |
*** jamesdenton_ is now known as jamesdenton | 18:58 | |
noonedeadpunk | jamesdenton_: would be nice if you could review https://review.opendev.org/c/openstack/openstack-ansible-os_skyline/+/912333 :) | 19:36 |
jamesdenton_ | oh surrrre | 19:37 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_skyline master: Add EL distro support https://review.opendev.org/c/openstack/openstack-ansible-os_skyline/+/912370 | 19:38 |
noonedeadpunk | fwiw, this is also super close I guess https://review.opendev.org/c/openstack/openstack-ansible/+/859446 | 19:38 |
noonedeadpunk | the only thing I spotted lately, which I guess just skyline bug, is that it's not possible to create networks as user | 19:39 |
jamesdenton_ | hmm, i can try to verify in our env | 19:40 |
opendevreview | James Denton proposed openstack/openstack-ansible-os_skyline master: Support large uploads via Skyline https://review.opendev.org/c/openstack/openstack-ansible-os_skyline/+/914149 | 19:40 |
noonedeadpunk | jamesdenton_: would be quite nice frankly speaking, as I'm a bit /o\ wtf | 19:41 |
jamesdenton_ | do you get an error? | 19:42 |
noonedeadpunk | well. when trying to create there're just no AZs | 19:42 |
jamesdenton_ | ahh, and does it error on no AZ selected? | 19:42 |
noonedeadpunk | and when trying to open some network created in horizon there's an error, yes | 19:42 |
jamesdenton_ | there is a more recent skyline patch for that | 19:42 |
noonedeadpunk | it just does not let you proceed | 19:43 |
jamesdenton_ | right, it was required with * | 19:43 |
jamesdenton_ | that has been fixed upstream | 19:43 |
noonedeadpunk | but you don't get error as admin trying to open networks | 19:43 |
noonedeadpunk | I guess I tried even with master like.... 3 days ago? | 19:43 |
jamesdenton_ | hmm | 19:43 |
noonedeadpunk | maybe missed smth ofc.... like skyline-console... hm | 19:43 |
noonedeadpunk | good point | 19:44 |
jamesdenton_ | yes, it's skyline-console | 19:44 |
noonedeadpunk | fwiw, in 859446 I made skyline on 80/443, while horizon works under /horizon on same ports | 19:46 |
jamesdenton_ | oh nice | 19:46 |
noonedeadpunk | does that sound like fair/logical thing? | 19:46 |
jamesdenton_ | absolutely | 19:46 |
noonedeadpunk | as I was not able to run skyline under /skyline as they do kinda hardcode things in static under console | 19:47 |
jamesdenton_ | it's wonky | 19:47 |
noonedeadpunk | as originally wanted to do /horizon and /skyline and then some "default" | 19:47 |
noonedeadpunk | yeah | 19:47 |
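A rough illustration of that routing (a sketch only; OSA generates its haproxy config from service definitions, and the frontend/backend names here are made up):

```bash
cat > /tmp/skyline-frontend-sketch.cfg <<'EOF'
frontend openstack_public
    bind :443 ssl crt /etc/ssl/private/haproxy.pem
    acl is_horizon path_beg /horizon
    use_backend horizon-back if is_horizon    # horizon stays under /horizon
    default_backend skyline-back              # skyline answers everything else on 80/443
EOF
```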
noonedeadpunk | jamesdenton_: nah, looks the same as before - just upgraded | 19:49 |
jamesdenton_ | ok hmm | 19:49 |
noonedeadpunk | when I enter network as user - it says `network 8a59dddc-94c0-45e7-aaea-0b900b582602 could not be found.` while it shows in list | 19:50 |
noonedeadpunk | and AZ - no data, while it's a required drop-down :( | 19:50 |
jamesdenton_ | well, i do show the networking availability zone is blank, but it doesn't stop me from creating a network or network+subnet | 19:50 |
jamesdenton_ | that's on the NETWORK wizard, right? | 19:50 |
noonedeadpunk | um.... | 19:52 |
jamesdenton_ | noonedeadpunk https://bugs.launchpad.net/skyline-console/+bug/2035012 | 19:52 |
noonedeadpunk | hm | 19:53 |
jamesdenton_ | https://review.opendev.org/c/openstack/skyline-console/+/895797 | 19:53 |
noonedeadpunk | yeah, looking at it already | 19:54 |
jamesdenton_ | i am also working on a patch to provide customizations - but the process itself is sorta ugly | 19:54 |
noonedeadpunk | I kinda wonder... where it ends up.... | 19:55 |
noonedeadpunk | or we're doing installation in a completely wrong way | 19:55 |
jamesdenton_ | Well, it doesn't lend itself to a lot of customizations, which is the first issue | 19:57 |
jamesdenton_ | you use 'yarn' to build it, which generates the static assets, including js and images | 19:57 |
noonedeadpunk | but this should have been released I guess.... | 19:58 |
jamesdenton_ | and then you pip install it when done | 19:58 |
noonedeadpunk | or well.... | 19:58 |
noonedeadpunk | I guess we're just trying to install as python package right now: https://opendev.org/openstack/openstack-ansible-os_skyline/src/branch/master/defaults/main.yml#L87-L89 | 19:59 |
jamesdenton_ | I guess you can replace the static image assets, which is what kolla seems to do | 19:59 |
jamesdenton_ | yes | 19:59 |
noonedeadpunk | so this just uses outdated static? | 20:00 |
hamburgler | is the goal to always build it from source instead of already built python packages? When I had it deployed I was just using the pip packages | 20:00 |
noonedeadpunk | hah, well. Now I see I guess.... | 20:01 |
jamesdenton_ | well, we're not trying to build the python package, but build the actual skyline react files, i think | 20:01 |
noonedeadpunk | skyline_console was last updated 2y ago | 20:01 |
hamburgler | ahhh | 20:01 |
jamesdenton_ | We're installing from here AFAIK: https://github.com/openstack/skyline-console | 20:02 |
noonedeadpunk | yeah, but pip installs just this https://github.com/openstack/skyline-console/tree/master/skyline_console | 20:02 |
noonedeadpunk | which has nothing to do with reality | 20:02 |
jamesdenton_ | hmm | 20:03 |
noonedeadpunk | ok, gotcha, I will play with yarn tomorrow | 20:04 |
jamesdenton_ | cool, a 'yarn run build' did it for me. then i committed all to my fork and installed from that and it seemed to work well | 20:04 |
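Putting that flow together, a sketch assuming `yarn run build` regenerates the static tree that pip then packages, per the exchange above:

```bash
git clone https://opendev.org/openstack/skyline-console
cd skyline-console
yarn install      # fetch the js toolchain and dependencies (slow)
yarn run build    # rebuild the react/static assets under skyline_console/
pip install .     # install the python package with the freshly built assets
```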
hamburgler | nginx being completely removed for osa deployment? | 20:05 |
jamesdenton_ | it's using nginx | 20:05 |
jamesdenton_ | in fact, i had to bump the upload size | 20:05 |
jamesdenton_ | https://review.opendev.org/c/openstack/openstack-ansible-os_skyline/+/914149 | 20:06 |
opendevreview | James Denton proposed openstack/openstack-ansible-os_skyline master: Support large uploads via Skyline https://review.opendev.org/c/openstack/openstack-ansible-os_skyline/+/914149 | 20:06 |
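From that proposed patch (914149, not merged at the time of this log), the override would look something like this in user_variables:

```bash
cat >> /etc/openstack_deploy/user_variables.yml <<'EOF'
# default in the proposed patch is 1100M; raise it for bigger images
skyline_client_max_body_size: 5120M
EOF
```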
noonedeadpunk | hamburgler: well, we still use it for the repo server. and skyline atm... | 20:08 |
noonedeadpunk | and skyline has quite some assumptions about nginx.... | 20:08 |
noonedeadpunk | ugh, skyline just got slightly more complicated now :D | 20:10 |
noonedeadpunk | It takes ages to build it.... | 20:10 |
noonedeadpunk | and frankly - that feels like being a target for repo container..... | 20:11 |
noonedeadpunk | hm... and where does 'yarn run build' put the result? :) | 20:14 |
noonedeadpunk | ah | 20:26 |
noonedeadpunk | ugh, it really takes a while.... something smart should be done here for sure.... | 20:29 |
noonedeadpunk | ok, got it working :) thanks jamesdenton_ | 20:36 |
noonedeadpunk | and, I think I found how to make it work under /skyline.... | 20:40 |
noonedeadpunk | https://opendev.org/openstack/skyline-console/src/branch/master/config/webpack.prod.js#L45-L46 | 20:41 |
hamburgler | noonedeadpunk: rabbit changes look good to me :), already running QQ as you know, but I have no issue wiping vhost again anyways to use new updates for fanout etc. | 20:47 |
hamburgler | will just do off hours | 20:48 |
jamesdenton_ | woot | 21:01 |
opendevreview | Merged openstack/openstack-ansible-os_skyline master: Re-add Zuul testing to the project https://review.opendev.org/c/openstack/openstack-ansible-os_skyline/+/912333 | 21:27 |
jrosser | noonedeadpunk: I am sure that my first patches for skyline did the yarn build | 21:35 |
jrosser | they might be in the history | 21:35 |
jrosser | ah here we go https://github.com/jrosser/openstack-ansible-os_skyline/commit/82b1f5a5e6eff9df441c96677e0aa6d578bc8552#diff-7ae20663f88c2ee2e49e28cecf7c0eeb99efdb53ec0faf27c0a50ce3dcaf2370 | 22:12 |