noonedeadpunk | damiandabrowski: can you kindly check https://review.opendev.org/c/openstack/openstack-ansible/+/887513 ? | 09:25 |
damiandabrowski | noonedeadpunk: done | 09:34 |
opendevreview | Simon Hensel proposed openstack/openstack-ansible-os_cinder master: Reduce memory consumption in Cinder services https://review.opendev.org/c/openstack/openstack-ansible-os_cinder/+/888027 | 09:39 |
opendevreview | Danila Balagansky proposed openstack/openstack-ansible-os_ceilometer master: Enable Ceilometer resource cache https://review.opendev.org/c/openstack/openstack-ansible-os_ceilometer/+/888032 | 11:38 |
opendevreview | Merged openstack/openstack-ansible stable/2023.1: Bump SHAs for stable/2023.1 https://review.opendev.org/c/openstack/openstack-ansible/+/887513 | 11:51 |
mgariepy | https://stgraber.org/2023/07/10/time-to-move-on/ | 12:37 |
noonedeadpunk | yeah. actually moving lxd from linux containers to canonical directly is quite meh.... | 13:00 |
mgariepy | as i read it, it was because of the resignation. | 13:01 |
noonedeadpunk | or maybe vice versa? | 13:02 |
noonedeadpunk | hard to know to be frank... | 13:03 |
noonedeadpunk | as i'm not sure how the resignation would help them maintain lxd internally rather than as part of linuxcontainers | 13:05 |
mgariepy | well the detail isn't quite important anyway. | 13:06 |
noonedeadpunk | like - we don't have huge in-house expert anymore, so let's move that fully internally :D | 13:06 |
mgariepy | but yeah, tl;dr: canonical isn't the same company it was 10 years ago. | 13:06 |
mgariepy | the lxd team were all canonical employees | 13:06 |
noonedeadpunk | yeah, that's true | 13:06 |
noonedeadpunk | but the same goes for some openstack projects and redhat? | 13:07 |
noonedeadpunk | and redhat is totally not the same company it was 2 years ago... | 13:07 |
mgariepy | yeah indeed. | 13:07 |
mgariepy | everything changes. | 13:08 |
noonedeadpunk | yeah, and the trend looks like dark days for opensource overall.... | 13:08 |
mgariepy | one thing that's sure now is that it's better to be part of a community project that isn't mostly maintained by one big corp that profits from it. | 13:08 |
noonedeadpunk | yeah, true | 13:09 |
mgariepy | https://lowlevel.store/products/everything-is-open-source-t-shirt | 13:12 |
mgariepy | haha | 13:12 |
mgariepy | any talk worth watching from the vancouver summit ? | 13:19 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Replace HA policies for RabbitMQ with quorum https://review.opendev.org/c/openstack/openstack-ansible/+/873618 | 13:33 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_nova master: Add quorum queues support for the service https://review.opendev.org/c/openstack/openstack-ansible-os_nova/+/887849 | 13:34 |
noonedeadpunk | mgariepy: the summit actually had quite good topics. But I think the most interesting were the forums :D | 13:35 |
noonedeadpunk | Or well, let's put it another way - I was hardly at any sessions other than the forums | 13:35 |
jamesdenton | wish i could've been there! | 13:35 |
noonedeadpunk | Like the SCS folks got me super annoyed with their "the architecture" (and scs will likely become a thing in the EU) | 13:36 |
jamesdenton | SCS? | 13:36 |
noonedeadpunk | https://scs.community/ | 13:36 |
noonedeadpunk | founded by the german government and aimed at the whole EU as a "standard for IaaS" | 13:37 |
jamesdenton | interesting | 13:37 |
noonedeadpunk | and they've built in kolla as "the architecture" | 13:37 |
noonedeadpunk | maybe because stack-hpc also sponsored them - who knows | 13:38 |
noonedeadpunk | politics | 13:39 |
jamesdenton | seems Kolla probably has more weight behind it | 13:39 |
noonedeadpunk | but it was very small but very nice summit | 13:40 |
jamesdenton | are they yearly now? | 13:41 |
noonedeadpunk | yup | 13:41 |
noonedeadpunk | or well, nobody knows if the next one will happen | 13:41 |
noonedeadpunk | or what format it will be in. | 13:42 |
jamesdenton | understandable | 13:42 |
noonedeadpunk | as there were a lot of discussions about having regional events | 13:42 |
mgariepy | wow. rockylinux base generic qcow 1.7g .. | 13:42 |
noonedeadpunk | and with openinfra europe and openinfra asia happening, it's even more up in the air now than before | 13:43 |
noonedeadpunk | but yeah, it's a pity you weren't there | 13:47 |
Moha | Is it possible to shut down an instance safely from the linux host where the controller is? | 14:02 |
Moha | We have a compute node whose disk has problems, but the instances are running. we want to shut down the compute server, but the instances should be shut down cleanly so we can be sure we don't corrupt their filesystems. | 14:05 |
admin1 | yes | 14:14 |
admin1 | you can do a virsh shutdown | 14:14 |
admin1 | horizon might show it still running, but ignore that | 14:15 |
jrosser | you can probably also do the same using horizon as admin, or the openstack CLI | 14:15 |
admin1 | you have bigger issues at hand :) (disk issue) | 14:15 |
jrosser | Moha: can I ask what could be better about the upgrade instructions, in #openstack you felt they were "not mature"? | 14:16 |
Moha | jrosser: I meant the lack of maturity of the document. It's not written per service and per node, and the rollback plan isn't mentioned at each stage | 15:00 |
jrosser | Moha: 'per node' is entirely up to you to use ansible `--limit` as you see fit | 15:01 |
Moha | admin1: By virsh, or by `openstack server stop` - is it a graceful shutdown? | 15:01 |
jrosser | Moha: and 'per service' - you can run any sub-component of playbooks/setup-openstack.yml you want to, on its own | 15:02 |
Moha | The KVM-side solutions didn't work: I/O error! I was thinking of sending SIGTERM (kill -15) to the instance's process id! | 15:05 |
Moha | jrosser: thanks for following up. I will try those out, testing in my lab | 15:07 |
jrosser | Moha: you need to take some care over the upgrade of the control plane, like the database and rabbitmq should be done as a cluster rather than with --limit usually | 15:07 |
jrosser | but if you have lots of computes, you can do them in batches or however else you like | 15:08 |
mgariepy | if the upgrade spans multiple days you might need to refresh the ansible fact cache. | 15:10 |
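[editor's note] jrosser's and mgariepy's points about scoping an OSA upgrade can be sketched like this. It is only a sketch: the `run` wrapper echoes each command instead of executing it, the hostnames (`compute01`, `compute02`) are hypothetical, and the fact-cache path is the usual OSA default.

```shell
#!/bin/sh
# Sketch: scoping an openstack-ansible upgrade per service / per node batch.
# run() only echoes the commands, so this is inert; drop the echo to run it.
run() { echo "+ $*"; }

# Clustered control-plane parts (galera, rabbitmq) should be upgraded as a
# whole cluster, NOT with --limit:
run openstack-ansible galera-install.yml
run openstack-ansible rabbitmq-install.yml

# Any sub-playbook of setup-openstack.yml can be run on its own, and
# computes can be done in batches via --limit (hostnames hypothetical):
run openstack-ansible os-nova-install.yml --limit compute01,compute02

# If the upgrade spans several days, clear the cached facts so they are
# regathered fresh on the next run:
run rm -f /etc/openstack_deploy/ansible_facts/*
```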
admin1 | Moha, virsh shutdown does a graceful in-guest shutdown | 15:13 |
admin1 | if the api is broken or you cannot do an openstack graceful shutdown, virsh shutdown is the best bet | 15:14 |
admin1 | but if the disk is the issue, then the io will not let any vms or the host shut down gracefully and they get stuck .. in that case, trust the filesystem's journal and pull the cable :) | 15:15 |
noonedeadpunk | well, you might need the qemu guest agent on some OSes for shutdown to work, as Windows tends to wait for the timeout and then get destroyed | 15:35 |
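[editor's note] The virsh-based procedure admin1 describes can be sketched roughly as below. Again a hedged sketch, not a verbatim runbook: the `run` wrapper only echoes commands so the script is safe anywhere, and the domain names are hypothetical (on a real host they would come from `virsh list --name`).

```shell
#!/bin/sh
# Sketch: gracefully stop all running libvirt guests on a failing compute
# node before powering it off. run() echoes instead of executing.
run() { echo "+ $*"; }

# Hypothetical running domains; for real use:  doms=$(virsh list --name)
doms="instance-0001 instance-0002"

for dom in $doms; do
    # "virsh shutdown" asks the guest to power down cleanly (ACPI, or the
    # qemu guest agent with --mode=agent) -- it is not a hard destroy.
    run virsh shutdown "$dom"
done

# Then poll until nothing is left running before shutting down the host,
# e.g.:  while [ -n "$(virsh list --name)" ]; do sleep 5; done
run virsh list --all
```

As noted in the log, guests without ACPI handling or a guest agent (Windows in particular) may just sit until the timeout, and a broken disk can prevent any graceful path at all.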
mgariepy | not sure i should mention them here but : https://www.oracle.com/news/announcement/blog/keep-linux-open-and-free-2023-07-10/ | 17:27 |
jrosser | something something zfs blah blah | 17:34 |
mgariepy | lol, yep i know | 17:35 |
mgariepy | was mostly for entertainment | 17:36 |
noonedeadpunk | I'm starting to think about adding Oracle linux images to our clouds though.... | 18:00 |
noonedeadpunk | like really, ibm did a great job making ppl think of Oracle as a real alternative to rhel... | 18:01 |
noonedeadpunk | (in the opensource world) | 18:02 |
mgariepy | great job ibm | 18:03 |
noonedeadpunk | btw, quorum_queues look to be working overall: https://review.opendev.org/q/topic:osa%252Fquorum_queues | 18:03 |
noonedeadpunk | slightly nasty things though | 18:04 |
noonedeadpunk | like changing vhost template | 18:05 |
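[editor's note] For context, the oslo.messaging switch that quorum-queue work like this ultimately flips looks roughly as follows in a service config file. A sketch only: the option requires an oslo.messaging release with quorum-queue support, and in OSA these files are templated by the roles rather than edited by hand.

```ini
# e.g. in nova.conf / cinder.conf
[oslo_messaging_rabbit]
rabbit_quorum_queue = True
```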
opendevreview | Merged openstack/openstack-ansible-haproxy_server master: Fix `regen pem` with `extra_lb_tls_vip_addresses` https://review.opendev.org/c/openstack/openstack-ansible-haproxy_server/+/887573 | 20:04 |
Generated by irclog2html.py 2.17.3 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!