opendevreview | Merged openstack/openstack-ansible master: Fix infra scenario repo server cluster https://review.opendev.org/c/openstack/openstack-ansible/+/826468 | 00:45 |
*** anbanerj is now known as frenzyfriday | 05:42 | |
opendevreview | Jonathan Rosser proposed openstack/openstack-ansible stable/xena: Fix infra scenario repo server cluster https://review.opendev.org/c/openstack/openstack-ansible/+/826831 | 07:16 |
opendevreview | Jonathan Rosser proposed openstack/openstack-ansible stable/wallaby: Fix infra scenario repo server cluster https://review.opendev.org/c/openstack/openstack-ansible/+/826832 | 07:16 |
opendevreview | Andrew Bonney proposed openstack/openstack-ansible-plugins master: Add ssh_keypairs role https://review.opendev.org/c/openstack/openstack-ansible-plugins/+/825113 | 10:45 |
*** dviroel_ is now known as dviroel | 11:03 | |
opendevreview | Merged openstack/ansible-role-systemd_service master: Allow StandardOutput to be set for a systemd service https://review.opendev.org/c/openstack/ansible-role-systemd_service/+/826602 | 11:32 |
opendevreview | Merged openstack/openstack-ansible-os_nova master: Drop cell1 upgrade to template format https://review.opendev.org/c/openstack/openstack-ansible-os_nova/+/826504 | 11:34 |
opendevreview | Merged openstack/openstack-ansible-os_aodh stable/victoria: Ensure libxml2 is installed on debian systems https://review.opendev.org/c/openstack/openstack-ansible-os_aodh/+/826380 | 11:57 |
opendevreview | Jonathan Rosser proposed openstack/openstack-ansible-os_zun master: Restore CI jobs https://review.opendev.org/c/openstack/openstack-ansible-os_zun/+/824457 | 11:59 |
jrosser | not good progress here https://github.com/logan2211/ansible-etcd/pull/20 | 11:59 |
noonedeadpunk | should we pull it into opendev or just fork it to one of our githubs? | 12:20 |
jrosser | fork would get us moving quickly | 12:21 |
jrosser | if we then moved it to opendev we could gate it against the zun role | 12:21 |
noonedeadpunk | yeah, let me fix that then | 12:21 |
jrosser | but this will take like 1 week minimum for all the setup | 12:22 |
noonedeadpunk | we can move to opendev from fork anytime as well :D | 12:22 |
jrosser | absolutely, yes | 12:22 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Fix tempest plugins versions https://review.opendev.org/c/openstack/openstack-ansible/+/826129 | 12:24 |
noonedeadpunk | I hope logan- is doing fine after all... | 12:24 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Used forked etcd role https://review.opendev.org/c/openstack/openstack-ansible/+/826881 | 12:31 |
noonedeadpunk | jrosser: ^ | 12:31 |
andrewbonney | noonedeadpunk: I think there was a case of the 'static' keyword that also needed removing from the role | 12:32 |
andrewbonney | https://github.com/noonedeadpunk/ansible-etcd/blob/master/tasks/etcd_post_install_server.yml#L26 | 12:32 |
noonedeadpunk | mhm | 12:32 |
noonedeadpunk | Done | 12:34 |
opendevreview | Jonathan Rosser proposed openstack/openstack-ansible-os_zun master: Restore CI jobs https://review.opendev.org/c/openstack/openstack-ansible-os_zun/+/824457 | 12:35 |
jrosser | lets see.... | 12:35 |
jrosser | upgrade jobs may still break though | 12:35 |
noonedeadpunk | well yes, we'd need to backport this change and do bump for upgrade jobs | 12:52 |
opendevreview | Andrew Bonney proposed openstack/openstack-ansible-os_nova master: Use ssh_keypairs role to generate cold migration ssh keys https://review.opendev.org/c/openstack/openstack-ansible-os_nova/+/825306 | 13:04 |
opendevreview | Andrew Bonney proposed openstack/openstack-ansible-plugins master: Add ssh_keypairs role https://review.opendev.org/c/openstack/openstack-ansible-plugins/+/825113 | 13:05 |
admin1 | what could be the best way to override polling.yml in ceilometer so that it survives upgrades of osa ? | 14:19 |
noonedeadpunk | um, I thought it has an overrides var? | 14:19 |
noonedeadpunk | but I don't see it indeed... | 14:20 |
noonedeadpunk | ah lol I was looking in gnocchi) | 14:22 |
noonedeadpunk | there're 2 options - either provide the polling.yml file or define overrides | 14:22 |
noonedeadpunk | ceilometer_polling_default_file_path and ceilometer_polling_yaml_overrides control that | 14:22 |
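The two variables above can be set in `user_variables.yml`. A minimal sketch, assuming the standard ceilometer `polling.yml` schema; the path, source name, interval, and meter list are illustrative only:

```yaml
# Option 1: point the role at your own copy of polling.yml
# (the path below is hypothetical)
ceilometer_polling_default_file_path: "/etc/openstack_deploy/files/polling.yml"

# Option 2: override entries of the shipped polling.yml in place
# (source name, interval, and meters are made-up examples)
ceilometer_polling_yaml_overrides:
  sources:
    - name: custom_pollsters
      interval: 300
      meters:
        - cpu
        - memory.usage
```

Either way the setting survives OSA upgrades, since it lives in the deployer's configuration rather than in the role itself.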
spatel | admin1 are you using ceilometer with gnocchi. I am thinking to try out for my new cloud to see what advantage i can take | 14:30 |
noonedeadpunk | I was using it a lot one day | 14:30 |
noonedeadpunk | Billing was relying on this data | 14:31 |
noonedeadpunk | and it was pretty reliable | 14:31 |
noonedeadpunk | upgrades were a disaster though, as items could get renamed or have the design behind them changed | 14:32 |
noonedeadpunk | or just deprecated :) | 14:32 |
noonedeadpunk | But ceilometer is stable now, as it's barely maintained | 14:32 |
noonedeadpunk | and for gnocchi you need some storage... likely redis... as rbd is kind of broken. Swift/S3 is fine as well though | 14:33 |
noonedeadpunk | (rbd broken by design) | 14:33 |
noonedeadpunk | as gnocchi tries to use rbd as pure rados, which results in thousands of small objects. So you don't actually expect to see rbd sliced that much, which makes rebalances trickier. And ofc ceph complains about the pool having too many objects compared to the others | 14:35 |
noonedeadpunk | and pure rados is basically object storage, so with swift/s3 you're prepared for that workload | 14:35 |
admin1 | i am using gnocchi with redis + ceph | 14:37 |
admin1 | which reminds me, i also need to test that patch | 14:37 |
admin1 | will do next week | 14:37 |
admin1 | the gnocchi + redis patch | 14:37 |
noonedeadpunk | oh yes | 14:37 |
noonedeadpunk | would be awesome | 14:38 |
noonedeadpunk | https://review.opendev.org/c/openstack/openstack-ansible-os_gnocchi/+/822905 | 14:38 |
opendevreview | Merged openstack/openstack-ansible master: Fix infra upgrade https://review.opendev.org/c/openstack/openstack-ansible/+/826424 | 14:38 |
opendevreview | Merged openstack/openstack-ansible stable/wallaby: Fix infra scenario repo server cluster https://review.opendev.org/c/openstack/openstack-ansible/+/826832 | 14:39 |
opendevreview | Merged openstack/openstack-ansible stable/xena: Fix infra scenario repo server cluster https://review.opendev.org/c/openstack/openstack-ansible/+/826831 | 14:39 |
spatel | noonedeadpunk thanks for input | 14:47 |
spatel | i am not worried about billing etc.. because i run my own private cloud. I want some kind of monitoring and do capacity planning.. | 14:48 |
spatel | currently i have my own in-house influx/grafana to see what is going on for capacity etc.. | 14:48 |
noonedeadpunk | well the gnocchi grafana plugin is kind of a mess | 14:49 |
spatel | something like what i have currently - https://ibb.co/7bNDsLB | 14:50 |
noonedeadpunk | well, we had graphs per vm from gnocchi to see who's consuming cpu/disk/memory/network | 14:53 |
spatel | how about compute capacity? | 14:53 |
spatel | could you share graph and please hide your PI stuff | 14:54 |
noonedeadpunk | Um, well, for compute it's not that great. Or rather, it's not really designed for compute monitoring. Worth checking the metrics it's capable of gathering https://docs.openstack.org/ceilometer/latest/admin/telemetry-measurements.html | 14:55 |
noonedeadpunk | ah, well, unless you gather them with snmp) | 14:56 |
noonedeadpunk | but we didn't do that for computes | 14:56 |
noonedeadpunk | Unfortunately I haven't had access to this environment for a couple of years now :) | 14:56 |
spatel | no worry! | 14:57 |
spatel | in that case i don't want gnocchi.. :) i think i am happy with my inhouse monitoring | 14:57 |
spatel | by the way i have upgraded my W -> X | 14:59 |
spatel | now i am waiting for 24.1.0 to push out in production | 14:59 |
noonedeadpunk | ah, indeed, I should push releases I believe | 15:15 |
spatel | noonedeadpunk we are planning to build a large store for small files like audio files.. what kind of storage should i pick: ceph or glusterfs or swift? i am looking for s3 style | 15:22 |
noonedeadpunk | ceph or swift I believe | 15:23 |
spatel | currently we are using AWS s3 but soon our data is going to reach 50TB per day and i don't want to spend money on that in AWS | 15:23 |
noonedeadpunk | both support s3 compatibility | 15:23 |
spatel | which one would be easier to manage and operate :) | 15:23 |
noonedeadpunk | ceph is probably closer to aws feature-wise | 15:23 |
spatel | ceph is little complex but good overall | 15:23 |
spatel | I need to get into more details to see what storage i should be using | 15:25 |
noonedeadpunk | well it likely depends :) for me ceph is the more common solution. But I'm just not really an expert in swift, as I've never used it in production or at scale | 15:25 |
spatel | agreed, also very few folks are using swift nowadays | 15:25 |
noonedeadpunk | and ceph is not really that complicated. But yes, it needs some understanding of how it works | 15:25 |
spatel | i have one more question related our glance storage.. | 15:26 |
noonedeadpunk | they both have compatibility matrices https://docs.openstack.org/swift/latest/s3_compat.html https://docs.ceph.com/en/latest/radosgw/s3/#features-support | 15:26 |
noonedeadpunk | and ceph also supports the Swift API, just in case ;) | 15:26 |
spatel | can we add glusterfs for glance in a 3 node deployment when you don't need big storage for images... | 15:26 |
spatel | +1 i will check feature list and compare | 15:27 |
spatel | currently in my deployment i don't have shared storage for glance (we have only a handful of images, like gold images etc) | 15:28 |
noonedeadpunk | um, for gluster - isn't it just a matter of mounting the filesystem? | 15:28 |
spatel | i don't want to create dedicated shared storage for it, so i'm thinking: why don't we have an add-on service to install a small glusterfs inside glance for shared storage | 15:28 |
spatel | Yes.. | 15:29 |
noonedeadpunk | like if you mount it to /var/lib/glance/images/ that would be it? | 15:29 |
spatel | currently i am using rsync kind of tool to sync my glance images | 15:29 |
spatel | Yes mounting /var/lib/glance/images/ | 15:29 |
spatel | i found glusterfs is very simple to deploy so what if we have that add-on in OSA to say YES i need glusterfs for glance | 15:30 |
noonedeadpunk | well, I'd suggest then having ceph - it would solve all these issues :D | 15:30 |
spatel | for ceph i need mon nodes and OSD etc.. | 15:30 |
spatel | its too complicated to store 4 images :) | 15:30 |
noonedeadpunk | for gluster you also need drives? | 15:31 |
spatel | no, you can DD an image file | 15:31 |
spatel | and mount it using /dev/loop | 15:31 |
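The loopback approach mentioned here can be sketched roughly as follows. The file name and size are made up for illustration, and the privileged steps are left as comments since they require root:

```shell
# Create a file to use as backing storage (64 MiB here purely for illustration)
dd if=/dev/zero of=brick0.img bs=1M count=64

# Privileged steps (commented out - they need root):
#   losetup -f brick0.img        # attach the file to the first free loop device
#   losetup -j brick0.img        # see which /dev/loopN was assigned
#   mkfs.xfs /dev/loopN          # then format it and use it as a gluster brick
#   mount /dev/loopN /data/brick0
```

As noonedeadpunk points out further down, the OSA all-in-one uses the same loop-device trick for its ceph OSDs.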
noonedeadpunk | and what stops you doing the same for a ceph osd? :) | 15:31 |
*** dviroel is now known as dviroel|lunch | 15:31 | |
spatel | you don't need mon node | 15:31 |
noonedeadpunk | as it eventually just needs an lvm pv drive | 15:31 |
noonedeadpunk | have you ever seen how we deploy ceph in aio? :p | 15:32 |
spatel | i need a simple solution. gluster you can install with a single RPM and it replicates files between all 3 nodes | 15:32 |
spatel | :D | 15:32 |
spatel | i know it also use dd image stuff | 15:32 |
noonedeadpunk | it's 3 loop drives, a mon inside an lxc container, and an osd on the controller itself | 15:33 |
noonedeadpunk | but yeah | 15:33 |
spatel | in one of my old environments i installed glusterfs on the infra nodes, replicating my glance images | 15:33 |
noonedeadpunk | so for gluster, I think that it doesn't require any intervention into glance role for sure | 15:33 |
noonedeadpunk | it could be something for ops repo, yes | 15:33 |
noonedeadpunk | but tbh, with this approach it's easier to have just one nfs server somewhere | 15:34 |
noonedeadpunk | even more simple | 15:34 |
spatel | also a good thing: glusterfs keeps the actual files on the volume, so they're easy to read and debug, while ceph chunks them up.. | 15:34 |
spatel | one NFS is single point of failure :( | 15:35 |
noonedeadpunk | there's `rbd map` which gives you block drive you can mount or do whatever you want with it | 15:36 |
spatel | I was just thinking to have that option but no big deal.. anyway i am deploying it my way to fulfill need.. | 15:36 |
spatel | i am deploying small openstack clouds where i don't need lots of images, so i was looking for something simple and easy that i don't need to worry about during the upgrade process (ceph is sometimes messy when it comes to upgrades) | 15:37 |
noonedeadpunk | so eventually, I think solution would be just to use `glance_nfs_client` to define glusterfs mount. It's just a variable that is passed to systemd_mount. | 15:38 |
spatel | while gluster is a single RPM binary and all done :) | 15:38 |
noonedeadpunk | we might want to rename it though, as currently it's confusing | 15:38 |
noonedeadpunk | btw, you can also use s3fs ;) | 15:38 |
noonedeadpunk | and store your 3 images on AWS | 15:38 |
noonedeadpunk | that would be slow though | 15:39 |
noonedeadpunk | oh, eventually, glance supports s3 as a storage as well | 15:39 |
spatel | hmm is that supported in OSA ? | 15:39 |
noonedeadpunk | I bet we merged a fix for that recently | 15:40 |
spatel | i am asking about the s3fs application? | 15:40 |
noonedeadpunk | https://review.opendev.org/c/openstack/openstack-ansible-os_glance/+/822870 | 15:40 |
noonedeadpunk | s3fs is supported by systemd_mount, yes | 15:40 |
noonedeadpunk | so you can leverage `glance_nfs_client` to get it mounted | 15:41 |
noonedeadpunk | But it's weird considering you can just use s3 directly | 15:41 |
spatel | hmm looks like i need to read up first :) if this is easy then i like it for small deployment requirements | 15:41 |
spatel | worth testing out in my lab | 15:41 |
noonedeadpunk | so we define this way for nfs https://opendev.org/openstack/openstack-ansible/src/branch/master/tests/roles/bootstrap-host/templates/user_variables_nfs.yml.j2#L3-L9 | 15:42 |
noonedeadpunk | but for s3 here's sample https://opendev.org/openstack/ansible-role-systemd_mount/src/branch/master/defaults/main.yml#L63-L67 | 15:42 |
noonedeadpunk | well, just respect variable name mapping https://opendev.org/openstack/openstack-ansible-os_glance/src/branch/master/tasks/glance_post_install.yml#L172-L176 | 15:43 |
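Putting the three links above together, a gluster mount through `glance_nfs_client` might look roughly like the sketch below. The key names follow the nfs sample linked above; the server, volume, and mount options are hypothetical, and (per the caveats in this discussion) the role is not guaranteed to install glusterfs client packages for you:

```yaml
glance_nfs_client:
  - server: "gluster01.example.com"        # hypothetical gluster server
    remote_path: "/glance-volume"          # hypothetical gluster volume name
    local_path: "/var/lib/glance/images"   # where glance expects its images
    type: "glusterfs"                      # passed through to systemd_mount
    options: "_netdev,auto"
```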
spatel | let me understand, this will keep my image in AWS? | 15:44 |
spatel | trying to understand what will be my backend storage ? | 15:44 |
noonedeadpunk | ah, yes, it won't work for s3fs in this shape :( | 15:44 |
noonedeadpunk | yes, that would keep image in aws | 15:44 |
noonedeadpunk | or any s3 storage you pick | 15:44 |
noonedeadpunk | for it to work https://opendev.org/openstack/openstack-ansible-os_glance/src/branch/master/tasks/glance_post_install.yml#L172-L176 needs to be patched | 15:45 |
spatel | i want to keep it local :( | 15:45 |
noonedeadpunk | as it would need a credentials option and another way of defining `what` | 15:45 |
spatel | no more external cloud dependency | 15:45 |
noonedeadpunk | but well, you want local s3? | 15:46 |
spatel | local s3 or anything which replicates to make my images available on all glance nodes.. | 15:46 |
noonedeadpunk | so why not use the local s3 that you want to have anyway for images as well? | 15:47 |
spatel | NFS is bad bc of single point of failure | 15:47 |
spatel | ceph is good but too many components and upgrades are a mess | 15:47 |
noonedeadpunk | well, nfs can be on gluster, but that's weird :) | 15:47 |
spatel | rsync is good but hacky | 15:47 |
spatel | lsync also good solution but not going to work in this case | 15:48 |
noonedeadpunk | I can't say ceph upgrade is a mess. It was like that 5-7 years ago, on Hammer (0.8) | 15:48 |
jrosser | images are cached on the compute node too? | 15:48 |
noonedeadpunk | Now it's quite straightforward imo - matter of package upgrade and service restart | 15:48 |
jrosser | maybe just simplify by saying that partially mitigates nfs server SPOF | 15:48 |
noonedeadpunk | yes, for sure they would be cached there... | 15:48 |
spatel | i did try it last time and it was an epic disaster for me (maybe my knowledge is limited) | 15:49 |
jrosser | so really if you are clever with the caching time, you can live with nfs being broken for a while | 15:49 |
noonedeadpunk | spatel: do you have cinder volumes anywhere? | 15:49 |
spatel | jrosser agreed if image is cached and NFS is down then i have time to back it up.. | 15:49 |
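One way to be "clever with the caching time" on the nova side is to stop computes from aging out cached base images. A hedged sketch using OSA-style config overrides; the variable name follows the usual `<service>_<file>_overrides` pattern and the values are illustrative, so verify the option names against your release:

```yaml
# user_variables.yml - keep nova's cached base images around longer so an
# nfs/glance outage is survivable (values are examples, not recommendations)
nova_nova_conf_overrides:
  image_cache:
    remove_unused_base_images: true
    remove_unused_original_minimum_age_seconds: 604800   # keep for a week
```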
spatel | no cinder | 15:50 |
noonedeadpunk | ok :( | 15:50 |
jrosser | this is the same problem as the repo server | 15:50 |
spatel | My use case is just CPU and memory.. we do audio processing and don't keep a single bit or byte on disk (so there is zero storage requirement in my cloud) | 15:50 |
noonedeadpunk | so well, I think what we need to do anyway is refactor the glance_nfs_client variable to allow the same set of options that are provided by systemd_mount | 15:51 |
noonedeadpunk | with that - you can set any mount point - whether it be gluster, s3, nfs - anything the role can handle | 15:52 |
spatel | i need small storage for my glance to keep 2 or 3 images for deploying distros, and we mostly don't deal with snapshots etc.. so glance is just for images | 15:52 |
noonedeadpunk | for gluster deployment - dunno, I'd say it's stuff for ops. Like create containers for it or smth, configure, etc | 15:53 |
spatel | currently i am installing glusterfs on infra node and mounting filesystem in glance | 15:53 |
noonedeadpunk | likely a patch to systemd_mount as well, if it doesn't support gluster now or requires some extra packages for that | 15:54 |
spatel | i was thinking why don't i just install glusterfs-server inside glance | 15:54 |
spatel | we have 2TB disk on infra so more than enough for keeping 4 distro images :) | 15:54 |
spatel | Let me try systemd_mount and see, otherwise i will keep my hacks running.. no big deal.. (just trying to avoid ceph, but if ceph is the best solution then i may adopt it) | 15:55 |
noonedeadpunk | um, when you have an image in raw format and nova ephemeral drives on ceph, vm creation is almost instant, as it doesn't require copying data from the glance container to the compute, converting the image format, etc. | 15:57 |
noonedeadpunk | Just a block device instantly created out of the image as a snapshot, ready to be used | 15:57 |
noonedeadpunk | well yes, when you have like 4 images it might be overkill as they just could be cached | 15:58 |
noonedeadpunk | and you still want to use local storage for computes | 15:58 |
spatel | +1 if using ceph for everything, but using it just for glance would be overkill for a small environment | 15:59 |
admin1 | spatel, ceph with EC .. | 16:06 |
spatel | EC? | 16:06 |
*** dviroel|lunch is now known as dviroel | 16:23 | |
noonedeadpunk | erasure coded pool | 16:40 |
noonedeadpunk | but I hope it's bad joke :D | 16:40 |
opendevreview | Merged openstack/openstack-ansible-os_tempest stable/victoria: Remove tempestconf centos-8 job https://review.opendev.org/c/openstack/openstack-ansible-os_tempest/+/826697 | 16:54 |
spatel | lol | 17:23 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible master: [docs] Use extlink for deploy guide references https://review.opendev.org/c/openstack/openstack-ansible/+/826923 | 17:24 |
noonedeadpunk | or well, I'm too noob in ceph to understand how to make it somehow reliable... | 17:28 |
spatel | Ceph is really good for very large environments but it's overhead for a 3 node deployment :( | 17:31 |
spatel | if you have a 30 node cluster then it does a very good job with performance and healing, but with 3 nodes you won't get performance that justifies it | 17:32 |
spatel | This patch should get merged ASAP, correct? - https://review.opendev.org/c/openstack/openstack-ansible/+/826782 | 17:33 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Used forked etcd role https://review.opendev.org/c/openstack/openstack-ansible/+/826881 | 18:01 |
opendevreview | Merged openstack/openstack-ansible master: Clarify major upgrade documentation for updating internal CA https://review.opendev.org/c/openstack/openstack-ansible/+/826782 | 18:15 |
opendevreview | Merged openstack/openstack-ansible master: Clarify the difference between generating and regenerating certificates https://review.opendev.org/c/openstack/openstack-ansible/+/826786 | 18:17 |
opendevreview | Jonathan Rosser proposed openstack/openstack-ansible stable/xena: Clarify major upgrade documentation for updating internal CA https://review.opendev.org/c/openstack/openstack-ansible/+/826844 | 18:17 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-rabbitmq_server master: Use cloudsmith repo for rabbit and erlang https://review.opendev.org/c/openstack/openstack-ansible-rabbitmq_server/+/826444 | 18:19 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-rabbitmq_server master: Allow different install methods for rabbit/erlang https://review.opendev.org/c/openstack/openstack-ansible-rabbitmq_server/+/826445 | 18:19 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-rabbitmq_server master: Update used RabbitMQ and Erlang https://review.opendev.org/c/openstack/openstack-ansible-rabbitmq_server/+/826446 | 18:20 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_zun master: Update Zun api-paste https://review.opendev.org/c/openstack/openstack-ansible-os_zun/+/822847 | 18:21 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible master: [docs] Use extlink for deploy guide references https://review.opendev.org/c/openstack/openstack-ansible/+/826923 | 18:22 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Used forked etcd role https://review.opendev.org/c/openstack/openstack-ansible/+/826881 | 18:22 |
noonedeadpunk | hm, gerrit feels broken after the upgrade with regards to the missing marks showing whether a patch is relevant, and the rebase menu when a patch is already on top of another... | 18:23 |
jrosser | i wonder if that's no longer a default-to-on feature | 18:24 |
noonedeadpunk | well, knowing gerrit's flexibility, I kind of doubt google moved that to some config... | 18:26 |
noonedeadpunk | but yeah. Worth bugging infra likely... | 18:26 |
jrosser | one for #opendev i guess | 18:26 |
noonedeadpunk | yep... but on Monday :) | 18:27 |
admin1 | 24.0.0 .. glance .. is using nfs .. mount is fine .. root and glance can touch/create files ... when i try to upload file, the api error is: Error in store configuration. Adding images to store is disabled.: glance_store.exceptions.StoreAddDisabled: Configuration for store failed. Adding images to this store is disabl | 18:38 |
admin1 | the config is like this: https://pastebin.com/raw/WtVve8m9 | 18:40 |
admin1 | default_backend = file and path is correct | 18:43 |
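For comparison, a minimal multi-store glance-api.conf fragment for a single file backend typically looks like the sketch below (the backend name is illustrative). A `StoreAddDisabled` error can indicate that the backend named by `default_backend` is missing from `enabled_backends` or lacks its own config section:

```ini
[DEFAULT]
enabled_backends = file:file

[glance_store]
default_backend = file

[file]
filesystem_store_datadir = /var/lib/glance/images/
```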
jrosser | the indentation there looks wrong | 18:45 |
jrosser | oh wait | 18:45 |
jrosser | no its ok | 18:45 |
admin1 | i am going to nuke the containers and re-deploy and test again | 18:48 |
opendevreview | Merged openstack/openstack-ansible-haproxy_server master: Adjust default configuration to support TLS v1.3 https://review.opendev.org/c/openstack/openstack-ansible-haproxy_server/+/823944 | 18:56 |
admin1 | worked in the 3rd try .. not sure what the issue was .. | 19:36 |
opendevreview | Jonathan Rosser proposed openstack/openstack-ansible-os_zun master: Update Zun api-paste https://review.opendev.org/c/openstack/openstack-ansible-os_zun/+/822847 | 20:33 |
opendevreview | Jonathan Rosser proposed openstack/openstack-ansible-os_zun master: Refactor use of include_vars https://review.opendev.org/c/openstack/openstack-ansible-os_zun/+/824313 | 20:36 |
opendevreview | Jonathan Rosser proposed openstack/openstack-ansible stable/wallaby: Remove left-over centos-8 job from project template https://review.opendev.org/c/openstack/openstack-ansible/+/826937 | 20:43 |
*** dviroel is now known as dviroel|brb | 20:46 | |
opendevreview | Merged openstack/openstack-ansible stable/xena: Clarify major upgrade documentation for updating internal CA https://review.opendev.org/c/openstack/openstack-ansible/+/826844 | 20:55 |
opendevreview | Merged openstack/openstack-ansible-os_glance master: Use common service setup tasks from a collection rather than in-role https://review.opendev.org/c/openstack/openstack-ansible-os_glance/+/824397 | 21:21 |
opendevreview | Jonathan Rosser proposed openstack/openstack-ansible-os_zun master: Use common service setup tasks from a collection rather than in-role https://review.opendev.org/c/openstack/openstack-ansible-os_zun/+/824372 | 21:44 |
opendevreview | Merged openstack/openstack-ansible-os_octavia master: Use common service setup tasks from a collection rather than in-role https://review.opendev.org/c/openstack/openstack-ansible-os_octavia/+/824379 | 22:24 |