*** CeeMac has quit IRC | 00:12 | |
*** priteau has quit IRC | 02:03 | |
*** rh-jelabarre has quit IRC | 02:21 | |
*** evrardjp has quit IRC | 02:33 | |
*** evrardjp has joined #openstack-ansible | 02:33 | |
openstackgerrit | Xinxin Shen proposed openstack/openstack-ansible-os_masakari master: setup.cfg: Replace dashes with underscores https://review.opendev.org/c/openstack/openstack-ansible-os_masakari/+/788399 | 03:19 |
*** miloa has joined #openstack-ansible | 03:37 | |
*** miloa has quit IRC | 03:59 | |
*** shyamb has joined #openstack-ansible | 05:32 | |
*** pto has joined #openstack-ansible | 06:30 | |
*** luksky has joined #openstack-ansible | 06:34 | |
*** gyee has quit IRC | 06:35 | |
*** gokhani has joined #openstack-ansible | 07:11 | |
*** andrewbonney has joined #openstack-ansible | 07:15 | |
gokhani | Hi folks, I deployed OSA (22.1.0) with ceph integration. The Ceph cluster and cinder integration work properly, but on the swift side I am getting 401 unauthorized errors. The radosgw and keystone connection seems broken. I added these rows in user_variables.yml >> http://paste.openstack.org/show/804817/ . This file shows the radosgw logs > | 07:18 |
gokhani | http://paste.openstack.org/show/804818/ Do you have any ideas about this problem? Maybe I am missing something | 07:18 |
* recyclehero finally sees some movement on the channel | 07:23 | |
*** shyamb has quit IRC | 07:24 | |
gokhani | and also it seems radosgw behind haproxy doesn't work properly. Some requests are failing > http://paste.openstack.org/show/804819/ | 07:26 |
*** shyamb has joined #openstack-ansible | 07:36 | |
*** rpittau|afk is now known as rpittau | 07:36 | |
noonedeadpunk | let me see | 07:36 |
noonedeadpunk | did it end up in proper ceph.conf? | 07:37 |
noonedeadpunk | Also I think you might be missing `rgw_frontends = civetweb port=$LOCAL_IP:$RGW_PORT` | 07:39 |
noonedeadpunk | fwiw, these are defaults https://opendev.org/openstack/openstack-ansible/src/branch/master/inventory/group_vars/ceph-rgw.yml so wondering why you decided to override them? | 07:40 |
noonedeadpunk | Can you share your actual ceph.conf? As I'd suggest that there are extras, since these are only overrides | 07:45 |
recyclehero | hi, my nfs backend was a raid array. there were some problems with the array and I had to use backups and create a new array. | 07:48 |
recyclehero | now some instances drop into grub rescue | 07:48 |
recyclehero | and the drives' owner and group are weird on the nfs backend | 07:49 |
recyclehero | -rw-r--r-- 1 64055 64055 16106127360 Apr 28 02:45 volume-501dd125-f8b8-4e6f-b142-15ada81c68ad | 07:49 |
noonedeadpunk | sorry, no idea how I can help here :( | 07:50 |
recyclehero | how can I enable logging for the lxc containers? I don't see their logs on the lxc host. | 07:52 |
recyclehero | the folder /var/log/lxc is there but its atime is from way back | 07:54 |
*** mathlin has joined #openstack-ansible | 07:55 | |
recyclehero | should the nfs share owner:group and mode be something specific? | 07:57 |
recyclehero | they are libvirt-qemu:libvirt-qemu on the compute host and 64055:64055 on the nfs backend | 07:59 |
recyclehero | is this normal? | 07:59 |
noonedeadpunk | recyclehero: yes, the last part is normal, since permissions should be meaningful on the hosts in the first place. But with that, all hosts should have the same uids and gids then | 08:05 |
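The uid part is worth spelling out: NFSv3 transmits numeric ids rather than names, so `libvirt-qemu` on the compute host and `64055` on the NFS server are the same identity only if the numeric uid matches everywhere. A quick sanity check (64055 and the export path are examples, not verified values from this deployment):

```
# on the compute host: what numeric uid does libvirt-qemu map to?
id -u libvirt-qemu

# on the NFS server: files carry the raw number, so "64055" in a listing
# is simply whichever user owns uid 64055 locally (possibly none)
ls -ln /path/to/export
```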
recyclehero | noonedeadpunk: now that the instances are shut off they are back to root:root 644 on the nfs host | 08:09 |
recyclehero | noonedeadpunk: do you know how to see lxc container logs? | 08:10 |
recyclehero | I want to see cinder logs | 08:10 |
*** ajg20 has quit IRC | 08:12 | |
noonedeadpunk | hm, that's a good question. I really thought they should be in /var/log/lxc, but I don't see them there... So I'm checking as well | 08:12 |
noonedeadpunk | I guess they're in journald as well... | 08:12 |
gokhani | noonedeadpunk, I found the error. It configures ceph.conf wrong on the rgw containers. It creates 2 sections in ceph.conf, like [client.rgw.test-infra1-ceph-rgw-container-26ac4587.rgw0] and [client.radosgw.test-infra1-ceph-rgw-container-26ac4587], and it adds the keystone variables under [client.radosgw.test-infra1-ceph-rgw-container-26ac4587]. But it needs | 08:13 |
gokhani | to add them under the [client.radosgw.test-infra1-ceph-rgw-container-26ac4587] | 08:13 |
gokhani | *But it needs to add them under the [client.rgw.test-infra1-ceph-rgw-container-26ac4587.rgw0] | 08:14 |
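To make the two sections concrete, here is a sketch of the ceph.conf layout gokhani is describing, using the section names from the log (option values elided):

```ini
# section matching the systemd rgw instance name; this is the one the
# running radosgw actually reads, so rgw_keystone_* options belong here
[client.rgw.test-infra1-ceph-rgw-container-26ac4587.rgw0]
# rgw_keystone_url = ...

# second section generated with a different prefix and no .rgw0 suffix;
# keystone options placed here are silently ignored by the daemon
[client.radosgw.test-infra1-ceph-rgw-container-26ac4587]
```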
noonedeadpunk | It actually depends on the systemd name of the service | 08:14 |
gokhani | I don't know if this is a ceph-ansible bug | 08:14 |
noonedeadpunk | in OSA we have exactly that override (with rgw0) as default one | 08:14 |
noonedeadpunk | so that's kind of why I asked you regarding need of the override | 08:15 |
recyclehero | noonedeadpunk: thanks, at least there are some logs in the lxc container itself with journalctl | 08:16 |
noonedeadpunk | so, um, did you need service logs inside the container, or about the lxc service itself? | 08:17 |
noonedeadpunk | i.e. your container does not start and you want to know why? | 08:18 |
noonedeadpunk | as all service logs inside the container are in journalctl | 08:18 |
noonedeadpunk | gokhani: https://opendev.org/openstack/openstack-ansible/src/branch/master/inventory/group_vars/ceph-rgw.yml#L3 | 08:18 |
noonedeadpunk | so I think it's expected thing from ceph-ansible side | 08:19 |
recyclehero | noonedeadpunk: I was expecting to see cinder_volume service logs on the lxc host located at /var/log/lxc/cinder_volume..... but maybe I was wrong, and maybe those logs are something else related to the container creation, since they are dated back to Nov 21. I think that's when I launched. | 08:21 |
noonedeadpunk | recyclehero: yeah, all service logs are in journald nowadays | 08:23 |
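Concretely, the way to get at e.g. the cinder-volume logs nowadays looks like this (the container and unit names are taken from this log; adjust them for your deployment):

```
# from the LXC host, attach to the container
lxc-attach -n infra1-cinder-volumes-container-abe2b792

# inside the container, service logs live in journald
journalctl -u cinder-volume --since today
journalctl -u cinder-volume -f    # follow live
```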
gokhani | noonedeadpunk, gotcha :( the reason for overriding ceph.conf was following this guide > https://docs.openstack.org/openstack-ansible/latest/user/ceph/swift.html | 08:23 |
noonedeadpunk | Would be great if you could suggest a patch to clean things up once you get things working | 08:24 |
recyclehero | I have this regarding nfs permissions in the nfs logs | 08:26 |
recyclehero | https://dpaste.com/6HJN4JUNS | 08:26 |
recyclehero | I don't know what my previous config was | 08:26 |
recyclehero | now it's root:root 644 | 08:26 |
gokhani | noonedeadpunk, ok, firstly I will test the up-to-date variables | 08:28 |
*** gokhani has quit IRC | 08:31 | |
*** gokhani has joined #openstack-ansible | 08:37 | |
*** fridtjof[m] has quit IRC | 08:41 | |
*** manti has quit IRC | 08:41 | |
*** shyamb has quit IRC | 08:48 | |
*** fridtjof[m] has joined #openstack-ansible | 08:52 | |
pto | I know it's a little OT, but has anyone here used rbd export or rbd export-diff? | 08:57 |
noonedeadpunk | I was only thinking about its usage for some scenarios, but never really did | 09:07 |
*** priteau has joined #openstack-ansible | 09:22 | |
*** manti has joined #openstack-ansible | 09:22 | |
noonedeadpunk | hm, what's wrong with journald log collection https://zuul.opendev.org/t/openstack/build/cd03de10a58848dfad282a8a1843c388/log/job-output.txt#26710 | 09:42 |
noonedeadpunk | um, `capture_output` is py38 only? | 09:43 |
*** tosky has joined #openstack-ansible | 09:45 | |
noonedeadpunk | 3.7+ | 09:46 |
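The incompatibility here: `subprocess.run(capture_output=True)` was only added in Python 3.7, so on a 3.6 interpreter it raises `TypeError: __init__() got an unexpected keyword argument`. The 3.6-safe spelling passes the pipes explicitly; a minimal sketch of the kind of change the 788465 patch makes (the helper name here is made up):

```python
import subprocess
import sys

def run_captured(cmd):
    # capture_output=True is py3.7+; stdout/stderr=PIPE works on 3.6 as well
    return subprocess.run(cmd, stdout=subprocess.PIPE, stderr=subprocess.PIPE)

result = run_captured([sys.executable, "-c", "print('hello')"])
print(result.stdout.decode().strip())  # → hello
```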
openstackgerrit | Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Make journal_dump py3.6 compatable https://review.opendev.org/c/openstack/openstack-ansible/+/788465 | 09:55 |
openstackgerrit | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_manila master: [goal] Deprecate the JSON formatted policy file https://review.opendev.org/c/openstack/openstack-ansible-os_manila/+/782244 | 09:56 |
pto | noonedeadpunk: I need to migrate an old ceph cluster to a new one, and I have been playing with rbd export and rbd export-diff - the performance is horribly slow | 10:10 |
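For context, the export/import pattern under discussion usually looks like the following (pool, image, and host names are placeholders). `export-diff` ships only the blocks changed since a snapshot, but both paths stream through a single pipe, which is part of why they can feel slow:

```
# one-shot full copy of an image between clusters
rbd export images/vm-disk - | ssh new-cluster rbd import - images/vm-disk

# incremental: snapshot, then ship only the delta
# (import-diff needs the base image/snapshot already present on the target)
rbd snap create images/vm-disk@base
rbd export-diff images/vm-disk@base - | ssh new-cluster rbd import-diff - images/vm-disk
```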
openstackgerrit | Gökhan proposed openstack/openstack-ansible master: Update ceph.conf for OpenStack-RadosGW integration https://review.opendev.org/c/openstack/openstack-ansible/+/788470 | 10:44 |
*** pcaruana has quit IRC | 10:52 | |
*** pcaruana has joined #openstack-ansible | 11:02 | |
admin0 | i could not find the rocky (16) -> stein (18) upgrade notes .. if anyone has them, can you please pass them .. so that i can save those | 11:23 |
admin0 | gokhani, when you have it working, please pass me the notes as well .. i have ceph with rgw and osa .. still unclear on how exactly to add the endpoint in keystone for it to work like swift | 11:24 |
noonedeadpunk | yeah, in docs they're broken for some reason :/ | 11:26 |
admin0 | there was also a etherpad link i think | 11:27 |
gokhani | admin0, yes it is working now. OSA itself adds endpoints in https://opendev.org/openstack/openstack-ansible/src/branch/master/playbooks/ceph-rgw-install.yml#L16 | 11:35 |
admin0 | gokhani, did you use the osa+ceph playbook ( single playbook ) or osa separately and ceph-ansible separately | 11:36 |
gokhani | admin0, I used osa+ceph, not separately | 11:36 |
gokhani | admin0 there is a bug in https://docs.openstack.org/openstack-ansible/latest/user/ceph/swift.html. If you want to use the S3 api, also change "client.rgw.{{ hostvars[inventory_hostname]['ansible_hostname'] }}" to "client.rgw.{{ hostvars[inventory_hostname]['ansible_hostname'] }}.rgw0" | 11:42 |
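Applying that fix, the override from the docs page would look roughly like this in user_variables.yml. The option set shown is illustrative, not the full list from the guide, and `keystone_service_adminuri` is an assumed OSA variable name:

```yaml
ceph_conf_overrides_rgw:
  # note the .rgw0 suffix so the section matches the systemd instance name
  "client.rgw.{{ hostvars[inventory_hostname]['ansible_hostname'] }}.rgw0":
    rgw_keystone_api_version: 3
    rgw_keystone_url: "{{ keystone_service_adminuri }}"
```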
admin0 | gokhani, i have a few clusters with osa separate and ceph separate .. if you can share your openstack service list and openstack endpoint list ( only relevant to s3/swift/ceph ) .. that would help | 11:43 |
admin0 | i would know how to add it to keystone | 11:43 |
gokhani | admin0 , http://paste.openstack.org/show/804824/ | 11:46 |
admin0 | gokhani, thank you .. with this, i can try to add it and see how it goes | 11:47 |
admin0 | noonedeadpunk, if you have the xenial -> bionic notes url at hand, can you pass it to me | 11:48 |
*** rh-jelabarre has joined #openstack-ansible | 11:49 | |
gokhani | admin0, ok, good luck :) If you need it I can also share my ceph.conf. | 11:50 |
noonedeadpunk | ah, you meant X->B upgrade | 11:56 |
noonedeadpunk | https://docs.openstack.org/openstack-ansible/rocky/admin/upgrades/distribution-upgrades.html | 11:57 |
noonedeadpunk | But I think I dropped etherpad from favorites once this got merged | 11:58 |
noonedeadpunk | https://etherpad.openstack.org/p/osa-rocky-bionic-upgrade | 11:59 |
noonedeadpunk | found it | 11:59 |
openstackgerrit | Gökhan proposed openstack/openstack-ansible master: Update ceph.conf for OpenStack-RadosGW integration https://review.opendev.org/c/openstack/openstack-ansible/+/788470 | 12:18 |
gokhani | noonedeadpunk ^^ these are for OpenStack - RadosGW integration. these confs are working. | 12:21 |
noonedeadpunk | gokhani: can we leave `_`? I'm 99.9% sure that it's super fine having `rgw_keystone_url` | 12:22 |
noonedeadpunk | (and etc) | 12:23 |
noonedeadpunk | or can you check this out? | 12:24 |
gokhani | noonedeadpunk, yes this definition (with '_') is also working. I only updated them according to the ceph docs https://docs.ceph.com/en/latest/radosgw/keystone/ | 12:25 |
admin0 | c3 = primary host ( vip is there ) ; c3 repo = /etc/lsyncd/lsyncd.conf.lua .. the step says "disable non primary repo all-back" -- does that mean I can reinstall c1 and c2 ( as they're not primary ) and then, because the repo is on c3, i need to disable the c1 and c2 repo containers ? | 12:25 |
*** gokhani has quit IRC | 12:28 | |
*** gokhani has joined #openstack-ansible | 12:28 | |
noonedeadpunk | gouthamr: would you mind editing your patch a bit? | 12:29 |
gouthamr | noonedeadpunk: hey! Which one? | 12:30 |
noonedeadpunk | gouthamr: sorry, meant gokhani, but he just left :( | 12:31 |
noonedeadpunk | false ping | 12:31 |
gouthamr | ah, np | 12:31 |
*** gokhani has quit IRC | 12:31 | |
*** gokhani has joined #openstack-ansible | 12:32 | |
gokhani | noonedeadpunk , yes I can edit | 12:32 |
noonedeadpunk | just posted comment | 12:36 |
openstackgerrit | Gökhan proposed openstack/openstack-ansible master: Update rgw client definition for OpenStack-RadosGW integration https://review.opendev.org/c/openstack/openstack-ansible/+/788470 | 12:38 |
noonedeadpunk | gokhani: can you check comments in the gerrit?:) | 12:39 |
gokhani | noonedeadpunk , I changed it to the previous _ :( I now see your comments | 12:39 |
noonedeadpunk | you can leave _ - whatever | 12:39 |
mgariepy | anyone knows if the ephemeral disk filename is supposed to be stored in the DB? | 12:40 |
noonedeadpunk | I just realized it might be a good idea to just include these group_vars | 12:40 |
noonedeadpunk | I think it's not? | 12:41 |
mgariepy | i have an old cloud and for some reason, out of 120 vms, 4 have the ephemeral disk named `disk.local` instead of `disk.eph0` | 12:41 |
noonedeadpunk | But it's supposed to be stored in a file... | 12:42 |
mgariepy | yes | 12:42 |
mgariepy | i know but the filename is kinda just weird. | 12:42 |
mgariepy | i don't get why. | 12:42 |
mgariepy | same filetype and all. | 12:42 |
noonedeadpunk | ah, it's not really ephemeral I guess | 12:44 |
mgariepy | yes it's a qcow file on the compute. | 12:44 |
noonedeadpunk | I think it's just not volume based instances? | 12:44 |
mgariepy | yes | 12:45 |
mgariepy | using local raid0 drives.. | 12:45 |
noonedeadpunk | I mean it's kind of expected naming by nova | 12:46 |
noonedeadpunk | https://opendev.org/openstack/nova/src/branch/master/doc/source/reference/block-device-structs.rst#libvirt-driver-specific-bdm-data-structures | 12:46 |
noonedeadpunk | `The flavor-defined ephemeral disk` will be `disk.local` | 12:46 |
openstackgerrit | Gökhan proposed openstack/openstack-ansible master: Update rgw client definition for OpenStack-RadosGW integration https://review.opendev.org/c/openstack/openstack-ansible/+/788470 | 12:47 |
noonedeadpunk | btw `disk_info` is not really reliable and I don't think it has a real effect on anything | 12:47 |
mgariepy | well i am wondering, when i migrate the vms with the block devices, if it will break :D | 12:48 |
mgariepy | all other vms with a similar setup do have disk.eph0, not disk.local :/ | 12:48 |
noonedeadpunk | I think we have most instances with just `disk` and super rarely with disk.eph0 | 12:49 |
mgariepy | yeah the vda/root disk is disk. | 12:49 |
mgariepy | but i do have flavors with an ephemeral disk to be used as local scratch space (or more like persistent storage until one of the multiple drives fails) | 12:50 |
*** johanssone has quit IRC | 12:51 | |
mgariepy | well. it's supposed to be ephemeral anyway. so. let's see how it goes .. | 12:52 |
noonedeadpunk | yeah sorry, dunno.... | 12:53 |
mgariepy | well it's only 4 vms. | 12:53 |
mgariepy | and most likely only one.. if it fails badly i'll dig a bit more .. | 12:53 |
*** johanssone has joined #openstack-ansible | 12:54 | |
jrosser | admin0: the OSA playbooks should add the radosgw stuff to keystone even for external ceph integration | 12:55 |
openstackgerrit | Gökhan proposed openstack/openstack-ansible master: Update rgw client definition for OpenStack-RadosGW integration https://review.opendev.org/c/openstack/openstack-ansible/+/788470 | 12:56 |
*** spatel_ has joined #openstack-ansible | 12:56 | |
*** spatel_ is now known as spatel | 12:56 | |
*** pto has quit IRC | 13:05 | |
*** macz_ has joined #openstack-ansible | 13:05 | |
openstackgerrit | Gökhan proposed openstack/openstack-ansible master: Update rgw client definition for OpenStack-RadosGW integration https://review.opendev.org/c/openstack/openstack-ansible/+/788470 | 13:11 |
openstackgerrit | Gökhan proposed openstack/openstack-ansible master: Update rgw client definition for OpenStack-RadosGW integration https://review.opendev.org/c/openstack/openstack-ansible/+/788470 | 13:14 |
*** johanssone has quit IRC | 13:21 | |
*** pto has joined #openstack-ansible | 13:26 | |
*** johanssone has joined #openstack-ansible | 13:28 | |
*** pto has quit IRC | 13:30 | |
*** LowKey has quit IRC | 13:44 | |
*** LowKey[A] has joined #openstack-ansible | 13:45 | |
openstackgerrit | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_neutron master: Use neutron_conf_dir for absent policy removal https://review.opendev.org/c/openstack/openstack-ansible-os_neutron/+/788499 | 14:17 |
*** dave-mccowan has joined #openstack-ansible | 14:36 | |
dmsimard | noonedeadpunk, sshnaidm: you both mentioned roles in the openstack collection in the PTG discussions; out of curiosity I was wondering what kind of roles you had in mind? | 14:43 |
dmsimard | it's worth considering that since the roles will be in the collection, they would also ship with the ansible community package (ansible on pypi), so it could open opportunities | 14:44 |
sshnaidm | dmsimard, some useful roles that use modules and make them easier to consume; they will also serve as module usage examples | 14:44 |
sshnaidm | dmsimard, if you know "terraform modules", something like that | 14:44 |
dmsimard | so say, a role to create a server for example ? taking care of creating a flavor, a ssh key, the instance, security groups and stuff ? | 14:44 |
sshnaidm | dmsimard, yep, most common scenarios, easy to configure | 14:45 |
dmsimard | makes sense, thanks | 14:45 |
openstackgerrit | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_neutron master: DNM Change task ordering to perform smooth upgrades https://review.opendev.org/c/openstack/openstack-ansible-os_neutron/+/788501 | 14:49 |
noonedeadpunk | jrosser: I feel like we need something like that for all roles ^ | 14:54 |
noonedeadpunk | because right now we symlink empty directory at the beginning of the play in pre_tasks | 14:56 |
noonedeadpunk | so all policy overrides and rootwrap is kind of broken till service restart | 14:56 |
noonedeadpunk | not 100% sure about the solution yet... | 14:58 |
* recyclehero is back from work | 15:09 |
recyclehero | that's what happened between my instances working and some of them now booting into grub rescue: | 15:10 |
recyclehero | damaged raid array on the nfs share which holds the instance volumes - copied volumes to an external hdd - created a new array - did some filesystem tuning on the raid array (stripe-width) - copied back from the external hdd | 15:12 |
*** gokhani has quit IRC | 15:29 | |
openstackgerrit | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_neutron master: DNM Change task ordering to perform smooth upgrades https://review.opendev.org/c/openstack/openstack-ansible-os_neutron/+/788501 | 15:30 |
*** d34dh0r53 has quit IRC | 15:48 | |
*** d34dh0r53 has joined #openstack-ansible | 15:59 | |
recyclehero | I am seeing this repeated in the cinder-volume container logs | 16:08 |
recyclehero | Apr 28 20:37:52 infra1-cinder-volumes-container-abe2b792 sudo[5307]: cinder : TTY=unknown ; PWD=/ ; USER=root ; COMMAND=/openstack/venvs/cinder-21.1.0/bin/cinder-rootwrap /etc/cinder/rootwrap.conf du --bytes -s /var/lib/cinder/mnt/39ae8d6d2110507a51bef6f9f4c6f5ab | 16:08 |
jrosser | recyclehero: that doesn't necessarily look bad - it's checking the free space? | 16:16 |
recyclehero | jrosser: so 39a...5ab is the name openstack gave my nfs share | 16:17 |
recyclehero | ? | 16:17 |
jrosser | i expect so | 16:17 |
jrosser | as a first guess that's the ID of the volume | 16:18 |
recyclehero | jrosser: its a dir and contains all the volumes | 16:19 |
jrosser | ok, cool :) | 16:19 |
jrosser | i feel i'm slightly missing the point here somewhere - the log line you pasted was checking the free space periodically, i'm thinking that this is normal | 16:20 |
recyclehero | the big point is some of my valuable instances won't boot :(( | 16:22 |
recyclehero | I was able to create a volume in the nfs share | 16:22 |
jrosser | oh sorry, you asked about a cinder log line and i answered :/ | 16:22 |
*** d34dh0r53 has quit IRC | 16:22 | |
recyclehero | but when I wanted to launch an instance with that volume I got this | 16:23 |
recyclehero | Apr 28 20:50:44 infra1-cinder-volumes-container-abe2b792 cinder-volume[390]: 2021-04-28 20:50:44.486 390 WARNING py.warnings [req-63ace45e-e16f-4fb9-a11b-d999f2b0886e cc7d1a7f201f4b01b6a06e4923bd0805 ebe4a735e8d847f9ba37518db854fd1f - default default] /openstack/venvs/cinder-21.1.0/lib/python3.7/site-packages/sqlalchemy/orm/evaluator.py:99: SAWarning: Evaluating non-mapped column expression | 16:23 |
recyclehero | 'updated_at' onto ORM instances; this is a deprecated use case. Please make use of the actual mapped columns in ORM-evaluated UPDATE / DELETE expressions. | 16:23 |
recyclehero | "UPDATE / DELETE expressions." % clause | 16:23 |
recyclehero | is it okay to paste a few log lines here? | 16:23 |
jrosser | paste.openstack.org really | 16:23 |
recyclehero | sure | 16:24 |
recyclehero | http://paste.openstack.org/show/804838/ | 16:27 |
recyclehero | i bet this is familiar to some of you. what's wrong with my volumes? | 16:27 |
*** d34dh0r53 has joined #openstack-ansible | 16:30 | |
jrosser | not sure really, but you might see if things look consistent between openstack server show <server_id> and openstack volume show <volume_id> | 16:35 |
jrosser | like if there are volumes apparently attached to your server but they don't show up on the cinder side, or vice versa | 16:35 |
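A quick consistency check along those lines, with the IDs as placeholders (field names per the openstack CLI of this era; verify against your client version):

```
# does nova think a volume is attached?
openstack server show <server_id> -c volumes_attached

# does cinder agree about the attachment and its status?
openstack volume show <volume_id> -c attachments -c status
```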
*** rpittau is now known as rpittau|afk | 16:39 | |
admin0 | so while redoing a controller from xenial -> bionic, setup hosts and infra went OK .. except ceph-mon is not able to join the cluster .. this is the error: https://gist.githubusercontent.com/a1git/1a3c883ec23bd9c2904333843ca7b80c/raw/2ef87a183bdcb0fe7639d7ccd2a063c54b0f54b2/gistfile1.txt .. has anyone seen this before and knows of it ? | 16:49 |
admin0 | https://etherpad.opendev.org/p/osa-rocky-bionic-upgrade mentions nova failing due to newer packages | 16:50 |
openstackgerrit | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_neutron master: Change task ordering to perform smooth upgrades https://review.opendev.org/c/openstack/openstack-ansible-os_neutron/+/788501 | 17:00 |
*** gyee has joined #openstack-ansible | 17:39 | |
admin0 | 2021-04-28 17:23:21.687538 7f68c16bb700 -1 mon.c2-ceph-mon-container-79f39d53@0(probing) e0 handle_probe missing features, have 4611087853746454523, required 0, missing 0 is the error in the new container .. the only reference i could find was a redhat one, and the solution said to use the same ceph package version | 17:40 |
admin0 | so right now, c1 and c3 are in xenial .. c2 in bionic | 17:40 |
*** andrewbonney has quit IRC | 18:00 | |
jrosser | admin0: have you done basic connectivity checks from the new to old mon? | 18:06 |
jrosser | "required 0" is very suspicious | 18:06 |
openstackgerrit | Merged openstack/openstack-ansible-os_neutron master: Updated from OpenStack Ansible Tests https://review.opendev.org/c/openstack/openstack-ansible-os_neutron/+/786855 | 18:07 |
openstackgerrit | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_neutron master: Change task ordering to perform smooth upgrades https://review.opendev.org/c/openstack/openstack-ansible-os_neutron/+/788501 | 19:16 |
*** dave-mccowan has quit IRC | 20:22 | |
*** dave-mccowan has joined #openstack-ansible | 20:26 | |
*** spatel has quit IRC | 21:43 | |
*** rh-jelabarre has quit IRC | 21:45 | |
*** macz_ has quit IRC | 22:42 | |
*** tosky has quit IRC | 22:48 | |
*** luksky has quit IRC | 22:56 | |
*** kleini has quit IRC | 23:27 |
Generated by irclog2html.py 2.17.2 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!