*** tosky has quit IRC | 00:01 | |
*** rfolco has joined #openstack-ansible | 00:38 | |
*** jamesdenton has joined #openstack-ansible | 01:02 | |
*** rfolco has quit IRC | 01:20 | |
*** kukacz has quit IRC | 01:24 | |
*** gyee has quit IRC | 01:24 | |
*** evrardjp has quit IRC | 01:24 | |
*** CeeMac has quit IRC | 01:24 | |
*** mmercer has quit IRC | 01:24 | |
*** tinwood has quit IRC | 01:24 | |
*** nicolasbock has quit IRC | 01:24 | |
*** sri_ has quit IRC | 01:24 | |
*** alanmeadows has quit IRC | 01:24 | |
*** Open10K8S has quit IRC | 01:24 | |
*** sc has quit IRC | 01:24 | |
*** jroll has quit IRC | 01:24 | |
*** kukacz has joined #openstack-ansible | 01:30 | |
*** gyee has joined #openstack-ansible | 01:30 | |
*** evrardjp has joined #openstack-ansible | 01:30 | |
*** CeeMac has joined #openstack-ansible | 01:30 | |
*** mmercer has joined #openstack-ansible | 01:30 | |
*** tinwood has joined #openstack-ansible | 01:30 | |
*** nicolasbock has joined #openstack-ansible | 01:30 | |
*** sri_ has joined #openstack-ansible | 01:30 | |
*** alanmeadows has joined #openstack-ansible | 01:30 | |
*** Open10K8S has joined #openstack-ansible | 01:30 | |
*** sc has joined #openstack-ansible | 01:30 | |
*** jroll has joined #openstack-ansible | 01:30 | |
*** dave-mccowan has joined #openstack-ansible | 01:49 | |
*** waxfire has quit IRC | 02:03 | |
*** waxfire has joined #openstack-ansible | 02:03 | |
*** macz_ has quit IRC | 02:05 | |
*** macz_ has joined #openstack-ansible | 02:05 | |
*** tinwood has quit IRC | 02:08 | |
*** tinwood has joined #openstack-ansible | 02:11 | |
*** gyee has quit IRC | 03:11 | |
*** klamath_atx has joined #openstack-ansible | 04:06 | |
*** klamath_atx has quit IRC | 04:12 | |
*** klamath_atx has joined #openstack-ansible | 04:13 | |
*** macz_ has quit IRC | 04:28 | |
*** kukacz has quit IRC | 05:10 | |
*** evrardjp has quit IRC | 05:33 | |
*** evrardjp has joined #openstack-ansible | 05:33 | |
*** kukacz has joined #openstack-ansible | 05:35 | |
*** shyamb has joined #openstack-ansible | 05:41 | |
*** dasp has quit IRC | 06:30 | |
*** dasp has joined #openstack-ansible | 06:47 | |
*** shyamb has quit IRC | 06:52 | |
openstackgerrit | Merged openstack/openstack-ansible-os_glance stable/ussuri: Trigger uwsgi restart https://review.opendev.org/c/openstack/openstack-ansible-os_glance/+/767144 | 07:52 |
*** andrewbonney has joined #openstack-ansible | 08:11 | |
openstackgerrit | Merged openstack/openstack-ansible-os_glance stable/train: Trigger uwsgi restart https://review.opendev.org/c/openstack/openstack-ansible-os_glance/+/767145 | 08:14 |
openstackgerrit | Andrew Bonney proposed openstack/openstack-ansible master: Disable repeatedly failing zun tempest test https://review.opendev.org/c/openstack/openstack-ansible/+/767469 | 08:18 |
*** maharg101 has joined #openstack-ansible | 08:26 | |
*** maharg101 has quit IRC | 08:31 | |
*** jawad_axd has joined #openstack-ansible | 08:42 | |
jrosser | debian memcached<>keystone trouble again https://zuul.opendev.org/t/openstack/build/0869d255089f41dba7c8cef7ff8cd26c/log/logs/host/keystone-wsgi-public.service.journal-00-29-15.log.txt#20394-20450 | 08:43 |
openstackgerrit | Andrew Bonney proposed openstack/openstack-ansible-os_zun master: Update zun role to match current requirements https://review.opendev.org/c/openstack/openstack-ansible-os_zun/+/763141 | 08:44 |
noonedeadpunk | I can't recall the previous time tbh... | 08:45 |
*** jbadiapa has joined #openstack-ansible | 08:48 | |
openstackgerrit | Dmitriy Rabotyagov proposed openstack/openstack-ansible stable/ussuri: Bump SHAs for stable/ussuri https://review.opendev.org/c/openstack/openstack-ansible/+/766860 | 08:49 |
openstackgerrit | Dmitriy Rabotyagov proposed openstack/openstack-ansible stable/train: Bump SHAs for stable/train https://review.opendev.org/c/openstack/openstack-ansible/+/766859 | 08:50 |
openstackgerrit | Dmitriy Rabotyagov proposed openstack/openstack-ansible stable/ussuri: Bump SHAs for stable/ussuri https://review.opendev.org/c/openstack/openstack-ansible/+/766860 | 08:51 |
*** maharg101 has joined #openstack-ansible | 08:54 | |
*** tosky has joined #openstack-ansible | 09:12 | |
*** shyamb has joined #openstack-ansible | 09:17 | |
admin0 | morning | 09:25 |
pto | morning | 10:09 |
*** shyamb has quit IRC | 10:11 | |
pto | jrosser: Are you online? | 10:12 |
pto | It doesn't look like that part is being run https://github.com/openstack/openstack-ansible/blob/ba8b6af1740aa98ec930178a206f1ac248b026fc/playbooks/os-keystone-install.yml#L154 | 10:16 |
pto | The tasks_from in the role includes, seems to be discarded | 10:17 |
*** shyamb has joined #openstack-ansible | 10:18 | |
*** jpward has quit IRC | 10:20 | |
jrosser | pto: hello | 10:36 |
pto | jrosser: I am just testing the idp fix, and its not working | 10:36 |
jrosser | pto: can you paste an example? | 10:37 |
pto | jrosser: Sure. I will run again and pipe to a file. The tasks_from part is ignored and the os_keystone role is run again without the IDP part | 10:38 |
jrosser | unfortunately we have no means to test this in CI so i was waiting for feedback on the patch | 10:38 |
pto | jrosser: Thats why i am testing it :-) | 10:38 |
jrosser | oh maybe i see whats wrong | 10:39 |
jrosser | could you take the '.yml' off the end of the tasks_from line? | 10:40 |
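jrosser's suggestion above amounts to something like this in the playbook. A sketch only: the task-file name comes from the os-keystone-install.yml playbook linked earlier, and whether the `.yml` suffix matters depends on the Ansible version in use.

```yaml
# Sketch of the include being debugged: with include_role, tasks_from
# names a file in the role's tasks/ directory; the suggestion in the
# channel is to drop the ".yml" extension from it.
- name: Run keystone federation SP/IDP setup
  include_role:
    name: os_keystone
    tasks_from: main_keystone_federation_sp_idp_setup
```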
pto | Sure. Testing now | 10:40 |
*** kukacz has quit IRC | 10:43 | |
*** shyamb has quit IRC | 10:44 | |
*** tosky has quit IRC | 10:47 | |
*** tosky has joined #openstack-ansible | 10:47 | |
openstackgerrit | Marcus Klein proposed openstack/openstack-ansible-os_octavia master: Do not set amp_ssh_access_allowed configuration option any more. https://review.opendev.org/c/openstack/openstack-ansible-os_octavia/+/767511 | 10:47 |
kleini | jrosser, noonedeadpunk: ^^^ this is a small improvement to fix a configuration warning in Octavia. | 10:48 |
pto | jrosser: https://pastebin.com/DnEc86HR | 10:49 |
pto | jrosser: Tasks are still skipped | 10:49 |
jrosser | pto: sorry i have meetings for a while | 10:50 |
kleini | the other warning is about amp_image_id. when looking at the role, I would just remove octavia_amp_image_id and its amp_image_id part in the config template, but I am not sure what the best approach would be in this regard | 10:51 |
pto | jrosser: No worries. I will open a bug and propose a fix | 10:51 |
*** rpittau|afk is now known as rpittau | 10:59 | |
noonedeadpunk | kleini: and it should be probably backported as well back to train... | 11:00 |
kleini | noonedeadpunk: cherry pick in Gerrit shows conflicts. what is the better way to solve these conflicts? locally cherry-picking to train and ussuri and pushing new reviews or accepting the conflicts and solve them by downloading the review? | 11:07 |
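The "locally cherry-pick and resolve" option kleini mentions can be sketched end to end. Repo, branch and file names below are made up for illustration; the conflict arises because both branches touched the same line, which is the usual reason Gerrit's cherry-pick button reports conflicts.

```shell
set -e
cd "$(mktemp -d)"
git init -q repo && cd repo
git config user.email demo@example.com
git config user.name demo

# common ancestor
printf 'amp_ssh_access_allowed = true\n' > octavia.conf
git add octavia.conf && git commit -qm "initial"

# the backport target drifted on the same line
git checkout -qb stable-train
printf 'amp_ssh_access_allowed = true  # train-only tweak\n' > octavia.conf
git commit -qam "train-only change on the same line"

# the fix lands on the original branch
git checkout -q -
sed -i 's/true/false/' octavia.conf
git commit -qam "Do not set amp_ssh_access_allowed any more"
fix=$(git rev-parse HEAD)

# local backport: cherry-pick stops on the conflict, we resolve, continue
git checkout -q stable-train
if ! git cherry-pick -x "$fix"; then
    printf 'amp_ssh_access_allowed = false\n' > octavia.conf  # resolve by hand
    git add octavia.conf
    GIT_EDITOR=true git cherry-pick --continue
fi
# then push the result for review, e.g.: git review stable/train (not run here)
```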
kleini | noonedeadpunk: what do you think about https://docs.openstack.org/octavia/ussuri/configuration/configref.html#controller_worker.amp_image_id | 11:07 |
openstackgerrit | PerToft proposed openstack/openstack-ansible master: The current version does not include the os_keystone role correct, as it will run the role again, ignoring the tasks_from: main_keystone_federation_sp_idp_setup.yml part. This fix has been tested and now it corectly configures the SP/IDP config. Paste of a test: https://pastebin.com/DnEc86HR https://review.opendev.org/c/openstack/openstack-ansible/+/767513 | 11:08 |
*** kukacz has joined #openstack-ansible | 11:10 | |
openstackgerrit | Marcus Klein proposed openstack/openstack-ansible-os_octavia stable/ussuri: Do not set amp_ssh_access_allowed configuration option any more. https://review.opendev.org/c/openstack/openstack-ansible-os_octavia/+/767494 | 11:20 |
openstackgerrit | Marcus Klein proposed openstack/openstack-ansible-os_octavia stable/train: Do not set amp_ssh_access_allowed configuration option any more. https://review.opendev.org/c/openstack/openstack-ansible-os_octavia/+/767495 | 11:20 |
openstackgerrit | Marcus Klein proposed openstack/openstack-ansible-os_octavia stable/train: Do not set amp_ssh_access_allowed configuration option any more. https://review.opendev.org/c/openstack/openstack-ansible-os_octavia/+/767495 | 11:29 |
openstackgerrit | Marcus Klein proposed openstack/openstack-ansible-os_octavia stable/ussuri: Do not set amp_ssh_access_allowed configuration option any more. https://review.opendev.org/c/openstack/openstack-ansible-os_octavia/+/767494 | 11:30 |
*** rohit02 has joined #openstack-ansible | 11:33 | |
openstackgerrit | Marcus Klein proposed openstack/openstack-ansible-os_octavia master: Remove octavia_amp_image_id as it is deprecated. https://review.opendev.org/c/openstack/openstack-ansible-os_octavia/+/767516 | 11:47 |
openstackgerrit | Jonathan Rosser proposed openstack/openstack-ansible master: Fix keystone IDP setup https://review.opendev.org/c/openstack/openstack-ansible/+/767513 | 11:58 |
jrosser | pto: thanks for the patch & testing, i tidied up the commit message formatting ^ | 11:58 |
pto | Do you know how i can change the domain name? It doesn't look pretty with a uuid in horizon: https://pasteboard.co/JFkWqgo.png | 12:02 |
*** rfolco has joined #openstack-ansible | 12:16 | |
openstackgerrit | Jonathan Rosser proposed openstack/openstack-ansible-os_keystone master: Add no_log to LDAP domain config https://review.opendev.org/c/openstack/openstack-ansible-os_keystone/+/767525 | 12:25 |
pto | No, the trust_idp_list name appears (WAYF) | 12:39 |
*** macz_ has joined #openstack-ansible | 12:59 | |
*** macz_ has quit IRC | 13:04 | |
*** spatel has joined #openstack-ansible | 13:09 | |
noonedeadpunk | kleini: I'd drop amp_image_id with the same patch tbh | 13:10 |
noonedeadpunk | but it needs reno I guess | 13:10 |
noonedeadpunk | as in Victoria these options are not valid | 13:11 |
*** spatel has quit IRC | 13:15 | |
*** spatel has joined #openstack-ansible | 13:47 | |
kleini | noonedeadpunk: okay, so I add my second review to the first one, right? is that correct with the file for the release notes? | 14:01 |
noonedeadpunk | I think you can just amend first commit and do git review | 14:02 |
openstackgerrit | Marcus Klein proposed openstack/openstack-ansible-os_octavia master: Omit amp_ssh_access_allowed and remove amp_image_id options. https://review.opendev.org/c/openstack/openstack-ansible-os_octavia/+/767511 | 14:12 |
kleini | noonedeadpunk: I know how to work with Git and Gerrit. But I am too lazy to read all the guides regarding documentation and so on. | 14:18 |
kleini | oh, there is a tool named reno for release notes | 14:22 |
kleini | I love my Manjaro providing python-reno from AUR | 14:23 |
spatel | Ubuntu question: I have a bunch of these loop devices on latest ubuntu 20.04 - /dev/loop1 72M 72M 0 100% /snap/lxd/16099 | 14:28 |
openstackgerrit | Marcus Klein proposed openstack/openstack-ansible-os_octavia master: Omit amp_ssh_access_allowed and remove amp_image_id options. https://review.opendev.org/c/openstack/openstack-ansible-os_octavia/+/767511 | 14:28 |
spatel | what are these? | 14:28 |
spatel | do i need those LXC ? | 14:28 |
kleini | they are from snap package manager | 14:29 |
spatel | do you guys keep them in production openstack? | 14:29 |
kleini | lxd obviously is only provided in Ubuntu through a snap package | 14:29 |
kleini | I hate those loopback devices from snap package manager and uninstall snapd immediately if I see it somewhere | 14:30 |
spatel | but OSA also install LXC so do we need snap LXC? | 14:30 |
noonedeadpunk | ah, yes, sorry for not saying that it's just installing reno from pypi or whatever and running `reno new` | 14:30 |
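For reference, `reno new <slug>` just drops a YAML file under releasenotes/notes/ to be filled in. A hedged sketch of what such a note could look like; the section name and wording are illustrative, not the actual merged note, and the hex suffix in the filename is generated by reno:

```yaml
# releasenotes/notes/remove-amp-image-id-XXXXXXXXXXXX.yaml (sketch)
upgrade:
  - |
    The ``octavia_amp_image_id`` variable has been removed, as the
    corresponding ``amp_image_id`` option is no longer valid in Octavia.
```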
spatel | I love to get rid of everything which i don't care and OSA don't care.. | 14:30 |
kleini | At least with 18.04 LXC is provided through normal packages. I don't know with 20.04. | 14:30 |
kleini | noonedeadpunk: already fixed | 14:31 |
spatel | I found OSA installed LXC + i already had LXC via snap | 14:31 |
spatel | I am going to remove snap but let me confirm with more folks if its safe and won't create issue in future deployment | 14:32 |
spatel | @jrosser | 14:32 |
noonedeadpunk | yep, thanks! | 14:32 |
kleini | jrosser, noonedeadpunk if you basically agree with https://review.opendev.org/c/openstack/openstack-ansible-os_octavia/+/767511 I will backport that to my cherry-picks for T and U | 14:32 |
noonedeadpunk | let's leave T and U as is | 14:33 |
noonedeadpunk | we can't really backport removal of variables. well we can, but we should not do that | 14:33 |
noonedeadpunk | unless it's really required | 14:33 |
kleini | sure, didn't think about that | 14:34 |
jrosser | spatel: are you sure it is lxc and not lxd? | 14:34 |
spatel | oh wait.. so they are different? i thought lxc is client and lxd is daemon | 14:35 |
noonedeadpunk | have anybody tried out multiattach for cinder and ceph? | 14:35 |
jrosser | because we do nothing at all with snaps in OSA CI for focal, unless it's actually installing a snap sneakily when we install lxc | 14:35 |
jrosser | spatel: yes they are totally different | 14:35 |
spatel | jrosser: check out - http://paste.openstack.org/show/801131/ | 14:35 |
jrosser | right but thats lxd | 14:36 |
spatel | you will see bunch of snap stuff, is it safe to remove snap? | 14:36 |
spatel | yes lxd (this is there since i install fresh OS) | 14:36 |
jrosser | lxd is a daemon and API and all sorts of shiny new stuff as a layer on top of the same technology that makes lxc | 14:37 |
jrosser | you should be able to uninstall all of that | 14:37 |
spatel | perfect! that is what i was looking for. | 14:37 |
spatel | just wanted to make sure i don't remove something which would make ubuntu angry | 14:38 |
jrosser | fwiw i use LXD a lot outside of OSA | 14:38 |
jrosser | it is very nice | 14:38 |
jrosser | but the snap installer is unfortunate, imho | 14:38 |
spatel | hmm, may be good for laptop not sure about production cloud | 14:39 |
kleini | noonedeadpunk: how is that supposed to work? two machines writing to the same physical block devices. this has to break, not? | 14:41 |
*** rohit02 has quit IRC | 14:43 | |
*** pto has quit IRC | 14:43 | |
noonedeadpunk | well you can mount it as ro eventually | 14:44 |
noonedeadpunk | as for proper shared filesytem manila should be used | 14:44 |
noonedeadpunk | but for simple cases (or when you place only config files there) it should be pretty ok | 14:44 |
noonedeadpunk | (I guess) | 14:44 |
kleini | i think, that depends on the filesystem | 14:44 |
noonedeadpunk | well, yes... | 14:45 |
admin0 | spatel, the whole of ubuntu is moving towards snap ( which is pushing me towards debian) :) | 14:54 |
admin0 | lxc and lxd are different .. ( we are still using the old lxc) for some reason though | 14:55 |
admin0 | noonedeadpunk, multiattach for ceph does not work | 14:55 |
admin0 | i have tried cinder with 2 different ceph backends | 14:55 |
admin0 | it works in cinder, you can create volumes on both ceph .. but it fails on nova | 14:55 |
noonedeadpunk | Well, I can't create in cinder even. and https://access.redhat.com/errata/RHBA-2020:2161 seems related for me... | 14:56 |
admin0 | 404 page | 14:56 |
tbarron | kleini: noonedeadpunk with cinder multi-attach all responsibility for write arbitration is up to the user (vs shared filesystems), so | 14:56 |
admin0 | i have the code somewhere on cinder + dual ceph | 14:56 |
admin0 | osa + cinder + 2x ceph | 14:57 |
tbarron | kleini: noonedeadpunk you can put a node-local filesystem on there if you can ensure all consumers will be from that same node | 14:57 |
tbarron | kleini: noonedeadpunk you can put a clustered file system on there | 14:57 |
tbarron | kleini: noonedeadpunk or you can run an app that does its own write arbitration and works with raw block offsets instead of file system paths | 14:58 |
tbarron | kleini: noonedeadpunk but yeah, with conventional applications it's easy to get in trouble | 14:58 |
tbarron | kleini: noonedeadpunk even a "read only" local filesystem mount won't necessarily be actually read only to the block device | 14:59 |
noonedeadpunk | yeah, I totally understand that multiattach is not really a good idea. But I'm in a situation where I need to share just a few config files that are almost never going to change, and have no time for a manila deployment :) | 15:01 |
tbarron | kleini: noonedeadpunk Clearly I'm not speaking atm to the nova/cinder implementation issues, just the architectural issues. | 15:01 |
tbarron | noonedeadpunk: kk | 15:02 |
* tbarron aims to get manila deployment so easy that "no time for manila" becomes a distant memory but knows we aren't there yet | 15:02 | |
admin0 | noonedeadpunk, here is my working code https://gist.github.com/a1git/6898a29d84008e0b01556e899249a87f | 15:03 |
admin0 | i created a 2nd ceph cluster .. added it as HDD .. was able to create volumes .. could not mount it .. nova reads only 1 ceph.conf and has no support/clue of the 2nd one | 15:03 |
admin0 | maybe my setup was wrong and you will have better results | 15:04 |
admin0 | sorry i forgot to remove the # in the end .. #rbd_user: hdd-cinder => rbd_user: hdd-cinder | 15:05 |
admin0 | multi nfs, multi iscsi, all work fine | 15:05 |
noonedeadpunk | tbarron: well it's not only about time for deployment; when you need this in some public region, you face issues like whether you need billing for it or whether to expose it to customers, agreements for maintenance and so on :P | 15:05 |
admin0 | noonedeadpunk, use S3 for that | 15:06 |
admin0 | swift | 15:06 |
noonedeadpunk | I think I'm doing something a bit different :) multiattach is the ability to mount the same rbd drive to several instances, while you're trying to achieve multicluster support, which is not going to work | 15:07 |
admin0 | or setup a few volumes and gluster on top :) | 15:07 |
tbarron | noonedeadpunk: agree. FWIW I am thinking that capability of Manila with manila-csi to offer RWX storage to k8s on openstack may help drive public cloud readiness. | 15:07 |
noonedeadpunk | and you would need to separate nova to different AZ or aggregations | 15:07 |
noonedeadpunk | admin0: yep, I'm using s3 for multimedia but application does not support to store config in s3... | 15:09 |
noonedeadpunk | unless I missed the way to just mount bucket on host | 15:10 |
admin0 | it exists :) | 15:10 |
noonedeadpunk | oh, rly? | 15:10 |
admin0 | s3fs | 15:10 |
noonedeadpunk | s3fs-fuse o_O | 15:11 |
noonedeadpunk | admin0: ok, thank! | 15:11 |
noonedeadpunk | that is going to solve my issue | 15:11 |
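The s3fs-fuse mount admin0 points at can be wired up via fstab. A sketch with bucket name, mountpoint and RGW endpoint as placeholders; note the `ro` option, matching the read-only idea discussed earlier:

```
# /etc/fstab sketch -- placeholders throughout; credentials live in
# /etc/passwd-s3fs (mode 600, contents "ACCESS_KEY:SECRET_KEY")
mybucket  /mnt/app-config  fuse.s3fs  _netdev,ro,passwd_file=/etc/passwd-s3fs,url=https://rgw.example.com,use_path_request_style  0 0
```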
*** poopcat has quit IRC | 15:26 | |
admin0 | going through trove documentation, i think its network requirement is the same as octavia | 15:33 |
admin0 | the trove images need access to dbaas containers | 15:33 |
admin0 | trove api i meant | 15:34 |
spatel | jamesdenton: around ? | 15:36 |
spatel | Do you care about dpdk support for centos8? we have a couple of commits hanging around. | 15:37 |
jrosser | admin0: I think trove instances want to talk direct to rabbitmq 8-O | 15:37 |
admin0 | yeah .. i read that and was thinking how to wire the network | 15:37 |
spatel | jrosser: why trove need rabbitmq? | 15:38 |
*** macz_ has joined #openstack-ansible | 15:38 | |
jrosser | because that’s how it was designed | 15:38 |
admin0 | https://docs.openstack.org/openstack-ansible-os_trove/latest/configure-trove.html | 15:38 |
jrosser | and I find it a bit scary really | 15:39 |
spatel | trove should invoke nova/neutron api to get machine ready | 15:39 |
admin0 | "The trove guest VMs need connectivity back to the trove services via RPC (oslo.messaging) and the OpenStack services." | 15:39 |
jrosser | well yes, but that machine needs to be on the mgmt network and uses rabbit directly | 15:39 |
admin0 | so need to reserve a bit more ips for mgmt and use it inversely for trove :) | 15:40 |
admin0 | i want to do osa+trove next | 15:40 |
jrosser | there’s magnum/trove/Octavia/Manila all using service vm and all diffferent approaches | 15:40 |
jrosser | big headache | 15:40 |
spatel | hmm interesting - https://pt.slideshare.net/mirantis/trove-d-baa-s-28013400/7 | 15:42 |
jrosser | admin0: if you have a router/fw put trove on another subnet maybe with rules only to get to rabbit? | 15:42 |
*** macz_ has quit IRC | 15:42 | |
spatel | all RPC call | 15:42 |
admin0 | i always have a small vyos that holds .1 of the mgmt ranges, but is firewalled | 15:42 |
admin0 | was thinking of the same | 15:42 |
admin0 | i don't want to use flat network .. i am thinking to do it on a tagged vlan ( like octavia ) and then add .1 to the vyos and allow it to talk to the mgmt network | 15:44 |
admin0 | though if i allow it to talk to mgmt network, i do not see a need to add the vlan | 15:44 |
admin0 | i can just add a normal vlan network, add .1 in the vyos and allow it to talk to br-mgmt | 15:44 |
admin0 | i will give it a try .. | 15:44 |
admin0 | anyone did swift with radosgw as backend? care to pass me the configs? | 15:45 |
admin0 | i need to have swift .. but reading the docs, rings generation etc looks complicated | 15:46 |
admin0 | if i add this as a regular ext-network, won't others be able to select it also ? | 15:48 |
*** poopcat has joined #openstack-ansible | 15:58 | |
noonedeadpunk | admin0: yeah it does | 15:59 |
noonedeadpunk | I think trove role is not really complete | 15:59 |
noonedeadpunk | I'm going to make trove installation early next year (I should have already started) and it feels that role will be heavily adjusted | 15:59 |
noonedeadpunk | as eventually trove vms need messaging as well, but giving access to mgmt network is bad solution... | 16:00 |
noonedeadpunk | so might be we would need another rabbit cluster specifically for trove that would serve on trove network, or we need to add trove network to rabbit containers | 16:01 |
*** macz_ has joined #openstack-ansible | 16:09 | |
admin0 | i am playing with it now .. let me see how far i can go | 16:12 |
admin0 | i am adding a vlan network .. ( not as external) and making the configs now | 16:13 |
admin0 | if i can get a instance to boot up, tcpdump will show what it wants to do | 16:13 |
spatel | noonedeadpunk: how is the support for Debian buster? | 16:17 |
spatel | Just debating between Ubuntu vs Debian :) | 16:18 |
spatel | If we don't have lots of users running debian with OSA then i don't want to be first person :) | 16:19 |
admin0 | whats wrong with being the first person :D ? | 16:19 |
admin0 | there always need to be a first person | 16:19 |
admin0 | :) | 16:20 |
admin0 | the docs say supported .. so i guess all CI passes | 16:20 |
spatel | noonedeadpunk: CI vs running in production experience (missing stuff breaking libs/drivers etc) | 16:21 |
spatel | dpdk support/sriov support (specially vendor driver support etc) | 16:21 |
jrosser | spatel: look at what your ceph packages situation would be for Debian | 16:25 |
spatel | Hmm! not sure what to look for but may try some google for latest Ceph support for Debian. | 16:27 |
spatel | I don't want last minute surprise that someone say hey Debian has no support for foo.. haha | 16:28 |
spatel | I know ubuntu is primary OS for all openstack development so not worried about ubuntu | 16:28 |
spatel | Look at this openstack survey - https://ibb.co/yBMKsMd | 16:32 |
spatel | Ubuntu is way to go :) | 16:38 |
*** pto has joined #openstack-ansible | 16:40 | |
*** stee has left #openstack-ansible | 16:42 | |
*** SecOpsNinja has joined #openstack-ansible | 16:44 | |
*** kukacz has quit IRC | 16:45 | |
*** pto has quit IRC | 16:45 | |
SecOpsNinja | hi. one quick question: how do you normally export/view the logs of multiple systemd jobs in all lxc containers? opening journalctl -xef in each container (nova-api, placement) and on the nova compute service to try to find the cause of a vm creation stuck forever in build state hasn't been an easy task (it's an even worse problem with debug logs...) | 17:04 |
openstackgerrit | Merged openstack/openstack-ansible-os_tempest stable/train: Switch tripleo job to content provider https://review.opendev.org/c/openstack/openstack-ansible-os_tempest/+/761021 | 17:05 |
spatel | SecOpsNinja: logs are located in /openstack/log/<container>/ | 17:08 |
spatel | SecOpsNinja: sometimes i use this command: tail -f /openstack/log/*/*.log | 17:08 |
SecOpsNinja | spatel, yep, i thought it wasn't including journal logs | 17:09 |
spatel | that is correct for that you can use syslog container | 17:10 |
spatel | all container ship journal to syslog container | 17:10 |
SecOpsNinja | or at least it's not easy to read... i don't understand how syslog is working because i can't find the logs | 17:10 |
spatel | That is why ELK or some good centralization required to debug all logs | 17:10 |
admin0 | SecOpsNinja, i use graylog .. quick and easy to setup .. works nicely | 17:11 |
spatel | I hate journalctl personally (i missed old school syslog txt file) | 17:11 |
SecOpsNinja | how to dyou use it? you ship from rsyslog to it? | 17:11 |
spatel | In my cloud i used syslog to read all journalctl logs and ship to graylog | 17:12 |
SecOpsNinja | but the problem is that in the rsyslog container i can't find where the logs of all the services are | 17:13 |
SecOpsNinja | because rsyslog doesn't expose any logs :D | 17:13 |
SecOpsNinja | in /openstack/logs.... | 17:13 |
admin0 | SecOpsNinja, will find my config and share | 17:13 |
SecOpsNinja | thanks | 17:13 |
admin0 | i have a store of configs :) | 17:13 |
*** kukacz has joined #openstack-ansible | 17:14 | |
admin0 | SecOpsNinja, https://gist.github.com/a1git/7232afe07f46474d5370113d609b9385 | 17:16 |
spatel | agreed, we need an official OSA doc somewhere to deal with logging issues, how to migrate from the syslog to the journalctl way | 17:16 |
SecOpsNinja | is there anything i need to do in the osa default configuration in the user_variables file to force all containers to redirect their syslogs to rsyslog? | 17:17 |
spatel | admin0: how do you shipping logs to greylog? using journalctl or legacy rsyslog | 17:17 |
admin0 | i dunno how this works .. i just do this and the logs magically appear in graylog :) | 17:17 |
spatel | admin0: haha | 17:17 |
admin0 | when it works, i move on to something else in the todo :) | 17:18 |
spatel | SecOpsNinja: i have this configuration in rsyslog http://paste.openstack.org/show/801134/ | 17:19 |
spatel | ship this logs to graylog | 17:19 |
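The forwarding spatel describes is essentially a one-liner in rsyslog. A minimal sketch, assuming a graylog syslog TCP input; hostname and port are placeholders:

```
# /etc/rsyslog.d/60-graylog.conf (sketch)
# "@@" means TCP; the built-in RFC5424-style template keeps the
# structured syslog fields intact for graylog to parse.
*.* @@graylog.example.com:514;RSYSLOG_SyslogProtocol23Format
```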
SecOpsNinja | ok i will try to configure that because there isn't much information regarding configuring centralized logging... https://docs.openstack.org/openstack-ansible-rsyslog_server/latest/ | 17:20 |
SecOpsNinja | even my rsyslog container doesn't have /var/log/rsyslog, so i don't know what is missing | 17:21 |
jrosser | the rsyslog stuff in OSA is really not useful | 17:21 |
jrosser | the journal is where the logs go in the most part | 17:21 |
ThiagoCMC | Hello! Does anyone have experience with Ansible's Dynamic Inventory (https://docs.ansible.com/ansible/latest/user_guide/intro_dynamic_inventory.html - that famous "openstack_inventory.py") for when the Inventory's source is, for example, a Project running on OpenStack (Heat or not)? I'm trying to understand how to aggregate my Instances (from a Heat Template, for example) into different "groups of servers in Ansible", but I have no idea how to do this. | 17:22 |
jrosser | if you arrange on the host yourself for the systemd journal to be sent to rsyslog then that will get you back syslog stuff | 17:22 |
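What jrosser describes (journal handed to the local syslog socket on each host, so rsyslog sees everything) is a single journald setting; a sketch, followed by a restart of systemd-journald:

```
# /etc/systemd/journald.conf (sketch)
[Journal]
ForwardToSyslog=yes
```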
SecOpsNinja | and some stuff is missing; also there are things that aren't going to the journal but to a specific log file, which doesn't help either | 17:22 |
spatel | jrosser: I have a syslog container and i am seeing it's not receiving any logs from anywhere. (shouldn't all containers ship their logs to the syslog container?) | 17:22 |
jrosser | that’s right | 17:22 |
jrosser | no because you will see that is disabled somewhere in group_vars and a link to the relevant systemd bug | 17:23 |
ThiagoCMC | When I run that `openstack_inventory.py --list`, I'm seeing things like `"server_groups": null` - not sure if it's related or not and, if yes, not sure how to set it from OpenStack side, so Ansible will detect it and build the "group_vars" accordingly. | 17:23 |
jrosser | everything is present to make systemd remote journal forwarding work but it is disabled by default | 17:23 |
ThiagoCMC | Sorry to hijack the conversation! =P | 17:24 |
spatel | jrosser: oh!! that make sense | 17:24 |
spatel | jrosser: so is it a known bug or something related to OSA? | 17:24 |
jrosser | it is a bug in systemd, I’m not able to search this up right now but it has to be disabled | 17:25 |
SecOpsNinja | so the rsyslog container should be destroyed if it's not doing anything. regarding systemd journal forwarding, how does that work? | 17:25 |
jrosser | it’s a log container | 17:25 |
jrosser | syslog|systemd journal remote | 17:25 |
spatel | jrosser: i think i have to find alternative for my new production to deal with this issue. | 17:26 |
jrosser | it was the target for both iirc | 17:26 |
SecOpsNinja | but yeah, i was able to find that the problem is between the nova api and the nova compute service, but i can't find where it's causing the problems in scheduling the vm, because it seems the scheduler is getting the updates from each compute node. | 17:28 |
jrosser | SecOpsNinja: it’s worth looking a bit deeper at what openstack does with the systemd journal | 17:28 |
jrosser | because for example the request id is embedded in each log entry as a metadata field | 17:28 |
SecOpsNinja | jrosser, sorry i didnt understand that | 17:28 |
jrosser | there is lots more data stored than just the log text | 17:29 |
jrosser | there is extra context and fields inserted by the oslo | 17:29 |
SecOpsNinja | yep, that is something i wasn't able to figure out: how to track a specific req id and how it travels between all the components | 17:29 |
jrosser | log layer | 17:29 |
jrosser | if you push it all into elk or similar you can make the req id first class data to query against | 17:30 |
SecOpsNinja | ok i will try to find that info because atm i'm completely lost on how to resolve this if i can't find which component is causing the problem | 17:30 |
jrosser | rather than just being plain text to match against | 17:30 |
jrosser | you need some centralised logging really | 17:30 |
jrosser | and if you do it all via syslog somehow then that will throw away the hidden fields in the journal data which are really helpful | 17:31 |
SecOpsNinja | we have some experimental loki installation that i can try to use... | 17:31 |
SecOpsNinja | now i need to see how to export to loki from all the containers | 17:31 |
SecOpsNinja | because the rsyslog container doesn't seem to do anything atm... | 17:32 |
jrosser | you only need to worry about the hosts, not the containers | 17:32 |
jrosser | because the journal files are mounted on the hosts from the containers | 17:33 |
*** jawad_axd has quit IRC | 17:33 | |
jrosser | so a good tool will be able to read all those journals if you give it the path | 17:33 |
*** jawad_axd has joined #openstack-ansible | 17:33 | |
jrosser | and kind of neatly your logging setup only then needs to know about your hosts, not the OSA inventory | 17:34 |
SecOpsNinja | hum.... using the /openstack/logs/*.system files in the host? | 17:34 |
jrosser | yes | 17:34 |
jrosser | there is an outstanding bug with ordering of the bind mounts but there's a patch for that | 17:34 |
*** jawad_axd has quit IRC | 17:35 | |
SecOpsNinja | thanks i wil try to find a way to configure this and see if i can check the metadata that you spoke off | 17:36 |
*** jawad_axd has joined #openstack-ansible | 17:36 | |
jrosser | might be “-o verbose” journalctl to see all the fields | 17:36 |
SecOpsNinja | ok thanks | 17:40 |
*** jawad_axd has quit IRC | 17:41 | |
spatel | jrosser: you are saying journal logs mounted on host:/openstack/log/ | 17:44 |
spatel | at present its disable but when we fixed that bug it will right? | 17:44 |
*** gyee has joined #openstack-ansible | 17:49 | |
jrosser | spatel: yes | 17:57 |
jrosser | and no | 17:57 |
jrosser | the journals are mounted on each host | 17:57 |
jrosser | forwarding off the host using systemd remote is disabled | 17:58 |
spatel | so in the current scenario what is the alternative? use a third-party tool to ship the logs out? | 18:02 |
jrosser | right now openstack-ansible is not opinionated about how log collection is done | 18:06 |
jrosser | because everyone has their own preference | 18:06 |
jrosser | there is the example graylog and elk stuff in the openstack-ansible-ops repo | 18:07 |
mgariepy | graylog is nice. | 18:10 |
mgariepy | i haven't tested elk enough to have an opinion. | 18:10 |
spatel | mgariepy: i am also using graylog and its really nice | 18:16 |
spatel | mgariepy: i would like it if you shared your dashboard because i have a shitty one.. :) | 18:16 |
spatel | do you guys parse logs at server end or client end? | 18:17 |
*** rpittau is now known as rpittau|afk | 18:20 | |
mgariepy | my dashboard is also shitty lol | 18:28 |
mgariepy | do you push the logs via gelf? | 18:29 |
spatel | rsyslog to push logs | 18:31 |
spatel | this is all i have in dashboard - https://ibb.co/smjTvvw | 18:32 |
spatel | This quickly tells me if an ERROR is happening anywhere in the stack | 18:33 |
spatel | mgariepy: does gelf have support for shipping the journal? | 18:35 |
*** pto has joined #openstack-ansible | 18:42 | |
*** pto has quit IRC | 18:46 | |
mgariepy | spatel, https://github.com/openstack/openstack-ansible-ops/blob/master/graylog/graylog-forward-logs.yml | 18:57 |
mgariepy | yep. it does. | 18:57 |
spatel | sweet! so all i need to do is run that playbook, right? | 18:57 |
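The playbook mgariepy links lives in the openstack-ansible-ops repo, so it has to be fetched separately. A sketch of running it (the clone path is an assumption, and any deployment-specific overrides such as the Graylog endpoint are defined by the playbook itself, so check its variables before running):

```shell
# Clone the ops repo alongside the main openstack-ansible checkout
# (destination path is illustrative):
git clone https://github.com/openstack/openstack-ansible-ops /opt/openstack-ansible-ops

# Run the forwarding playbook through the openstack-ansible wrapper
# so it picks up the deployment's dynamic inventory:
cd /opt/openstack-ansible-ops/graylog
openstack-ansible graylog-forward-logs.yml
```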
admin0 | can someone point me to the 16-18 upgrade notes again .. i can't seem to find them | 19:00 |
spatel | why not 20.04 ? | 19:07 |
*** maharg101 has quit IRC | 19:10 | |
*** andrewbonney has quit IRC | 19:11 | |
mgariepy | admin0, https://etherpad.opendev.org/p/osa-rocky-bionic-upgrade https://etherpad.opendev.org/p/osa-newton-xenial-upgrade https://docs.openstack.org/openstack-ansible/rocky/admin/upgrades/distribution-upgrades.html | 19:11 |
mgariepy | the xenial one might also be good for some stuff. | 19:11 |
mgariepy | spatel, because rocky doesn't support 20.04. | 19:12 |
spatel | ah! | 19:12 |
admin0 | thanks mgariepy | 19:16 |
mgariepy | admin0, do you have ephemeral storage? | 19:17 |
admin0 | yes | 19:17 |
admin0 | this one where i need to do the upgrade has ceph backend | 19:18 |
admin0 | does it make it easier, or harder ? | 19:18 |
mgariepy | https://etherpad.opendev.org/p/osa-rocky-bionic-upgrade line 86. | 19:18 |
mgariepy | if you have local ephemeral storage on the computes. be careful not to be bitten by this | 19:19 |
mgariepy | ;) | 19:19 |
mgariepy | i guess it's going to be easier if it's all ceph (or at least a lot faster) since you will need to migrate the vms from one compute to the other while upgrading. | 19:20 |
admin0 | i do not have ceph+ephemeral storage | 19:22 |
mgariepy | ok | 19:23 |
mgariepy | if it's all ceph it's going to be ok i guess ;) | 19:23 |
spatel | do you guys disable systemd-resolved ? | 19:35 |
admin0 | mgariepy, once i migrate the vms, i can just reinstall it fresh with 18 instead of doing an upgrade, right? | 20:00 |
admin0 | a compute node i mean | 20:00 |
mgariepy | if it's empty yep sure why not ? | 20:04 |
admin0 | so based on the docs, i identified that my c3 is primary .. i reformatted my c1 and c2 (api uptime is not important) and they are now on 18.04 .. now the docs say i need to disable the c3 repo using .. echo "disable server repo_all-back/<infrahost>_repo_container-<hash>" | socat /var/run/haproxy.stat stdio | 20:26 |
admin0 | this i do not understand | 20:26 |
admin0 | or can i just delete this 3rd (16.04) repo and let it use the other 2 ? | 20:26 |
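The socat line from the docs is just haproxy's runtime admin socket: you echo a command into the stats socket and haproxy acts on it immediately, no restart needed. A sketch of the workflow (the backend/server name and socket path are illustrative; list the real names with "show stat" first):

```shell
# List the backend and server names haproxy knows about
# (first two columns of the CSV stats output):
echo "show stat" | socat /var/run/haproxy.stat stdio | cut -d, -f1,2

# Take the old 16.04 repo container out of rotation:
echo "disable server repo_all-back/c3_repo_container-abcdef12" | socat /var/run/haproxy.stat stdio

# Re-enable it (or its rebuilt replacement) once the upgrade is done:
echo "enable server repo_all-back/c3_repo_container-abcdef12" | socat /var/run/haproxy.stat stdio
```

Disabling via the socket is non-destructive, which is why the docs prefer it over deleting the repo container outright: the server stays in the config and can be re-enabled with one command.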
openstackgerrit | Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Disable repeatedly failing zun tempest test https://review.opendev.org/c/openstack/openstack-ansible/+/767469 | 20:34 |
mgariepy | have fun during christmas guys, i'll be back in 2021. | 20:40 |
admin0 | :) | 20:58 |
admin0 | have a great holiday | 20:58 |
spatel | mgariepy: enjoy!! | 20:58 |
*** maharg101 has joined #openstack-ansible | 21:06 | |
*** rfolco has quit IRC | 21:09 | |
*** MickyMan77 has quit IRC | 21:10 | |
*** maharg101 has quit IRC | 21:10 | |
*** pto has joined #openstack-ansible | 21:27 | |
*** pto has joined #openstack-ansible | 21:28 | |
*** pto has quit IRC | 21:33 | |
*** MickyMan77 has joined #openstack-ansible | 21:40 | |
*** SecOpsNinja has left #openstack-ansible | 21:48 | |
*** pto has joined #openstack-ansible | 21:59 | |
*** pto has quit IRC | 22:04 | |
*** jbadiapa has quit IRC | 23:15 | |
*** tosky has quit IRC | 23:43 |
Generated by irclog2html.py 2.17.2 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!