*** Gimmix has quit IRC | 00:14 | |
*** klamath_atx has joined #openstack-ansible | 00:43 | |
*** cshen has joined #openstack-ansible | 01:00 | |
*** cshen has quit IRC | 01:04 | |
*** gshippey has quit IRC | 01:05 | |
*** jawad_axd has quit IRC | 01:19 | |
*** spatel has joined #openstack-ansible | 02:48 | |
*** lemko has quit IRC | 02:53 | |
*** lemko has joined #openstack-ansible | 02:53 | |
*** cshen has joined #openstack-ansible | 03:00 | |
*** cshen has quit IRC | 03:05 | |
*** klamath_atx has quit IRC | 04:38 | |
*** cshen has joined #openstack-ansible | 05:01 | |
*** cshen has quit IRC | 05:05 | |
*** klamath_atx has joined #openstack-ansible | 05:08 | |
*** klamath_atx has quit IRC | 05:18 | |
*** klamath_atx has joined #openstack-ansible | 05:19 | |
*** evrardjp has quit IRC | 05:33 | |
*** evrardjp has joined #openstack-ansible | 05:33 | |
*** gyee has quit IRC | 06:08 | |
*** pto has joined #openstack-ansible | 06:20 | |
*** pto_ has joined #openstack-ansible | 06:22 | |
*** cyberpear has quit IRC | 06:23 | |
*** pto has quit IRC | 06:24 | |
*** spatel has quit IRC | 06:30 | |
*** cshen has joined #openstack-ansible | 06:38 | |
*** cshen has quit IRC | 06:42 | |
*** miloa has joined #openstack-ansible | 06:42 | |
*** spatel has joined #openstack-ansible | 06:49 | |
*** rpittau|afk is now known as rpittau | 06:49 | |
*** SiavashSardari has joined #openstack-ansible | 06:53 | |
*** spatel has quit IRC | 06:53 | |
*** jbadiapa has quit IRC | 06:53 | |
*** jamesgibo has joined #openstack-ansible | 07:05 | |
*** jamesgibo has quit IRC | 07:21 | |
*** pcaruana has joined #openstack-ansible | 07:49 | |
*** pcaruana has quit IRC | 07:51 | |
*** pcaruana has joined #openstack-ansible | 07:51 | |
*** klamath_atx has quit IRC | 07:58 | |
*** klamath_atx has joined #openstack-ansible | 07:59 | |
*** jamesgibo has joined #openstack-ansible | 08:08 | |
*** jamesgibo has quit IRC | 08:08 | |
*** andrewbonney has joined #openstack-ansible | 08:16 | |
*** cshen has joined #openstack-ansible | 08:27 | |
*** tosky has joined #openstack-ansible | 08:45 | |
*** MickyMan77 has joined #openstack-ansible | 09:07 | |
*** shyamb has joined #openstack-ansible | 09:13 | |
openstackgerrit | Andrew Bonney proposed openstack/openstack-ansible-os_zun master: DNM: Update zun role to match current requirements https://review.opendev.org/763141 | 09:25 |
pto_ | What are you guys doing to back up ceph block devices in your openstack clusters? | 09:28 |
SiavashSardari | there are a couple of ways to tackle this issue, depending on how much you can spend on resources | 09:30 |
jrosser | we put a separate NFS/ZFS setup behind cinder-backup | 09:30 |
jrosser | also it depends what you want to mitigate | 09:30 |
*** shyamb has quit IRC | 09:30 | |
jrosser | user error -> i need my old version | 09:30 |
jrosser | ceph fail -> i need to put everything back | 09:31 |
*** kukacz has quit IRC | 09:31 | |
jrosser | you end up with different solutions depending on what you decide the problem to solve is | 09:31 |
jrosser | as you can cinder-backup onto ceph itself | 09:31 |
jrosser | in a completely different ceph environment that's still using RBD but not openstack, we export RBD snapshots off of that with cron + scripts | 09:33 |
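(A minimal sketch of that cron + scripts approach, assuming a placeholder pool/image and an NFS mount at /mnt/backups — none of these names come from the log:)

```bash
#!/usr/bin/env bash
# Sketch: snapshot an RBD image and export it to an NFS-backed path.
# Pool, image and destination are placeholders, not the actual setup.
set -euo pipefail

POOL=volumes
IMAGE=volume1
SNAP="backup-$(date +%Y%m%d)"
DEST=/mnt/backups

rbd snap create "${POOL}/${IMAGE}@${SNAP}"                            # point-in-time snapshot
rbd export "${POOL}/${IMAGE}@${SNAP}" "${DEST}/${IMAGE}-${SNAP}.img"  # full export to file
rbd snap rm "${POOL}/${IMAGE}@${SNAP}"                                # drop the snapshot afterwards
```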
pto_ | Right now i am exploring my options. I am looking at the native openstack backup, backy2 and rclone to make incremental backups of block devices | 09:33 |
jrosser | thats really addressing disaster recovery rather than backup-as-a-service for users | 09:33 |
*** pto_ is now known as pto | 09:34 | |
pto | My client has two use cases: backup of ceph block devices (and maybe a whole pool) to s3, and backup of specific openstack instances to s3 if possible | 09:34 |
kleini | Ceph should be pretty resilient against any data loss, right? If your Ceph cluster is large enough, I cannot imagine any Ceph failure. | 09:35 |
SiavashSardari | from the ceph point of view you can snap your images and use export/import, or you can use mirroring for backup only, or you can set up full mirroring and use both ceph clusters as active-backup | 09:35 |
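(For the snap + export/import option, a hedged sketch of an incremental transfer between two clusters; image names, snapshot names and the ssh target are placeholders:)

```bash
# On the source cluster: take a new snapshot, then stream only the delta
# since the previous one into a second cluster. The destination image must
# already have snap1 for import-diff to apply the delta.
rbd snap create volumes/volume1@snap2
rbd export-diff --from-snap snap1 volumes/volume1@snap2 - \
  | ssh backup-host rbd import-diff - volumes/volume1
```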
jrosser | kleini: we destroyed a ceph cluster entirely because the NTP to the mons broke | 09:36 |
pto | kleini: The cluster is at 7.2 PB right now in one DC. We plan to split it into two, which will protect against e.g. fire | 09:36 |
jrosser | total loss of everything :( | 09:36 |
SiavashSardari | I too had a horror story with ceph that caused us data loss | 09:36 |
jrosser | many years ago we also had a total loss at a ceph major upgrade, due to a bug | 09:36 |
pto | I am working for Aalborg University and they are archiving research data to ceph, and some data sets are very important and must be stored offsite due to segregation of duties and compliance. Therefore I need to come up with a practical solution to copy rbd block devices to s3 | 09:37 |
*** kukacz has joined #openstack-ansible | 09:38 | |
jrosser | pto: let me see if my colleague can join here who did our snapshot/export of RBD | 09:38 |
pto | jrosser: I am testing backy2 right now and it looks very nice, but it's a manual task to set up each pool/rbd. It would be nice if the integrated openstack backup could be used too | 09:39 |
SiavashSardari | we tried snap export import but that needs another ceph cluster to import | 09:39 |
*** Carcer has joined #openstack-ansible | 09:39 | |
pto | Is the openstack backup to swift compliant with s3? | 09:40 |
jrosser | i would guess probably not, as swift is a different API from s3 | 09:42 |
jrosser | what was slightly disappointing for me was cinder-backup does not seem to be able to do scheduled backups | 09:43 |
pto | jrosser: really? | 09:43 |
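(One common workaround, since cinder-backup itself has no scheduler, is to drive it from cron; the openrc path and volume ID below are placeholders:)

```bash
# crontab entry: nightly backup of one volume at 02:00.
# --incremental needs an existing full backup; --force allows an in-use volume.
0 2 * * * . /root/openrc && openstack volume backup create --incremental --force <volume-id>
```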
SiavashSardari | pto I know you said you need to back up block devices, but if you can handle backing up your data in your platform layer, that could solve your problem. I mean vm users upload their important data to s3 | 09:43 |
pto | I think backy2.com looks very promising, as it can very easily make a backup of rbd to disk or s3. It supports schedules and sla reports | 09:43 |
SiavashSardari | if that is possible | 09:43 |
pto | SiavashSardari: Not really. The data is abstracted from them to a mount point in the archive solution. It could work for the OpenStack part | 09:44 |
pto | I am just a big fan of doing things automatically. E.g. when an instance is spawned, a backup could be automatically set up | 09:45 |
jrosser | so i think we handcrafted something a bit like backy2 which went RBD->NFS | 09:46 |
SiavashSardari | jrosser we are looking for a solution like that, could you please tell us more about your solution? | 09:47 |
pto | jrosser: Thanks. I think i will need to do something like this. I am just a little worried about making sure i pick the right volumes | 09:47 |
SiavashSardari | pto I like automating stuff too, but sometimes that comes at a very high price. | 09:49 |
*** yolanda__ has joined #openstack-ansible | 09:50 | |
pto | SiavashSardari: I have tested backy2, and it looks promising. It's easy to set up a backup job: backy2 backup rbd://production-volumes/volume1 my-backup1 | 09:50 |
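(Roughly, the backy2 round trip looks like this — a sketch from its documented CLI with placeholder names, and the restore target is assumed to accept file:// or rbd:// URLs:)

```bash
backy2 backup rbd://production-volumes/volume1 my-backup1      # full backup of one image
backy2 ls                                                      # list versions and their UIDs
backy2 restore <version_uid> file:///tmp/volume1-restore.img   # restore a version to a file
backy2 cleanup                                                 # reclaim space from removed versions
```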
SiavashSardari | we wanted to setup full mirroring in ceph but we do not have enough resources for that. so we are improvising to find a middle ground | 09:50 |
SiavashSardari | BTW thanks for introducing backy2, we didn't find that in our research. | 09:51 |
sep | SiavashSardari, i use backy2 to back up rbd images onto a different ceph cluster's cephfs | 09:52 |
Carcer | I wrote a backup script for rbds in an opennebula installation, but it looks like backy2 does everything I did, more elegantly | 09:52 |
SiavashSardari | I will look into that. but QQ, does backy2 back up all ceph images, or can you pick some of them? | 09:53 |
sep | it backs up the ones you tell it to. | 09:58 |
admin0 | quick question .. i have ceph+osa in one cluster .. in that cluster, if i want to add a 2nd backend for cinder, which is another ceph, how do i do it ? | 09:58 |
admin0 | has anyone added 2 different ceph backends to cinder? | 09:59 |
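(A hedged sketch of what that might look like in OSA's user_variables.yml — backend names, pools, conf paths and the second cluster's user are placeholders, and the second cluster's ceph.conf and keyring must be distributed to the cinder hosts separately:)

```yaml
cinder_backends:
  rbd-cluster1:
    volume_driver: cinder.volume.drivers.rbd.RBDDriver
    volume_backend_name: rbd-cluster1
    rbd_pool: volumes
    rbd_ceph_conf: /etc/ceph/ceph.conf
    rbd_user: cinder
  rbd-cluster2:
    volume_driver: cinder.volume.drivers.rbd.RBDDriver
    volume_backend_name: rbd-cluster2
    rbd_pool: volumes
    rbd_ceph_conf: /etc/ceph/ceph2.conf   # second cluster's config file
    rbd_user: cinder2
```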
SiavashSardari | thanks guys, backy2 really looks promising | 09:59 |
pto | SiavashSardari: you're welcome :-) | 10:00 |
*** shyamb has joined #openstack-ansible | 10:03 | |
*** yolanda__ is now known as yolanda | 11:00 | |
damiandabrowski | i also used benji before, it's also a great tool, based on backy2. Maybe worth a try: https://benji-backup.me/ | 11:00 |
*** SiavashSardari has quit IRC | 11:03 | |
*** pcaruana has quit IRC | 11:08 | |
*** spatel has joined #openstack-ansible | 11:31 | |
*** spatel has quit IRC | 11:36 | |
*** shyamb has quit IRC | 12:05 | |
openstackgerrit | Andrew Bonney proposed openstack/openstack-ansible-os_zun master: DNM: Update zun role to match current requirements https://review.opendev.org/763141 | 12:05 |
*** yasemind34 has joined #openstack-ansible | 13:06 | |
openstackgerrit | Andrew Bonney proposed openstack/openstack-ansible master: Add Zun CI requirement to Zuul required projects https://review.opendev.org/763177 | 13:22 |
openstackgerrit | Andrew Bonney proposed openstack/openstack-ansible-os_zun master: DNM: Update zun role to match current requirements https://review.opendev.org/763141 | 13:24 |
*** cyberpear has joined #openstack-ansible | 13:36 | |
*** simondodsley has joined #openstack-ansible | 13:58 | |
*** spatel has joined #openstack-ansible | 14:00 | |
*** cshen has quit IRC | 14:07 | |
*** macz_ has joined #openstack-ansible | 14:08 | |
*** macz_ has quit IRC | 14:12 | |
*** mbuil has quit IRC | 14:12 | |
openstackgerrit | James Denton proposed openstack/openstack-ansible-os_neutron master: Test OVS/OVN deployments on CentOS 8 https://review.opendev.org/762661 | 14:24 |
openstackgerrit | James Denton proposed openstack/openstack-ansible-os_neutron master: Test OVS/OVN deployments on CentOS 8 https://review.opendev.org/762661 | 14:26 |
spatel | jamesdenton: i have put my OVS+DPDK stuff on my blog for fun: https://satishdotpatel.github.io//openstack-ansible-add-compute-node-using-openvswitch-dpdk/ | 14:35 |
spatel | In my lab i have added each compute node using a different neutron network deployment | 14:36 |
jamesdenton | nice! now for benchmarks | 14:36 |
spatel | Yes i need to install Trex and give it a try on each compute node and put some numbers here | 14:37 |
spatel | Is Trex a client-server style tool? I never used it before but i will look it up on the internet. | 14:38 |
openstackgerrit | Dmitriy Rabotyagov (noonedeadpunk) proposed openstack/ansible-role-systemd_service master: Add possibility to configure systemd sockets https://review.opendev.org/763194 | 14:40 |
jamesdenton | IIRC it's all generated by one node and traffic is routed thru the DUT | 14:40 |
jamesdenton | (device under test) | 14:40 |
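(For reference, a typical TRex invocation is roughly the sketch below — the install path and profile are assumptions, and /etc/trex_cfg.yaml must already describe the NICs:)

```bash
# Replay a bundled traffic profile for 60 seconds at 10x base rate;
# traffic leaves one port and is routed through the device under test.
cd /opt/trex/current
./t-rex-64 -f cap2/dns.yaml -m 10 -d 60
```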
jamesdenton | BTW i did verify the /usr/local/var/run/... path is no longer used | 14:42 |
spatel | cool :) | 14:42 |
spatel | jamesdenton: let's get that NFV patch out so we have a working OVS deployment for CentOS | 14:43 |
jamesdenton | i keep finding things wrong :D | 14:44 |
spatel | jamesdenton: soon i want to get my hands dirty with an OVN deployment (for CentOS 8) | 14:44 |
ThiagoCMC | spatel, I tried it before, got burned! :-P | 14:44 |
spatel | ThiagoCMC: i will just play in Lab not for production :) | 14:45 |
ThiagoCMC | Cool... lol | 14:45 |
ThiagoCMC | Here is what happened with me: https://bugs.launchpad.net/neutron/+bug/1689880 | 14:45 |
openstack | Launchpad bug 1689880 in neutron "[OVN] The "neutron_sync_mode = repair" option breaks the whole cloud!" [Undecided,New] | 14:45 |
ThiagoCMC | Gonna test it again someday | 14:45 |
spatel | i don't know what the big advantage of OVN is over the current deployment (SDN is good if you also control your physical switches using OVN) | 14:46 |
jamesdenton | noonedeadpunk is there a plan for suse support going forward? | 14:49 |
jamesdenton | just curious | 14:49 |
noonedeadpunk | jamesdenton: no, we've dropped support of it | 14:49 |
noonedeadpunk | but didn't do cleanup | 14:49 |
noonedeadpunk | well, did just partially | 14:50 |
jamesdenton | got it | 14:50 |
jamesdenton | i'll try and purge it when i come across it | 14:50 |
noonedeadpunk | +1 | 14:50 |
ThiagoCMC | Canonical messed it up? E: The repository 'http://security.ubuntu.com/ubuntu focal-security Release' does not have a Release file. | 14:56 |
ThiagoCMC | O_O | 14:56 |
*** miloa has quit IRC | 14:56 | |
openstackgerrit | Merged openstack/openstack-ansible-openstack_openrc master: Adding support of system scoped openrc and clouds.yaml https://review.opendev.org/762090 | 15:01 |
*** Miouge has quit IRC | 15:03 | |
*** miouge3625368681 has quit IRC | 15:03 | |
openstackgerrit | Merged openstack/openstack-ansible-galera_server master: Fix to mariadb backup script https://review.opendev.org/762990 | 15:06 |
noonedeadpunk | jrosser: I'm already thinking if we should just mask these sockets and use a traditional conf file as we used to lol | 15:23 |
jrosser | to make a small change only? | 15:23 |
noonedeadpunk | yeah.. | 15:24 |
noonedeadpunk | as otherwise we kind of need to override the sockets and services provided by the libvirtd package... | 15:24 |
noonedeadpunk | and it feels like it's applicable only to focal at the moment... | 15:25 |
noonedeadpunk | but yeah, probably we should move further here.... | 15:26 |
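(The "just mask these sockets" option noonedeadpunk mentions would be roughly this — socket unit names as shipped by recent libvirt packages, so treat them as an assumption:)

```bash
# Disable systemd socket activation so libvirtd manages its own
# listeners from libvirtd.conf, as it did traditionally.
systemctl stop libvirtd.socket libvirtd-ro.socket libvirtd-admin.socket
systemctl mask libvirtd.socket libvirtd-ro.socket libvirtd-admin.socket
systemctl restart libvirtd.service
```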
spatel | random question, if i enable two-factor authentication on my GitHub, does that require me to type an MFA code each time i do a pull/push/commit? | 15:54 |
openstackgerrit | Dmitriy Rabotyagov (noonedeadpunk) proposed openstack/ansible-role-systemd_service master: Added bunch of systemd_*_targets variables https://review.opendev.org/763211 | 16:02 |
*** yasemind34 has quit IRC | 16:10 | |
openstackgerrit | Dmitriy Rabotyagov (noonedeadpunk) proposed openstack/ansible-role-systemd_service master: Add possibility to configure systemd sockets https://review.opendev.org/763194 | 16:12 |
*** macz_ has joined #openstack-ansible | 16:13 | |
openstackgerrit | Dmitriy Rabotyagov (noonedeadpunk) proposed openstack/ansible-role-systemd_service master: Add possibility to configure systemd sockets https://review.opendev.org/763194 | 16:24 |
openstackgerrit | Merged openstack/openstack-ansible-os_manila master: Add Manila key generation and distribution https://review.opendev.org/705019 | 16:26 |
*** cshen has joined #openstack-ansible | 16:49 | |
*** cshen has quit IRC | 16:53 | |
*** klamath_atx has quit IRC | 16:54 | |
openstackgerrit | Dmitriy Rabotyagov (noonedeadpunk) proposed openstack/openstack-ansible-os_nova master: Use systemd sockets for libvirt https://review.opendev.org/763216 | 16:58 |
openstackgerrit | Dmitriy Rabotyagov (noonedeadpunk) proposed openstack/openstack-ansible-os_nova master: Use systemd sockets for libvirt https://review.opendev.org/763216 | 17:00 |
openstackgerrit | Dmitriy Rabotyagov (noonedeadpunk) proposed openstack/openstack-ansible-os_neutron master: Return calico to voting https://review.opendev.org/702657 | 17:11 |
openstackgerrit | James Denton proposed openstack/openstack-ansible-os_neutron master: Test OVS/OVN deployments on CentOS 8 https://review.opendev.org/762661 | 17:19 |
spatel | Do you guys sync all RabbitMQ queues in HA, or just a handful for performance? | 17:35 |
spatel | I am thinking of removing notification.* from the HA group | 17:35 |
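(A sketch of that policy change with rabbitmqctl — the vhost and policy name are placeholders; the negative lookahead mirrors everything except notifications.* queues:)

```bash
rabbitmqctl set_policy -p /nova --apply-to queues HA \
  '^(?!notifications\.).*' '{"ha-mode":"all"}'
```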
openstackgerrit | James Denton proposed openstack/openstack-ansible-os_neutron master: Test OVS/OVN deployments on CentOS 8 https://review.opendev.org/762661 | 17:53 |
*** cshen has joined #openstack-ansible | 17:55 | |
*** mmercer has joined #openstack-ansible | 17:56 | |
*** cshen has quit IRC | 17:59 | |
*** miloa has joined #openstack-ansible | 18:05 | |
*** miloa has quit IRC | 18:06 | |
nurdie | I just tried to create an image out of an ephemeral VM and now the disk seems to be gone? I can't even find it on Ceph. Anyone have any advice? | 18:14 |
*** cshen has joined #openstack-ansible | 18:21 | |
*** kleini has quit IRC | 18:30 | |
*** kleini has joined #openstack-ansible | 18:31 | |
*** cshen has quit IRC | 18:32 | |
-openstackstatus- NOTICE: The Gerrit service at review.opendev.org is being restarted quickly as a pre-upgrade sanity check, estimated downtime is less than 5 minutes. | 18:35 | |
*** kleini has quit IRC | 18:35 | |
*** kleini has joined #openstack-ansible | 18:36 | |
*** kleini has quit IRC | 18:37 | |
*** cshen has joined #openstack-ansible | 18:39 | |
*** kleini has joined #openstack-ansible | 18:39 | |
*** rpittau is now known as rpittau|afk | 18:39 | |
*** cshen has quit IRC | 18:45 | |
*** cshen has joined #openstack-ansible | 19:02 | |
*** kleini has quit IRC | 19:02 | |
*** openstackgerrit has quit IRC | 19:02 | |
*** andrewbonney has quit IRC | 19:04 | |
*** kleini has joined #openstack-ansible | 19:16 | |
*** kleini has quit IRC | 19:19 | |
*** cshen has quit IRC | 19:34 | |
*** kleini has joined #openstack-ansible | 19:42 | |
noonedeadpunk | jrosser: seems we have issues with parallel git clone | 19:56 |
noonedeadpunk | https://zuul.opendev.org/t/openstack/build/6fc2de8bf52242c4acc8aafa1759a021/log/job-output.txt#3393 | 19:57 |
noonedeadpunk | not sure why we run it in gates at all, but good that we caught it | 19:58 |
*** cshen has joined #openstack-ansible | 19:59 | |
noonedeadpunk | "git reset --force --hard 7b7f20c636c12a02c0452c977e7952716fc9edcb\\n stderr: 'error: unknown option `force'" | 20:00 |
noonedeadpunk | I'm also not sure what "--force" is | 20:00 |
*** gyee has joined #openstack-ansible | 20:10 | |
ThiagoCMC | Quick question: I deployed a new OSA Ussuri today with 2 haproxy_hosts, but the VIP is still on the first Controller. I was expecting the IP to be on only one of the two HAProxy hosts, am I wrong? | 20:32 |
ThiagoCMC | The haproxy daemon was installed in all controllers anyway, is this expected? | 20:35 |
jrosser | that is not right | 20:50 |
jrosser | well, also neutron l3 node requires haproxy, so that would be the controllers | 20:51 |
jrosser | ThiagoCMC: is the haproxy service running on the controller? we may need to inhibit that | 20:53 |
*** openstackgerrit has joined #openstack-ansible | 20:53 | |
openstackgerrit | Merged openstack/openstack-ansible master: Unfreeze roles https://review.opendev.org/762185 | 20:53 |
ThiagoCMC | jrosser, my L3 is at the compute nodes, not at the controllers | 20:55 |
jrosser | hmm would be interesting to find out how haproxy got installed | 20:56 |
ThiagoCMC | Cool | 20:57 |
jrosser | you can cross reference the apt log with the ansible log | 20:57 |
ThiagoCMC | Right, I'll do another fresh install, with clean facts, just to make sure. And grab the logs right after haproxy playbook. | 20:58 |
jrosser | noonedeadpunk: i think that there's an error before the 'force' thing: git reset --hard 7b7f20c636c12a02c0452c977e7952716fc9edcb\n stderr: 'fatal: Could not parse object '7b7f20c636c12a02c0452c977e7952716fc9edcb' | 21:36 |
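(Both failure modes are easy to reproduce by hand — git reset has no --force option, and resetting to an object that was never fetched fails exactly like that:)

```bash
git reset --force --hard 7b7f20c               # error: unknown option `force'
git reset --hard 7b7f20c                       # fatal: Could not parse object ... (commit not fetched yet)
git fetch origin && git reset --hard 7b7f20c   # works once the object exists locally
```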
spatel | as soon as i added the veth for br-lbaas my controller nodes got flooded with these packets - http://paste.openstack.org/show/800178/ | 21:52 |
spatel | any idea what this IPv6 traffic is and where it's coming from | 21:52 |
jrosser | you have some interface somewhere with ipv6 enabled | 21:59 |
jrosser | what you see there is just "how ipv6 works", it's the link local addresses (fe80::...) looking for something that is a router | 22:00 |
spatel | i do have ipv6 enabled but i'm not using it | 22:00 |
spatel | as soon as i rebooted all infra nodes the flood stopped | 22:01 |
spatel | looks like adding the veth pair to connect br-lbaas with br-vlan created a loop or something (very odd) - i do have the same setup in another datacenter | 22:02 |
spatel | jrosser: do you think i should completely disable IPv6? | 22:03 |
jrosser | the rate there looks surprising | 22:04 |
spatel | the flooding was so bad i couldn't even ssh to the box, and traffic spiked on the network to over 100mbps | 22:05 |
jrosser | you can tell what's generating them because they're eui-64 addresses | 22:06 |
jrosser | that will give you the mac addresses to figure out where it's coming from | 22:06 |
spatel | let me enable the veth again and see if it spirals into a loop | 22:06 |
spatel | anyway i would like to disable IPv6 if i am not using it. | 22:07 |
spatel | i don't want it to create any issues in the future | 22:07 |
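(Disabling IPv6 host-wide is a pair of sysctls — persist them under /etc/sysctl.d/ to survive reboots:)

```bash
sysctl -w net.ipv6.conf.all.disable_ipv6=1
sysctl -w net.ipv6.conf.default.disable_ipv6=1
```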
spatel | As soon as i enable the veth on two infra nodes this flood starts; looks like it's getting into a loop | 22:18 |
spatel | those MAC addresses are part of the newly created v-br-lbaas pair | 22:21 |
spatel | This is the script i am running to add veth - http://paste.openstack.org/show/800180/ | 22:22 |
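(The pasted script presumably does something like the sketch below — interface names from the log, bridge names assumed. The hazard is that if br-vlan and br-lbaas are already connected via another path, patching them together forwards broadcast/RA traffic in a circle, which matches the flood described above:)

```bash
# Create a veth pair and plug one end into each bridge.
ip link add v-br-lbaas type veth peer name v-br-vlan
ip link set v-br-lbaas master br-lbaas
ip link set v-br-vlan master br-vlan
ip link set v-br-lbaas up
ip link set v-br-vlan up
```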
ThiagoCMC | jrosser, I just tried again with a fresh deployment; HAProxy and Keepalived are being installed on all controllers, despite the fact that haproxy_hosts is configured and points to different machines. Any tips on where the logs are that I need to look at? | 22:23 |
spatel | the same script runs fine on centos7 with ipv6 enabled | 22:23 |
spatel | looks like a CentOS 8 issue | 22:23 |
jrosser | ThiagoCMC: you could manually run the haproxy install playbook with --list-hosts to see what it thinks it is targeting | 22:23 |
jrosser | if the controllers are listed then it's an inventory / openstack_user_config problem | 22:24 |
ThiagoCMC | Yep, the controllers are listed! | 22:24 |
jrosser | i have a deployment with separate haproxy nodes so we can compare notes tomorrow if you can't find why | 22:24 |
*** rh-jlabarre has joined #openstack-ansible | 22:25 | |
ThiagoCMC | And this is a fresh install (from scratch, no previous Inventory/facts under /etc/openstack_deploy) | 22:25 |
jrosser | that means that the controllers are in the haproxy ansible group https://github.com/openstack/openstack-ansible/blob/master/playbooks/haproxy-install.yml#L23 | 22:25 |
jrosser | so inventory_manage blah blah and try to find why that is | 22:26 |
*** CeeMac has quit IRC | 22:26 | |
*** johnsom has quit IRC | 22:27 | |
*** tinwood_ has joined #openstack-ansible | 22:28 | |
*** rh-jelabarre has quit IRC | 22:28 | |
*** tinwood has quit IRC | 22:28 | |
ThiagoCMC | Hmm... Ok... | 22:28 |
*** johnsom has joined #openstack-ansible | 22:29 | |
*** CeeMac has joined #openstack-ansible | 22:30 | |
spatel | jrosser: may be this is what going on - https://bugs.launchpad.net/neutron/+bug/1575225 | 22:32 |
openstack | spatel: Error: Could not gather data from Launchpad for bug #1575225 (https://launchpad.net/bugs/1575225). The error has been logged | 22:32 |
ThiagoCMC | jrosser, found the problem: it was extra files under my /etc/openstack_deploy subdir declaring it. LOL - Sorry about the buzz! | 22:37 |
admin0 | :) | 22:37 |
ThiagoCMC | :-P | 22:38 |
ThiagoCMC | Interesting... I don't know how to remove hosts from a deployed environment... Just gonna start from scratch, again. lol | 22:38 |
jrosser | scripts/inventory_manage | 22:39 |
admin0 | use inventory-manage -r to delete from inventory, clean up the yaml and delete the containers | 22:39 |
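(A sketch of that cleanup — the script name follows the OSA tree, and <host_name> is a placeholder:)

```bash
/opt/openstack-ansible/scripts/inventory-manage.py -l              # list the current inventory
/opt/openstack-ansible/scripts/inventory-manage.py -r <host_name>  # remove a host entry
```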
ThiagoCMC | Amazing! Thanks! | 22:39 |
*** cshen has quit IRC | 22:49 | |
spatel | jrosser: i think i found my issue, let me try that and see if that fix flooding. | 23:03 |
spatel | problem solved, my bad - i created a loop with the veth pair and didn't realize it.. :) | 23:07 |
spatel | Do you keep IPv6 enabled by default on all servers? | 23:08 |
*** spatel has quit IRC | 23:18 |