*** DuncanT has quit IRC | 00:15 | |
*** DuncanT has joined #openstack-freezer | 00:16 | |
*** arunb has joined #openstack-freezer | 01:36 | |
*** EinstCrazy has quit IRC | 01:50 | |
*** arunb has quit IRC | 02:33 | |
*** EinstCrazy has joined #openstack-freezer | 03:51 | |
*** EinstCrazy has quit IRC | 03:58 | |
*** frescof_ has joined #openstack-freezer | 05:41 | |
*** DuncanT_ has joined #openstack-freezer | 05:42 | |
*** jokke__ has joined #openstack-freezer | 05:45 | |
*** DuncanT has quit IRC | 05:49 | |
*** frescof has quit IRC | 05:49 | |
*** jokke_ has quit IRC | 05:49 | |
*** DuncanT_ is now known as DuncanT | 05:51 | |
*** EinstCrazy has joined #openstack-freezer | 06:56 | |
*** EinstCrazy has quit IRC | 07:03 | |
*** jokke__ is now known as jokke_ | 07:56 | |
*** samuelBartel has joined #openstack-freezer | 08:45 | |
*** openstackgerrit has quit IRC | 10:02 | |
*** EinstCrazy has joined #openstack-freezer | 10:04 | |
*** daemontool_ has joined #openstack-freezer | 10:25 | |
*** daemontool has quit IRC | 10:28 | |
*** jokke_ has quit IRC | 10:34 | |
*** jokke_ has joined #openstack-freezer | 10:34 | |
*** reldan has joined #openstack-freezer | 11:00 | |
*** vannif has joined #openstack-freezer | 11:08 | |
*** reldan has quit IRC | 11:20 | |
*** reldan has joined #openstack-freezer | 11:29 | |
*** reldan has quit IRC | 11:46 | |
*** openstackgerrit_ has joined #openstack-freezer | 11:54 | |
*** openstackgerrit_ is now known as openstackgerrit | 11:55 | |
*** openstackgerrit has quit IRC | 11:59 | |
*** reldan has joined #openstack-freezer | 12:02 | |
*** openstackgerrit has joined #openstack-freezer | 12:08 | |
reldan | Hi guys, what is new? | 12:16 |
*** vannif has quit IRC | 12:21 | |
*** EmilDi has joined #openstack-freezer | 13:10 | |
daemontool_ | reldan, hi | 14:00 |
daemontool_ | all good | 14:01 |
daemontool_ | nothing new | 14:01 |
daemontool_ | there was a critical bug | 14:01 |
daemontool_ | that prevented executing restores from swift | 14:01 |
reldan | With which swiftclient version? | 14:01 |
daemontool_ | 2.7.0 | 14:01 |
daemontool_ | now it is fixed | 14:01 |
reldan | Do we have something in review? | 14:01 |
reldan | Or probably we should fix something soon? | 14:02 |
daemontool_ | it's already fixed | 14:02 |
daemontool_ | all good | 14:02 |
daemontool_ | it took 1 day and something | 14:02 |
daemontool_ | to troubleshoot and fix it | 14:02 |
daemontool_ | the fix was easy actually | 14:03 |
reldan | What is our priority now? | 14:03 |
daemontool_ | for the web ui | 14:03 |
daemontool_ | we need to do branching and pypi release | 14:03 |
daemontool_ | I think | 14:04 |
daemontool_ | we need to work on improving the volumes and vms backups | 14:04 |
daemontool_ | and the block based incremental | 14:04 |
daemontool_ | there's an ongoing conversation with SamYaple about that | 14:04 |
daemontool_ | mostly for VMs backup | 14:04 |
daemontool_ | I think we have to provide multiple options anyway as we always do | 14:05 |
daemontool_ | so we probably need to use rsync-based block incrementals even for files anyway | 14:05 |
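[Editor's note: for readers unfamiliar with the rsync-based block incrementals discussed here, below is a minimal illustrative Python sketch of the core idea — per-block weak checksums so only changed blocks are shipped. The names and block size are hypothetical; this is not Freezer code, and a real rsync implementation also rolls the checksum byte-by-byte and confirms matches with a strong hash.]

```python
import zlib

BLOCK_SIZE = 4096  # hypothetical block size

def block_signatures(data: bytes):
    """Weak per-block checksums (adler32), as rsync-style tools use for matching."""
    return [zlib.adler32(data[i:i + BLOCK_SIZE])
            for i in range(0, len(data), BLOCK_SIZE)]

def changed_blocks(old: bytes, new: bytes):
    """Yield (offset, block) for blocks of `new` whose checksum differs from `old`."""
    old_sigs = block_signatures(old)
    for n, i in enumerate(range(0, len(new), BLOCK_SIZE)):
        block = new[i:i + BLOCK_SIZE]
        if n >= len(old_sigs) or zlib.adler32(block) != old_sigs[n]:
            yield i, block

if __name__ == "__main__":
    v1 = b"a" * 10000
    v2 = b"a" * 5000 + b"b" * 5000
    delta = list(changed_blocks(v1, v2))
    print("%d of %d blocks changed" % (len(delta), len(v2) // BLOCK_SIZE + 1))
```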
daemontool_ | what we need to do now is get the devstack plugin | 14:05 |
reldan | I see - how about tenant backup? Do we have a document? | 14:06 |
daemontool_ | fully working and have all our integration tests executing automatically | 14:06 |
daemontool_ | that is related to volume and vm backups too | 14:06 |
reldan | Agree | 14:06 |
daemontool_ | we have the bp | 14:06 |
daemontool_ | from last time | 14:06 |
daemontool_ | also EinstCrazy and another colleague of his are interested | 14:06 |
daemontool_ | on vm and volume backups | 14:06 |
daemontool_ | so if we agree with the bp for tenant based | 14:07 |
daemontool_ | we can start splitting tasks | 14:07 |
daemontool_ | and even work in parallel | 14:07 |
reldan | Very good! | 14:07 |
reldan | And what about elasticsearch? Do we have a solution? | 14:07 |
daemontool_ | no solution | 14:07 |
daemontool_ | I mean | 14:07 |
daemontool_ | no alternative solution | 14:07 |
daemontool_ | we also need to support mysql as a backend, I suppose | 14:08 |
daemontool_ | so we can use the service shared db | 14:08 |
reldan | We had two options - a plugin for es snapshots and distributed backup of every instance of es | 14:08 |
daemontool_ | do you mean es as backend db for the api? | 14:08 |
daemontool_ | ah ok | 14:09 |
daemontool_ | you are referring to es backup | 14:09 |
reldan | yes | 14:09 |
daemontool_ | I'd like it if someone could test | 14:09 |
reldan | I remember that it was in our agenda | 14:09 |
daemontool_ | if the current lvm + job session backup works | 14:09 |
daemontool_ | yes | 14:09 |
daemontool_ | I think Slashme could test that | 14:10 |
daemontool_ | as he has some infrastructure available | 14:11 |
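[Editor's note: the "plugin for es snapshots" option reldan mentions above would build on elasticsearch's native snapshot REST API. A minimal sketch using Python requests, assuming a local es endpoint and a repository path already whitelisted under path.repo; the repository and snapshot names are hypothetical.]

```python
import requests

ES = "http://localhost:9200"  # assumed elasticsearch endpoint

# Register a shared-filesystem snapshot repository (the location must be
# listed under path.repo in elasticsearch.yml).
requests.put(ES + "/_snapshot/freezer_repo", json={
    "type": "fs",
    "settings": {"location": "/var/backups/elasticsearch"},
}).raise_for_status()

# Take a snapshot and block until it completes.
requests.put(ES + "/_snapshot/freezer_repo/snap_1",
             params={"wait_for_completion": "true"}).raise_for_status()
```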
daemontool_ | reldan, when you can please have a conversation with SamYaple, he is proposing an alternative approach for block based vms backup | 14:11 |
reldan | I see, thank you. So nothing very urgent at the moment. But let me know if anything comes up | 14:11 |
reldan | Yes, sure | 14:12 |
daemontool_ | I think the integration with devstack | 14:12 |
daemontool_ | is quite urgent | 14:12 |
daemontool_ | there are at least 2 big companies that will be deploying freezer in production within the next 4 weeks | 14:12 |
daemontool_ | the sooner we have the integration tests executed automatically by the dsvm gate job, the better | 14:12 |
daemontool_ | reldan, I think we need to move forward with the rsync based incrementals also | 14:13 |
reldan | Who is doing integration with devstack right now? | 14:13 |
daemontool_ | me and vannif, but mostly me | 14:13 |
daemontool_ | I'll restart doing that as soon as the branching and release of the web ui are ok | 14:14 |
reldan | I see, do you know how to split it? | 14:14 |
daemontool_ | it shouldn't take more than a couple of hours | 14:14 |
daemontool_ | split what? | 14:14 |
reldan | Split devstack integration. Or I can try checking rsync? | 14:14 |
daemontool_ | I think vannif can finish the devstack integration, as he has been working on that since the beginning | 14:16 |
daemontool_ | I think it would be good if you start the tenant based backup | 14:16 |
reldan | I see, let me check rsync then | 14:16 |
reldan | I can | 14:16 |
reldan | yes, let me check the blueprints first then | 14:16 |
daemontool_ | ok | 14:16 |
reldan | Thank you! | 14:16 |
daemontool_ | https://blueprints.launchpad.net/freezer/+spec/tenant-backup | 14:18 |
daemontool_ | also frescof_ has been working on an interesting feature for disaster recovery | 14:18 |
*** reldan has quit IRC | 14:20 | |
*** vannif has joined #openstack-freezer | 14:42 | |
*** daemontool has joined #openstack-freezer | 14:47 | |
*** daemontool_ has quit IRC | 14:47 | |
*** daemontool has quit IRC | 14:53 | |
*** daemontool has joined #openstack-freezer | 14:59 | |
*** daemontool_ has joined #openstack-freezer | 15:52 | |
*** daemontool has quit IRC | 15:54 | |
*** dschroeder has joined #openstack-freezer | 15:59 | |
*** dschroeder has quit IRC | 16:03 | |
*** dschroeder has joined #openstack-freezer | 16:04 | |
*** dschroeder has quit IRC | 16:04 | |
*** pp767df has joined #openstack-freezer | 16:12 | |
*** pp767df has quit IRC | 16:13 | |
*** reldan has joined #openstack-freezer | 16:13 | |
daemontool_ | please review https://review.openstack.org/#/c/270251/ and https://review.openstack.org/#/c/270315/ | 16:14 |
daemontool_ | vannif, can you review this please? https://review.openstack.org/#/c/266552/ | 16:14 |
*** Slashme has quit IRC | 16:18 | |
*** epheo has quit IRC | 16:19 | |
reldan | SamYaple: Hi, how are you? My name is Eldar! | 16:20 |
reldan | SamYaple: I was looking for any documentation for Ekko, but unfortunately the link in the readme is dead. Do you have any other link? | 16:21 |
*** EmilDi has quit IRC | 16:21 | |
daemontool_ | EinstCrazy, did you get the chance to take a look at https://blueprints.launchpad.net/freezer/+spec/tenant-backup ? | 16:23 |
daemontool_ | we should start working on that based on some agreement, if you are still interested | 16:24 |
*** dschroeder has joined #openstack-freezer | 16:31 | |
*** daemontool has joined #openstack-freezer | 16:42 | |
openstackgerrit | Merged openstack/freezer: Change Freezer repo_url from stackforge to openstack https://review.openstack.org/270251 | 16:43 |
*** daemontool_ has quit IRC | 16:44 | |
SamYaple | reldan: not as of yet. the ekko team is meeting feb 9th alongside the kolla midcycle (because our teams overlap) and documentation is the top priority | 16:51 |
reldan | SamYaple: Thank you! I actually don't need full documentation; just 5-6 sentences with a description and goal would be a good start. | 16:53 |
SamYaple | reldan: that _should_ be there; if not I'll add something quickly | 16:54 |
reldan | SamYaple: https://github.com/openstack/ekko | 16:55 |
SamYaple | reldan: that's the one | 16:56 |
reldan | SamYaple: Documentation link is dead for me. | 16:56 |
SamYaple | we also have #openstack-ekko | 16:56 |
SamYaple | yeah, we can't actually publish to docs.openstack.org yet | 16:56 |
SamYaple | that was from the cookiecutter repo file | 16:56 |
daemontool | ok | 16:56 |
daemontool | SamYaple, did you have a conversation with your team | 16:56 |
reldan | SamYaple: Thanks! | 16:57 |
daemontool | about whether you guys want to go forward alone or converge? | 16:57 |
SamYaple | daemontool: that decision won't be made for some time | 16:57 |
SamYaple | we'll have a lot more info after the midcycle | 16:57 |
SamYaple | but I think what we need from freezer is that plugin architecture to even approach the issue | 16:57 |
reldan | SamYaple: Could you please give me a bit more information on what you mean by the plugin architecture? I was on vacation for a couple of weeks and it seems I missed something. | 16:59 |
*** EinstCrazy has quit IRC | 16:59 | |
SamYaple | reldan: daemontool had mentioned a plugin architecture for the freezer-agent that would have different types (database, file, ekko, etc) | 17:00 |
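[Editor's note: the plugin architecture under discussion is not designed yet; the sketch below only illustrates the general shape such a freezer-agent plugin type could take. All class and method names are hypothetical.]

```python
import abc

class BackupPlugin(abc.ABC):
    """Hypothetical interface a freezer-agent backup type could implement."""

    @abc.abstractmethod
    def backup(self, source: str) -> bytes:
        """Produce the backup payload for `source`."""

    @abc.abstractmethod
    def restore(self, payload: bytes, destination: str) -> None:
        """Restore a previously produced payload to `destination`."""

class FilePlugin(BackupPlugin):
    """Trivial file-type plugin: the payload is just the file contents."""

    def backup(self, source):
        with open(source, "rb") as f:
            return f.read()

    def restore(self, payload, destination):
        with open(destination, "wb") as f:
            f.write(payload)

# A database plugin would dump the db; an ekko plugin would implement the
# same interface but read block-level data from the hypervisor.
```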
daemontool | SamYaple, honestly, I'm still trying to figure out if the solution you are proposing makes sense or not, with all due respect | 17:01 |
daemontool | SamYaple, I'm not sure | 17:01 |
daemontool | how to solve the inconsistency between the nova db and the changes in the hypervisor | 17:02 |
daemontool | that's important | 17:02 |
SamYaple | daemontool: there is no inconsistency at all | 17:02 |
daemontool | ok | 17:02 |
daemontool | so | 17:02 |
reldan | SamYaple: I see now. But if I understand it correctly, you have an additional agent per compute node. So it should provide some API, and freezer should be able to support a plugin that connects to this API, is that right? | 17:02 |
SamYaple | reldan: well the idea was for ekko to reuse freezer-agent|scheduler but run it on the compute node | 17:03 |
SamYaple | so freezer-agent would have ekko plugin | 17:03 |
*** EinstCrazy has joined #openstack-freezer | 17:04 | |
reldan | SamYaple: I see. So from freezer you need 1) Remote code invocation 2) Scheduling 3) Status of remote invocation (success, fail). Am I correct? | 17:05 |
reldan | 4) Plugin architecture for injecting code to work with the hypervisor | 17:06 |
SamYaple | I think it will be more complicated than that, truthfully, reldan | 17:08 |
SamYaple | but I don't have the working code to show why it is | 17:09 |
SamYaple | for example, to do what I'm doing I need a retention database (redis or similar) so I can do retention on existing data | 17:09 |
SamYaple | The scheduling should probably be reusable, but I haven't actually looked at freezer's scheduling code | 17:10 |
reldan | SamYaple: I see now. Thank you for the explanation. | 17:11 |
daemontool | SamYaple, what I was mentioning before was something like: a user executes a backup of a VM in one region, then wants to restore the same VM in another region | 17:12 |
daemontool | we try to follow this kind of approach with freezer | 17:12 |
daemontool | what I'm struggling with is that to use ekko we should find a way | 17:13 |
daemontool | to extract all the metadata from the nova db (preferably from the nova api) | 17:13 |
daemontool | include that data in the backup, along with the vm data | 17:13 |
daemontool | and store it in a media storage (i.e. local fs, ssh or swift, the ones currently supported by freezer) | 17:14 |
daemontool | then when we want to restore | 17:14 |
daemontool | we can recreate the metadata in the nova db (preferably using the nova api) | 17:14 |
daemontool | and restore the vm data interacting with the hypervisor | 17:14 |
daemontool | if we could do this, it would be a really cool feature | 17:15 |
daemontool | do you agree? | 17:15 |
SamYaple | daemontool: no. the "restore" in that scenario is just launching a new instance based on a glance image | 17:15 |
daemontool | ok | 17:16 |
daemontool | reldan, so to do that we probably need to add glance as one more media storage? | 17:17 |
daemontool | SamYaple, I'm asking this to understand what we concretely need to do to provide the plugin architecture to you | 17:18 |
daemontool | then you and your team can take the direction you want | 17:18 |
reldan | daemontool: yes probably we can implement it with the new plugin architecture. | 17:18 |
SamYaple | daemontool: I'm just as uncertain about what that means | 17:19 |
SamYaple | you mentioned plans for a plugin architecture already in the works | 17:19 |
daemontool | so if you want to execute your backup-agent | 17:19 |
daemontool | from the scheduler | 17:19 |
daemontool | you can do it already | 17:19 |
daemontool | but | 17:19 |
daemontool | just to give you an example | 17:19 |
daemontool | we support multiple media storage | 17:19 |
daemontool | and also in parallel | 17:20 |
SamYaple | well storage is different. storage would never be in glance | 17:20 |
daemontool | so you can upload your backed up data simultaneously to a remote ssh node + swift + local fs + another swift | 17:20 |
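[Editor's note: a minimal sketch of the parallel multi-storage upload daemontool describes, using a thread pool. The storage interface (a `put` method) is assumed for illustration and is not Freezer's actual storage API.]

```python
from concurrent.futures import ThreadPoolExecutor

class LocalStorage:
    """Stand-in storage backend; real ones would wrap swift, ssh, etc."""

    def __init__(self, path):
        self.path = path

    def put(self, payload):
        with open(self.path, "wb") as f:
            f.write(payload)

def upload_to_all(payload, storages):
    """Upload the same backup payload to every backend in parallel."""
    with ThreadPoolExecutor(max_workers=len(storages)) as pool:
        futures = [pool.submit(s.put, payload) for s in storages]
        for f in futures:
            f.result()  # re-raise the first upload failure, if any

upload_to_all(b"backup bytes",
              [LocalStorage("/tmp/copy-a.bak"), LocalStorage("/tmp/copy-b.bak")])
```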
daemontool | glance in most cases will store it on Swift | 17:20 |
daemontool | but | 17:20 |
daemontool | I'd like to understand | 17:20 |
SamYaple | restore would reassemble the bits from storage and send it to glance (regardless of what glance is backed by) | 17:20 |
daemontool | if ekko can leverage that backup data upload to multiple media storage | 17:21 |
daemontool | in parallel | 17:21 |
daemontool | ok | 17:21 |
daemontool | which storage do you mean there? | 17:21 |
daemontool | swift? | 17:21 |
SamYaple | storage should be object storage: s3/swift/radosgw, but for small-scale testing local filesystem storage is also an option | 17:22 |
daemontool | so what happens if swift or keystone are not available for some reason... we need to provide a solution that enables the user to restore the data if other services fail | 17:22 |
daemontool | SamYaple, ok | 17:22 |
SamYaple | daemontool: highly disagree | 17:22 |
SamYaple | nothing works when keystone is down | 17:22 |
SamYaple | data is gone when swift is gone | 17:22 |
daemontool | well that's the issue we also have to solve | 17:23 |
SamYaple | from a purely recovery standpoint, it is possible to reassemble all of the backups with ekko outside of the openstack environment | 17:23 |
SamYaple | but that's full loss of the openstack environment | 17:23 |
daemontool | ok | 17:23 |
daemontool | let's say a user redeploys a new openstack instance in an alternative location | 17:24 |
daemontool | and wants to restore the data | 17:24 |
daemontool | we'd like to provide something that allows them to do that | 17:24 |
daemontool | so | 17:24 |
SamYaple | it's possible to have a tool that can read all of the objects in object storage and rebuild the ekko database, yes | 17:24 |
daemontool | for the VMs | 17:24 |
SamYaple | but it's expensive to do so | 17:24 |
daemontool | ok | 17:24 |
SamYaple | lots of queries to objects | 17:24 |
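[Editor's note: a sketch of the expensive rebuild SamYaple describes — listing every object in the backup container and reconstructing an index from object names. It uses python-swiftclient's Connection/get_container, which exist as shown; the container name, "set/segment" naming convention, and auth details are assumptions, not Ekko's actual layout.]

```python
from swiftclient.client import Connection

# Assumed credentials; a real recovery tool would read these from config.
conn = Connection(authurl="http://keystone:5000/v3",
                  user="admin", key="secret", auth_version="3",
                  os_options={"project_name": "admin",
                              "user_domain_name": "Default",
                              "project_domain_name": "Default"})

# One full container listing, then group object names back into backup sets;
# the cost is this listing plus potentially one GET per object for metadata.
_, objects = conn.get_container("ekko_backups", full_listing=True)
index = {}
for obj in objects:
    backup_set = obj["name"].split("/", 1)[0]  # assumed naming convention
    index.setdefault(backup_set, []).append(obj["name"])
print("rebuilt index for %d backup sets" % len(index))
```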
daemontool | expensive is a different thing | 17:24 |
daemontool | so for VMs | 17:25 |
daemontool | when you restore | 17:25 |
daemontool | on another openstack deployment | 17:25 |
daemontool | you also need to restore the metadata of that vm, | 17:25 |
daemontool | do you agree with that? | 17:25 |
SamYaple | no | 17:25 |
SamYaple | that seems to be a non-openstack way of doing it | 17:25 |
SamYaple | an instance can only ever exist in one spot (unique uuid) | 17:26 |
SamYaple | restoring it to another location is not in the spirit of openstack in my opinion | 17:26 |
daemontool | what? restoring a vm that belongs to the same tenant with the same network settings onto another openstack platform? | 17:26 |
*** samuelBartel has quit IRC | 17:26 | |
daemontool | that is a non-openstack way? :) | 17:26 |
daemontool | lol | 17:26 |
daemontool | but anyway | 17:27 |
daemontool | did you get the problem we're solving? | 17:27 |
*** pbourke has joined #openstack-freezer | 17:27 | |
daemontool | to some degree we already offer that | 17:27 |
SamYaple | restoring an identical vm with the same metadata? yes | 17:27 |
SamYaple | at that point it could exist in two places at once | 17:27 |
SamYaple | you can't share a lot of things (depending on your definition of "openstack platform") like floating ips | 17:28 |
SamYaple | you don't want to duplicate mac addresses | 17:28 |
daemontool | I've seen many environments using anycast for HA, but that's not what we are discussing here | 17:29 |
SamYaple | launching a _new_ instance based on a glance image you have restored, however, is very much an openstack way of doing it | 17:29 |
daemontool | SamYaple, I agree | 17:29 |
daemontool | so your answer after all this is that you do not need to back up any vm metadata | 17:29 |
daemontool | as the user just restores it from glance as an image | 17:29 |
daemontool | right? | 17:29 |
daemontool | it's ok if that's it; I'm just trying to understand the solution you are proposing | 17:30 |
SamYaple | The instance uuid would be tracked in the ekko database _not_ in the actual backup manifest | 17:30 |
SamYaple | beyond the uuid, I'm not sure anything more is needed | 17:30 |
SamYaple | restoring back to a running instance from which the backup was _taken_ should be possible | 17:31 |
daemontool | ok, so do you have a plan, for instance, if you want to restore a vm and that vm was destroyed? | 17:31 |
SamYaple | but assuming that instance is fully gone, the only plan currently is a glance image | 17:31 |
SamYaple | restore from new glance image | 17:31 |
daemontool | ok | 17:31 |
SamYaple | there is no guarantee you can get the same ips anyway | 17:31 |
SamYaple | neutron would have released those | 17:32 |
daemontool | I agree | 17:32 |
daemontool | yes | 17:32 |
SamYaple | glance image or cinder volume (so you can just mount it to an existing instance for more manual recovery) | 17:32 |
SamYaple | but direct to cinder isn't needed since you can do glance->cinder | 17:32 |
daemontool | ok | 17:34 |
*** reldan has quit IRC | 17:35 | |
daemontool | SamYaple, one sec; when reldan is back I'll ask you a couple more questions | 17:38 |
SamYaple | ok | 17:41 |
*** arunb has joined #openstack-freezer | 17:50 | |
daemontool | SamYaple, I think the day is finished for reldan | 17:56 |
daemontool | one question | 17:56 |
SamYaple | ok | 17:56 |
daemontool | how do you plan to upload the data to glance? | 17:56 |
daemontool | using the backup-agent on the compute node? | 17:56 |
SamYaple | no, it would most likely be a separate restore-type agent | 17:56 |
SamYaple | might combine it with the retention agent and call it something different | 17:57 |
daemontool | what would the workflow look like? | 17:57 |
SamYaple | for restore? | 17:57 |
daemontool | yes | 17:58 |
daemontool | if you need to think about it more, np | 17:58 |
*** epheo has joined #openstack-freezer | 17:58 | |
SamYaple | api call to restore backupset + incremental (possibly a name for the restored glance image), send info to restore agent, restore agent pulls down data and sends to glance | 17:58 |
*** Slashme has joined #openstack-freezer | 17:59 | |
SamYaple | pretty basic | 17:59 |
SamYaple | restore agent would decompress/decrypt | 17:59 |
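[Editor's note: a hedged sketch of the final "send to glance" step of the restore workflow above, using keystoneauth1 and python-glanceclient's v2 images API. Credentials, the image name, and the reassembled-file path are placeholders, and the overall flow is the editor's reading of SamYaple's description, not Ekko code.]

```python
from keystoneauth1 import identity, session
from glanceclient import Client

# Assumed credentials/endpoints; substitute real values.
auth = identity.Password(auth_url="http://keystone:5000/v3",
                         username="admin", password="secret",
                         project_name="admin",
                         user_domain_id="default",
                         project_domain_id="default")
glance = Client("2", session=session.Session(auth=auth))

# Create an empty image record, then stream the reassembled, decompressed,
# decrypted backup data into it.
image = glance.images.create(name="restored-vm",
                             disk_format="raw",
                             container_format="bare")
with open("/tmp/reassembled.raw", "rb") as data:
    glance.images.upload(image.id, data)
```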
daemontool | the data will be retrieved from the object storage, right? | 17:59 |
*** pbourke has quit IRC | 17:59 | |
*** pbourke has joined #openstack-freezer | 18:00 | |
SamYaple | or whatever backend (we have a local storage driver for testing) | 18:00 |
SamYaple | the drivers for backend/compression/encryption are all stevedorized | 18:00 |
SamYaple | very, very pluggable | 18:00 |
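[Editor's note: "stevedorized" refers to the stevedore library, which loads drivers from setuptools entry points. The loading pattern looks roughly like the sketch below; the namespace and driver name are assumptions, not Ekko's actual entry points.]

```python
from stevedore import driver

# Load a storage backend by name from a setuptools entry-point namespace.
mgr = driver.DriverManager(
    namespace="ekko.storage",   # assumed entry-point namespace
    name="swift",               # driver selected from configuration
    invoke_on_load=True,
    invoke_args=(),
)
backend = mgr.driver  # the instantiated plugin object
```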
daemontool | ok | 18:01 |
*** Slashme has quit IRC | 18:03 | |
daemontool | I don't think it's that different from what we are doing today | 18:04 |
daemontool | the main difference is how the incremental blocks are computed | 18:05 |
daemontool | but we do not have incrementals for vms backups today anyway | 18:05 |
daemontool | so the sentence "I don't think it's that different from what we are doing today" is not entirely correct | 18:05 |
SamYaple | right, but you guys don't have real retention. that's a massive thing that has to be in the architecture from the ground up | 18:06 |
SamYaple | that's a big worry for integration | 18:06 |
*** epheo has quit IRC | 18:06 | |
SamYaple | since, as it is right now, I'll have to run my own retention agent and mechanisms | 18:06 |
daemontool | SamYaple, the thing is that you have to reach a compromise if you want to have a solution suitable for infrastructure backup | 18:07 |
daemontool | and baas | 18:07 |
daemontool | from my perspective | 18:07 |
*** Slashme has joined #openstack-freezer | 18:07 | |
daemontool | if we have a patch that will not remove a backup incremental session within a specified time frame | 18:08 |
daemontool | we provide retention | 18:08 |
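[Editor's note: a minimal sketch of the time-window retention daemontool describes — whole incremental sessions become removable only once they fall outside the retention window. The session representation is illustrative.]

```python
from datetime import datetime, timedelta

def sessions_to_remove(sessions, keep_days=30, now=None):
    """Return ids of backup sessions older than the retention window.

    `sessions` is an iterable of (timestamp, session_id) pairs; a session
    (full backup plus its incrementals) is removed only as a whole.
    """
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=keep_days)
    return [sid for ts, sid in sessions if ts < cutoff]

sessions = [(datetime(2016, 1, 1), "session-1"),
            (datetime(2016, 1, 20), "session-2")]
# With a 30-day window evaluated on 2016-02-05, only session-1 is removable.
print(sessions_to_remove(sessions, now=datetime(2016, 2, 5)))
```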
SamYaple | I disagree with you on that point | 18:09 |
*** epheo has joined #openstack-freezer | 18:09 | |
daemontool | and that was pretty much the requirement in every environment I've worked in over the last 15 years | 18:09 |
daemontool | it's only debatable for public cloud | 18:09 |
daemontool | but for vms backup | 18:09 |
daemontool | computing the incrementals from the hypervisor | 18:09 |
daemontool | is a better approach | 18:10 |
daemontool | I agree with you on that | 18:10 |
SamYaple | Without being able to slice the chain, I disagree with you having retention in this day and age | 18:10 |
SamYaple | at least for imaging | 18:11 |
daemontool | yes it's not space efficient, I agree with you | 18:11 |
SamYaple | forget space efficient | 18:12 |
SamYaple | it won't work with long chains | 18:12 |
daemontool | like what | 18:12 |
daemontool | it worked in production already | 18:12 |
SamYaple | no, no it hasn't | 18:12 |
daemontool | lol | 18:12 |
daemontool | ok | 18:13 |
daemontool | it hasn't | 18:13 |
SamYaple | storagecraft is arguably the largest block-based backup out there (though Veeam may have overtaken them) and they can't do more than 500 incrementals | 18:13 |
SamYaple | neither can Veeam | 18:13 |
SamYaple | but file backup is very different from block backup due to the size of the data | 18:13 |
*** epheo has quit IRC | 18:15 | |
*** Slashme has quit IRC | 18:16 | |
daemontool | ok, interesting conversation. I have to go now; let's talk again on Thursday, and please also talk with reldan | 18:17 |
*** reldan has joined #openstack-freezer | 18:23 | |
*** zahari has joined #openstack-freezer | 18:26 | |
*** reldan has quit IRC | 20:23 | |
*** reldan has joined #openstack-freezer | 20:25 | |
*** reldan has quit IRC | 21:23 | |
*** zahari has quit IRC | 21:25 | |
*** reldan has joined #openstack-freezer | 21:32 | |
*** reldan has quit IRC | 21:54 | |
*** reldan has joined #openstack-freezer | 21:58 | |
*** daemontool has quit IRC | 22:39 | |
*** reldan has quit IRC | 23:33 |