*** dschroeder has quit IRC | 00:18 | |
*** EinstCrazy has joined #openstack-freezer | 01:14 | |
*** c00281451 has quit IRC | 01:27 | |
*** EinstCrazy has quit IRC | 01:57 | |
*** EinstCrazy has joined #openstack-freezer | 01:59 | |
*** reldan has quit IRC | 02:13 | |
*** EinstCra_ has joined #openstack-freezer | 02:13 | |
*** EinstCrazy has quit IRC | 02:16 | |
*** szaher__ has joined #openstack-freezer | 02:52 | |
*** szaher_ has quit IRC | 02:53 | |
*** daemontool has quit IRC | 03:30 | |
*** EinstCra_ has quit IRC | 04:13 | |
*** EinstCrazy has joined #openstack-freezer | 04:13 | |
*** EinstCrazy has quit IRC | 04:57 | |
*** EinstCrazy has joined #openstack-freezer | 05:01 | |
*** daemontool has joined #openstack-freezer | 07:14 | |
*** daemontool has quit IRC | 07:22 | |
*** ig0r_ has joined #openstack-freezer | 09:17 | |
*** daemontool has joined #openstack-freezer | 09:23 | |
daemontool | Morning | 09:34 |
daemontool | vannif, ping | 09:34 |
*** ig0r_ has quit IRC | 09:41 | |
vannif | lo | 10:02 |
daemontool | vannif, do you have 10 min to see together | 10:06 |
daemontool | this? https://review.openstack.org/#/c/260950/ | 10:06 |
daemontool | I'd like to solve that asap | 10:06 |
daemontool | if possible | 10:06 |
vannif | yes. just opened it | 10:06 |
daemontool | ok ty | 10:07 |
*** reldan has joined #openstack-freezer | 10:07 | |
daemontool | the issue there is that the module freezer_api is never loaded | 10:07 |
daemontool | reldan, morning | 10:08 |
*** EinstCrazy has quit IRC | 10:10 | |
reldan | daemontool: hi | 10:18 |
reldan | daemontool: do we have a problem? | 10:18 |
reldan | something wrong after my fixes? | 10:18 |
*** samuelBartel has joined #openstack-freezer | 10:24 | |
daemontool | reldan, why? | 10:24 |
reldan | daemontool: I don’t know ) You just said good morning to me, and I thought that probably something was wrong with my commit ) | 10:25 |
reldan | And we have something critical to fix ) | 10:25 |
vannif | excusatio non petita, accusatio manifesta ("an unprompted excuse is a manifest accusation") | 10:29 |
daemontool | reldan, mmhhh... did you do anything that we don't know? | 10:39 |
daemontool | lol | 10:39 |
daemontool | =) | 10:39 |
reldan | daemontool :) Nope I just saw these messages | 10:39 |
daemontool | vannif, any clue about testr on the freezer-api? | 10:39 |
reldan | daemontool: the issue there is that the module freezer_api is never loaded | 10:40 |
reldan | [10:08am] daemontool: reldan, morning | 10:40 |
daemontool | reldan, yes | 10:40 |
reldan | And I supposed that it may be related :) | 10:40 |
daemontool | that's the issue | 10:40 |
daemontool | yes | 10:40 |
reldan | Do we have some log, or should we add tests to load this module? | 10:40 |
daemontool | reldan, the tests load that module | 10:49 |
daemontool | the only log we have is this one http://logs.openstack.org/50/260950/2/check/gate-freezer-api-python27/824e2ce/console.html | 10:49 |
daemontool | reldan, what would you like to work on next? | 10:50 |
reldan | I have two days before my holiday. I would like to improve logging today for parallel backup and add some documentation | 10:51 |
daemontool | ah ok | 10:51 |
daemontool | yes | 10:51 |
reldan | But if we have some bugs, I’m ready to take one ) | 10:51 |
daemontool | testing of parallel | 10:51 |
daemontool | and logging | 10:51 |
daemontool | ++ | 10:51 |
reldan | Deal :) | 10:51 |
daemontool | reldan, I feel we need to improve the nova and cinder backups in some way | 10:57 |
daemontool | we should have a bp to review, if I remember correctly? | 10:58 |
reldan | daemontool: I agree. We need requirements and architecture document | 10:58 |
daemontool | yes | 10:58 |
daemontool | or do we not have the bp yet? | 10:58 |
daemontool | I remember you had some clear idea/options about it | 10:58 |
daemontool | frescof too had some idea | 10:58 |
daemontool | let's talk about it at the meeting today | 10:59 |
reldan | I don’t know, I didn’t write it. The biggest problem - we need requirements and some agreement that we are going to do it that way | 10:59 |
reldan | Because right now it may be implemented with cinder native backups, or without them | 10:59 |
reldan | There were also some ideas about a VM running freezer with the volumes attached | 11:00 |
daemontool | m3m0, to the topics for the meeting today please add: elasticsearch backup (from Deklan), bp for nova&cinder backup, python-freezerclient, backup/restore listing using the scheduler (the code will be ported to the freezerclient) | 11:00 |
daemontool | reldan, I think we should support both | 11:00 |
daemontool | It's a bad answer I know | 11:01 |
reldan | In this case we should also support both for tenant backup and for multi-region backup | 11:01 |
daemontool | but by supporting both we offer flexibility | 11:01 |
daemontool | yes | 11:01 |
reldan | So we will have two tenant backups and two multi-region backups | 11:01 |
daemontool | mmhhh | 11:02 |
daemontool | frescof, do you have any comment? | 11:02 |
m3m0 | daemontool, noted, https://etherpad.openstack.org/p/freezer_meetings | 11:03 |
daemontool | m3m0, thanks | 11:03 |
*** EinstCrazy has joined #openstack-freezer | 11:33 | |
daemontool | https://review.openstack.org/#/c/267485/ | 11:40 |
*** reldan has quit IRC | 11:47 | |
*** reldan has joined #openstack-freezer | 12:11 | |
openstackgerrit | Fausto Marzi proposed openstack/freezer-web-ui: Align requirements to liberty global-requirements https://review.openstack.org/246981 | 12:52 |
openstackgerrit | Memo Garcia proposed openstack/freezer-web-ui: Fix for sessions that point to non-existing urls https://review.openstack.org/267578 | 13:53 |
openstackgerrit | Memo Garcia proposed openstack/freezer: Merge vssadmin argument with snapshot https://review.openstack.org/267595 | 14:28 |
*** pennerc has joined #openstack-freezer | 14:29 | |
m3m0 | reldan, vannif https://review.openstack.org/267595 | 14:29 |
m3m0 | daemontool ^^ | 14:29 |
reldan | m3m0: +2 but let’s wait for tests | 14:30 |
m3m0 | so far nothing is broken locally but let's wait :) | 14:31 |
reldan | m3m0: You know, probably we can completely remove vssadmin | 14:32 |
reldan | if is_windows(): | 14:32 |
reldan |     # vssadmin is to be deprecated in favor of the --snapshot flag | 14:32 |
reldan |     if backup_opt_dict.snapshot: | 14:32 |
reldan |         backup_opt_dict.vssadmin = True | 14:32 |
reldan | m3m0: freezer/backup.py 216 | 14:32 |
m3m0 | wait wait, so should I leave vssadmin but with the deprecation flag? | 14:33 |
m3m0 | and start the movement to snapshot? | 14:33 |
reldan | m3m0: I don’t know. I just see that vssadmin is always true when we have is_windows and snapshot | 14:33 |
reldan | so if you are removing it, probably you can remove it from backup.py as well | 14:34 |
m3m0 | yep, that's the case otherwise by default | 14:34 |
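For reference, a minimal sketch of the decision logic once --vssadmin is removed, as discussed above. The helper name is hypothetical; only the generic snapshot option remains:

```python
def use_windows_snapshot(snapshot_requested, platform_is_windows):
    """Hypothetical helper: with --vssadmin gone, the generic --snapshot
    flag alone decides whether a VSS snapshot is taken on Windows, so the
    old `backup_opt_dict.vssadmin = True` shim in backup.py is redundant."""
    return bool(snapshot_requested and platform_is_windows)

# e.g. use_windows_snapshot(True, True) -> True; no vssadmin flag involved
```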
reldan | m3m0: | 14:37 |
reldan | https://gist.github.com/Reldan/c37d2a53545fce54ee1a | 14:37 |
m3m0 | yep i did the same | 14:37 |
m3m0 | I'm pushing :) | 14:37 |
reldan | Great! | 14:37 |
m3m0 | thanks :) | 14:37 |
vannif | yes. I too think that the snapshotting code should look for the --snapshot flag, be it vss, lvm, or whatever (btrfs ?) | 14:39 |
openstackgerrit | Memo Garcia proposed openstack/freezer: Merge vssadmin argument with snapshot https://review.openstack.org/267595 | 14:39 |
m3m0 | done | 14:40 |
reldan | +2 | 14:40 |
vannif | do we want to keep the --vssadmin flag as deprecated ? | 14:41 |
m3m0 | really I don't think so | 14:43 |
openstackgerrit | Memo Garcia proposed openstack/freezer-web-ui: Simplify snapshot configuration for actions https://review.openstack.org/267617 | 14:48 |
daemontool | sorry I was in a meeting | 15:27 |
daemontool | catching up | 15:27 |
daemontool | vannif, tests are not discovered | 15:37 |
daemontool | that's the issue | 15:38 |
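When testr discovers no tests (and the freezer_api module never gets imported), a frequent cause is the discovery path in .testr.conf not matching the package layout. A minimal sketch in the usual OpenStack style, assuming the tests live under freezer_api/tests; the actual layout in the review may differ:

```ini
[DEFAULT]
test_command=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
             OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
             ${PYTHON:-python} -m subunit.run discover -t ./ ./freezer_api/tests $LISTOPT $IDOPTION
test_id_option=--load-list $IDFILE
test_list_option=--list
```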
*** EinstCrazy has quit IRC | 15:38 | |
*** emildi has quit IRC | 15:40 | |
*** emildi has joined #openstack-freezer | 15:52 | |
*** dschroeder has joined #openstack-freezer | 15:54 | |
*** ddieterly has joined #openstack-freezer | 15:56 | |
m3m0 | #startmeeting openstack-freezer 14-01-2016 | 16:01 |
openstack | Meeting started Thu Jan 14 16:01:09 2016 UTC and is due to finish in 60 minutes. The chair is m3m0. Information about MeetBot at http://wiki.debian.org/MeetBot. | 16:01 |
openstack | Useful Commands: #action #agreed #help #info #idea #link #topic #startvote. | 16:01 |
openstack | The meeting name has been set to 'openstack_freezer_14_01_2016' | 16:01 |
m3m0 | All: meetings notes available in real time at: https://etherpad.openstack.org/p/freezer_meetings | 16:01 |
m3m0 | hey guys ready to rumble? | 16:01 |
ddieterly | yes | 16:01 |
m3m0 | who is here today? please raise your hand | 16:01 |
m3m0 | o/ | 16:01 |
ddieterly | o/ | 16:01 |
reldan | o/ | 16:01 |
m3m0 | ok let's start | 16:02 |
m3m0 | #topic elasticsearch | 16:03 |
m3m0 | we need to create a new mode in freezer to backup and restore elasticsearch | 16:03 |
m3m0 | has anyone looked at it? | 16:03 |
ddieterly | i looked at es this morning | 16:03 |
ddieterly | so, the req is to be able to backup /var/log, audit logs (whatever that means), and es | 16:04 |
m3m0 | in case of cluster, do we need to backup only the master one? | 16:04 |
ddieterly | i think /var/log and audit logs can already be backed up in freezer thru config | 16:04 |
ddieterly | for es, we will need to mount a shared volume and snapshot es to that shared volume and then back the snapshot up | 16:05 |
m3m0 | if that's the case then no new mode is required | 16:05 |
m3m0 | why a shared volume? | 16:05 |
ddieterly | the alternative is to backup each snapshot on each node of the cluster (i think) | 16:06 |
m3m0 | is it necessary to backup each node? | 16:07 |
m3m0 | by the way do you want to take ownership of this ddieterly? | 16:08 |
ddieterly | i think we would technically need to backup each shard | 16:08 |
ddieterly | to get a logically consistent view of the entire db, it seems easiest to snapshot to a shared repo on a single volume and back that up | 16:08 |
ddieterly | https://www.elastic.co/guide/en/elasticsearch/reference/current/modules-snapshots.html | 16:08 |
ddieterly | "shared file system repository" seems to be the most straight forward way to do it | 16:10 |
*** emildi has quit IRC | 16:10 | |
m3m0 | could be a great idea to create a repository plugin for openstack | 16:10 |
ddieterly | but, i'm not an expert | 16:10 |
reldan | I think that probably it may be better to just add a plugin for swift | 16:11 |
reldan | yes | 16:11 |
reldan | Something like that https://github.com/elastic/elasticsearch-cloud-aws#s3-repository only for swift | 16:11 |
m3m0 | All: meetings notes available in real time at: https://etherpad.openstack.org/p/freezer_meetings | 16:12 |
ddieterly | so, a plugin for es that stores to swift? | 16:12 |
reldan | https://github.com/wikimedia/search-repository-swift | 16:12 |
reldan | Yes | 16:12 |
ddieterly | that's probably what tsv was talking about in the email thread | 16:12 |
reldan | It seems that wikimedia already has a swift plugin | 16:12 |
m3m0 | but reldan, does that break the swift, ssh, local storage functionality? | 16:13 |
reldan | In that case we just don’t need freezer to do a backup | 16:13 |
reldan | es will store all data in swift by itself | 16:13 |
m3m0 | but in the case we want ssh? | 16:14 |
ddieterly | we would need to schedule and initiate the backup, right? | 16:14 |
m3m0 | should we use 2 approaches for this? | 16:14 |
reldan | yes, sure we can integrate it with scheduler | 16:15 |
reldan | PUT /_snapshot/my_backup/snapshot_1?wait_for_completion=true | 16:15 |
reldan | to execute something like that | 16:15 |
reldan | Otherwise we will 1) use the ElasticSearch backup to save data on disk and 2) use Freezer to store the backup on Swift | 16:16 |
m3m0 | ok, so first step is to create a bp and/or spec to review this | 16:16 |
m3m0 | ddieterly what do you think? | 16:17 |
ddieterly | so, is the first step to investigate the options: 1) plugin or 2) just use freezer? | 16:17 |
m3m0 | yes and create a spec | 16:17 |
reldan | Agree | 16:18 |
ddieterly | ok | 16:18 |
ddieterly | so, the first step is investigation? | 16:18 |
m3m0 | yes | 16:18 |
ddieterly | ok | 16:19 |
ddieterly | i'm assuming that pierre can do the config in hlm to backup /var/log and the audit logs? | 16:20 |
m3m0 | we need to create the configuration file and Slashme can deploy it | 16:20 |
m3m0 | and of course we need to test it in a similar environment | 16:21 |
ddieterly | do we need to address the other questions that are in the blueprint? | 16:22 |
ddieterly | https://blueprints.launchpad.net/freezer/+spec/backup-centralized-logging-data | 16:22 |
m3m0 | yes, please feel free | 16:22 |
ddieterly | what I mean is, do any of the topics need to be addressed at this time? | 16:23 |
ddieterly | so, i'm assuming that you all are very busy, and the last thing you need is more work | 16:24 |
ddieterly | so, it looks like i'll be investigating the plugin? | 16:24 |
m3m0 | aaaa yes we have 4 more topics | 16:25 |
reldan | Yes and probably they have special requirements about incremental backups, encryption | 16:25 |
daemontool | I'm here sorry | 16:25 |
m3m0 | so regarding elasticsearch, are we clear on the next step? | 16:25 |
reldan | I don’t know - can we add encryption to plugin | 16:25 |
reldan | yes | 16:26 |
ddieterly | investigate plugin is the next step? | 16:26 |
daemontool | *I think* | 16:26 |
daemontool | and I might be wrong | 16:27 |
daemontool | the snapshotting feature from es | 16:27 |
daemontool | is similar to what we do with lvm | 16:27 |
daemontool | but the es built-in snapshotting | 16:27 |
daemontool | offers a solution to execute backups of specific indexes/documents | 16:27 |
daemontool | so ddieterly if you need a quick solution, I see the following options | 16:28 |
daemontool | 1) execute a fs backup + lvm snapshot on each elastic search node | 16:28 |
daemontool | 2) create a job to execute a script (e.g. with curl) that will create a snapshot using the elasticsearch built-in snapshot | 16:29 |
daemontool | and then there's another job that will back up those files in the file system; we need to understand where es stores those files when the snapshot is triggered | 16:29 |
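A sketch of what such a snapshot-trigger script could look like (option 2), based on the snapshot call reldan quoted earlier; the endpoint, repository, and snapshot names are placeholders. A freezer job would run this first, then back up the repository directory as a plain fs backup:

```python
import requests

ES = "http://localhost:9200"  # placeholder elasticsearch endpoint
REPO, SNAP = "freezer_es_repo", "snapshot_1"

# trigger the es built-in snapshot and block until it completes
resp = requests.put(
    "%s/_snapshot/%s/%s" % (ES, REPO, SNAP),
    params={"wait_for_completion": "true"},
)
resp.raise_for_status()
print(resp.json()["snapshot"]["state"])  # expect "SUCCESS"
```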
ddieterly | i think 1 is not an option because of db consistency concerns | 16:30 |
m3m0 | we can use sessions for that | 16:30 |
vannif | you pass the location to es as part of the curl invocation I think | 16:30 |
daemontool | ddieterly, with mongodb I did that in production in the public cloud in hp many times | 16:30 |
daemontool | and every time the backup was consistent | 16:30 |
vannif | I agree about the consistency issues with 1) | 16:30 |
daemontool | but the data wasn't sharded | 16:30 |
daemontool | so | 16:31 |
daemontool | there are two possible consistency issues there | 16:31 |
vannif | s/issue/concern | 16:31 |
daemontool | 1) half the index in memory, half the data written to disk, generating data corruption | 16:31 |
daemontool | 2) inconsistencies with data sharded across multiple nodes | 16:31 |
daemontool | do you agree with that? | 16:31 |
ddieterly | yes | 16:32 |
daemontool | ok | 16:32 |
daemontool | for 1) I think elasticsearch, like mongo, | 16:32 |
daemontool | writes the journal log file in the same directory where the data is stored | 16:32 |
daemontool | so if a snapshot with lvm is created (ro snap, immutable) | 16:32 |
daemontool | the data doesn't change | 16:32 |
daemontool | the backup is executed | 16:32 |
daemontool | and the data is crash consistent | 16:33 |
daemontool | which would be like the power suddenly going away on that node | 16:33 |
daemontool | anyone see any issue here? | 16:33 |
daemontool | so we need to understand if elastic search stores journal logs | 16:33 |
daemontool | I think so | 16:33 |
daemontool | but I might be wrong | 16:33 |
vannif | and that might change | 16:34 |
daemontool | all good so far? | 16:34 |
ddieterly | i think so | 16:34 |
m3m0 | yes, time is a concern and we have 3 more topics, should we continue with this or move forward? | 16:34 |
daemontool | vannif, in mongodb the data is stored on the same directory | 16:34 |
daemontool | /var/lib/mongo | 16:34 |
daemontool | m3m0, one sec | 16:34 |
daemontool | this is critical | 16:34 |
daemontool | sorry | 16:34 |
daemontool | because the #1 solution would be easy to implement for your needs | 16:35 |
daemontool | as no code needs to be written | 16:35 |
daemontool | for the issue #2 | 16:35 |
daemontool | we have a feature called job session ddieterly | 16:35 |
ddieterly | yes, i like 1 then ;-) | 16:35 |
daemontool | I'm just explaining, then you guys decide | 16:35 |
daemontool | :) | 16:35 |
daemontool | on #2 | 16:35 |
daemontool | job session is used to execute backups at near the same time on multiple nodes | 16:36 |
daemontool | and that can be used to solve the shards inconsistencies | 16:36 |
daemontool | I think | 16:36 |
daemontool | before writing code | 16:36 |
daemontool | it's worth to test this | 16:36 |
daemontool | because it's fast | 16:36 |
ddieterly | i don't think that that would give any guarantees | 16:36 |
daemontool | and will help us to improve job session | 16:36 |
vannif | well. from what I understand es has 2 ways of writing data: by default it writes data to all the shards before returning a positive ack to the user. that would result in all the shards having the data in their disks or journals | 16:36 |
m3m0 | but I don't know why we want to back up all nodes, aren't they supposed to be replicas of the master node? | 16:37 |
daemontool | m3m0, it depends, | 16:37 |
daemontool | elastic search, to scale and reduce I/O, | 16:37 |
ddieterly | we need to back up all shards | 16:37 |
daemontool | splits the data across multiple nodes, called shards | 16:37 |
vannif | another way is less secure: write data to the master and return a positive ack to the user. *then* replicate | 16:37 |
daemontool | ddieterly, ++ | 16:37 |
daemontool | I think with job session | 16:38 |
daemontool | the solution can be acceptable | 16:38 |
daemontool | because we have the same issue anyway | 16:38 |
daemontool | even if we use the snapshotting | 16:38 |
daemontool | built in feature in es | 16:38 |
daemontool | that needs to be executed | 16:38 |
daemontool | at near the same time | 16:38 |
daemontool | across all the nodes | 16:38 |
ddieterly | i'm not liking that; no guarantees | 16:38 |
daemontool | vannif, can you please explain the job session better to ddieterly offline? | 16:39 |
ddieterly | depends on timing | 16:39 |
daemontool | ddieterly, yes I agree | 16:39 |
daemontool | in helion all the nodes are synced with an ntp node | 16:39 |
vannif | sure | 16:39 |
daemontool | but yes, you are right | 16:39 |
daemontool | no doubt about that, it is best effort | 16:39 |
daemontool | ddieterly, are you comfortable to test that? | 16:39 |
daemontool | or do you want to go with other solutions? | 16:39 |
ddieterly | so, #1 seems reasonable if it guarantees consistency | 16:40 |
daemontool | I think if the writes of es are atomic | 16:40 |
daemontool | the consistency should be OK | 16:40 |
daemontool | but | 16:40 |
daemontool | 100% consistency cannot be guaranteed | 16:40 |
daemontool | :( | 16:40 |
daemontool | it's a computer science challenge to execute two actions exactly at the same time on multiple nodes | 16:41 |
daemontool | not only our problem | 16:41 |
ddieterly | the only way that 100% consistency can be guaranteed seems to be to use the snapshot feature of es | 16:41 |
daemontool | ok | 16:41 |
daemontool | then my advice would be | 16:41 |
daemontool | to write a script | 16:41 |
daemontool | that executes the snapshot with curl | 16:41 |
daemontool | and then executes the backup of the data as an fs backup with the agent | 16:42 |
daemontool | that wouldn't require writing code | 16:42 |
vannif | I think #1 is reasonable, even though it relies on some assumptions. It does not require any new backup-mode anyway. we can leave an elasticsearch-mode for direct interaction with es (i.e. request a snapshot) | 16:42 |
ddieterly | if we can snapshot to each node, then we can just back that up with freezer | 16:42 |
daemontool | ddieterly, yes | 16:42 |
daemontool | that was #2 | 16:42 |
daemontool | now, we can decide this even tomorrow | 16:42 |
ddieterly | so, we need to investigate whether es can do that | 16:43 |
daemontool | yes | 16:43 |
ddieterly | if so, that seems the best plan | 16:43 |
daemontool | ddieterly, ok | 16:43 |
daemontool | are you comfortable? can we move forward? | 16:43 |
ddieterly | if not, then see if we can do #1 | 16:43 |
daemontool | please vannif, can you also explain job sessions to ddieterly offline? | 16:43 |
ddieterly | i'll setup a meeting | 16:43 |
daemontool | so we can move on to the other topic | 16:43 |
daemontool | we can do a hangout meeting | 16:43 |
daemontool | so I can participate | 16:43 |
daemontool | as you want | 16:44 |
daemontool | or an irc meeting | 16:44 |
ddieterly | google hangout? | 16:44 |
daemontool | yes | 16:44 |
ddieterly | sure, i'll set that up | 16:44 |
daemontool | hangout I think is better | 16:44 |
daemontool | ok | 16:44 |
daemontool | ty | 16:44 |
ddieterly | np | 16:44 |
daemontool | m3m0, let's run fast :) | 16:44 |
m3m0 | #topic cinder and nova backups | 16:44 |
m3m0 | what's the status on this? | 16:45 |
daemontool | Mr reldan | 16:45 |
daemontool | :) | 16:45 |
reldan | We have nova ephemeral disk backup (not incremental), cinder native backup (cannot be done on attached volumes; should be possible from liberty), cinder backup (non-incremental) | 16:46 |
m3m0 | is this working now? | 16:46 |
reldan | Currently we cannot make a backup of whole vm with attached volumes | 16:46 |
daemontool | reldan, that's what I think we need | 16:47 |
daemontool | because currently no one is providing a solution for that | 16:47 |
daemontool | like nova vm + attached volumes | 16:47 |
reldan | Yes for nova with ephemeral, No - for nova with bootable cinder volume (can be done through cinder-backup) | 16:47 |
m3m0 | can we inspect the vm and check if it has attached volumes and then execute nova and/or cinder backups? | 16:48 |
daemontool | m3m0, yes from the API | 16:48 |
daemontool | from the Nova API | 16:48 |
daemontool | frescof, please provide your inputs if any ^^ | 16:48 |
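A sketch of the inspection m3m0 describes, assuming python-novaclient's get_server_volumes helper; the credentials, auth url, and server id are placeholders:

```python
from novaclient import client

# list the cinder volumes attached to a given instance via the nova api,
# so the orchestration can choose nova and/or cinder backups per vm
nova = client.Client("2", "admin", "secret", "demo",
                     "http://keystone:5000/v2.0")
for att in nova.volumes.get_server_volumes("SERVER_UUID"):
    # each attachment carries the cinder volume id and its device path
    print(att.volumeId, att.device)  # e.g. <uuid> /dev/vdb
```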
reldan | And probably we have a problem with auth_url v3 | 16:48 |
m3m0 | why? | 16:49 |
reldan | I don’t know. But I saw that it cannot authorize (trying to use wrong http address or something like that) | 16:50 |
daemontool | mmhhh | 16:50 |
reldan | m3m0: We can inspect attached volumes - yes | 16:50 |
daemontool | we should be able to do that | 16:50 |
reldan | But there is still a problem with consistency | 16:50 |
daemontool | reldan, at least the orchestration of backing up vms + attached volumes | 16:51 |
reldan | Any backup/snapshot on attached volume can be corrupted | 16:51 |
daemontool | I think it should be provided | 16:51 |
daemontool | why? | 16:51 |
daemontool | is crash consistent anyway | 16:51 |
reldan | because we use --force to do so | 16:51 |
daemontool | it's like backing up /var/lib/mysql with lvm without flushing the in memory data of mysql | 16:52 |
daemontool | there's no other way to do that from outside the vm | 16:53 |
daemontool | I think >( | 16:53 |
reldan | I suppose the same. | 16:53 |
m3m0 | unless we define a new mode in freezer that inspects the architecture of the vm and executes internal and external backups | 16:53 |
m3m0 | accordingly | 16:54 |
daemontool | I think | 16:54 |
daemontool | that makes sense | 16:54 |
daemontool | but it's up to the user | 16:54 |
daemontool | if he wants to use it | 16:54 |
reldan | But if we want to have a backup that contains (let’s say) 3 cinder volumes, 1 nova instance with information about where we should mount each volume - we should define such a format | 16:54 |
m3m0 | but wait, each volume is a backup right? | 16:54 |
daemontool | m3m0, yes | 16:55 |
reldan | If I understand it correctly, the goal is implementing a full backup of an instance with all attached volumes. In this case we should implement ephemeral disk backup, backup of each volume, and metainformation on how to restore it | 16:55 |
reldan | how to reassemble the instance | 16:56 |
reldan | It’s like a metabackup of backups | 16:56 |
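Purely illustrative, not an agreed freezer format: the kind of "metabackup" document reldan describes might tie the instance backup to its volume backups and record how to reassemble them on restore:

```python
# hypothetical reassembly metadata for a full-instance backup
instance_backup_meta = {
    "instance_id": "SERVER_UUID",
    "flavor": "m1.medium",
    "ephemeral_backup_id": "nova-backup-001",  # nova ephemeral disk backup
    "volumes": [
        {"volume_id": "VOL_UUID_1", "backup_id": "cinder-backup-001",
         "device": "/dev/vdb"},                # where to re-attach it
        {"volume_id": "VOL_UUID_2", "backup_id": "cinder-backup-002",
         "device": "/dev/vdc"},
    ],
}
```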
m3m0 | the instance should be up and running again; it's not freezer's responsibility to do that | 16:56 |
m3m0 | the jobs for restore should only contain paths | 16:56 |
reldan | So if you terminate your instance, you cannot restore it? | 16:57 |
m3m0 | nop | 16:57 |
daemontool | mmhhh | 16:57 |
m3m0 | you need somewhere to restore it | 16:57 |
daemontool | I think probably we need to keep it a bit simple, or we go through a dark sea | 16:57 |
m3m0 | we can have this discussion offline | 16:58 |
reldan | Let’s just say we have two openstack installations. If I understand the task correctly, we should be able to create a backup in installation1 and restore the same configuration in installation2 | 16:58 |
daemontool | yes | 16:58 |
daemontool | so we can offer disaster recovery capabilities | 16:59 |
daemontool | let's do this | 16:59 |
m3m0 | I disagree | 16:59 |
reldan | In this case it would be great to create and discuss a blueprint | 16:59 |
daemontool | I'll write a bp for this stuff | 16:59 |
daemontool | and then we can discuss that | 16:59 |
daemontool | change it and so on | 16:59 |
daemontool | m3m0, is that ok? | 16:59 |
m3m0 | yes, of course | 16:59 |
reldan | yes | 16:59 |
daemontool | ok | 16:59 |
m3m0 | we are running late | 16:59 |
daemontool | let's move forward | 16:59 |
daemontool | yep | 17:00 |
m3m0 | and we have 2 more topics | 17:00 |
m3m0 | should we do it next week? | 17:00 |
m3m0 | python freezer client and list of backups | 17:00 |
daemontool | let's do it for 5 minutes now | 17:00 |
daemontool | python freezerclient | 17:00 |
daemontool | let's skip it | 17:00 |
daemontool | but list of backups | 17:00 |
daemontool | it's fundamental that we have it in mitaka | 17:00 |
daemontool | vannif, ^^ | 17:01 |
daemontool | is essential... | 17:01 |
daemontool | we need to be able to list backups and restore using the scheduler | 17:01 |
m3m0 | yes, and it's not complicated, the ui has that functionality already | 17:01 |
daemontool | retrieving data at least from the api | 17:01 |
daemontool | m3m0, yep | 17:01 |
m3m0 | it's a matter of replicating that | 17:01 |
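A rough sketch of that listing, assuming a reachable freezer-api and a valid keystone token; the endpoint path and response shape here are recalled from the api, not verified against the review:

```python
import requests

API = "http://freezer-api:9090"  # placeholder freezer-api endpoint

# the scheduler-side listing boils down to a GET against the backups
# endpoint, mirroring what the web ui already does
resp = requests.get("%s/v1/backups" % API,
                    headers={"X-Auth-Token": "KEYSTONE_TOKEN"})
resp.raise_for_status()
for backup in resp.json().get("backups", []):
    print(backup.get("backup_id"))
```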
daemontool | vannif, can you do that please? | 17:01 |
daemontool | or m3m0, if your workload on the web ui | 17:01 |
daemontool | isn't huge | 17:02 |
vannif | yes | 17:02 |
daemontool | vannif, ok thank you | 17:02 |
daemontool | then we'll move that stuff | 17:02 |
daemontool | in the python-freezerclient | 17:02 |
daemontool | ok | 17:02 |
vannif | I | 17:02 |
daemontool | You | 17:02 |
daemontool | lol | 17:02 |
vannif | I've started to look at how to use cliff for the freezerclient | 17:02 |
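For the record, a minimal cliff command of the kind vannif is exploring; the command class, columns, and rows are hypothetical placeholders, since the real client would query the freezer api:

```python
from cliff.lister import Lister


class BackupList(Lister):
    """Hypothetical 'backup list' command for a future python-freezerclient."""

    def take_action(self, parsed_args):
        # cliff's Lister only needs (column_names, iterable_of_rows);
        # a real implementation would fetch these from the freezer api
        rows = [("backup-001", "2016-01-14"), ("backup-002", "2016-01-13")]
        return (("Backup ID", "Date"), rows)
```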
m3m0 | I'm very busy but I can do that if vannif is busy as well | 17:03 |
daemontool | vannif, yes but we cannot do that for now | 17:03 |
daemontool | vannif can do that | 17:03 |
daemontool | sorry | 17:03 |
*** samuelBartel has quit IRC | 17:03 | |
daemontool | we cannot do that | 17:03 |
daemontool | for now | 17:03 |
vannif | you mean no cliff ? | 17:03 |
daemontool | we can do that after we split the code | 17:03 |
daemontool | yes | 17:03 |
vannif | oh. ok. it's quicker then :) | 17:03 |
m3m0 | wait wait | 17:03 |
m3m0 | list from scheduler and the split? | 17:04 |
daemontool | list from scheduler can be done now | 17:04 |
daemontool | the python-freezerclient code split can be done now | 17:04 |
daemontool | python-freezerclient using cliff after the split | 17:04 |
m3m0 | we can split vannif in 2 | 17:04 |
daemontool | haha | 17:04 |
daemontool | even in 3 | 17:04 |
daemontool | we can cut it in 3 | 17:04 |
m3m0 | the italian way of doing business :P | 17:05 |
daemontool | and doing sausages | 17:05 |
m3m0 | ok guys what's the verdict? | 17:05 |
daemontool | ok | 17:05 |
daemontool | so | 17:05 |
daemontool | vannif, implement the job listing | 17:05 |
daemontool | I do the python-freezerclient split | 17:05 |
daemontool | after that | 17:05 |
vannif | ok | 17:06 |
m3m0 | #agree | 17:06 |
daemontool | we can use cliff on the freezerclient | 17:06 |
daemontool | ++ | 17:06 |
daemontool | ok | 17:06 |
daemontool | is that all? | 17:06 |
m3m0 | yes | 17:06 |
m3m0 | for now... | 17:06 |
daemontool | I'm going to write | 17:06 |
m3m0 | ok guys thanks to all for your time | 17:07 |
daemontool | the bp for nova and cinder? | 17:07 |
daemontool | ok | 17:07 |
m3m0 | perfect | 17:07 |
m3m0 | do that daemontool | 17:07 |
daemontool | I'll do it m3m0 | 17:07 |
daemontool | lol | 17:07 |
daemontool | :) | 17:07 |
daemontool | you please cut vannif in 3 | 17:07 |
m3m0 | #endmeeting | 17:07 |
openstack | Meeting ended Thu Jan 14 17:07:34 2016 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) | 17:07 |
openstack | Minutes: http://eavesdrop.openstack.org/meetings/openstack_freezer_14_01_2016/2016/openstack_freezer_14_01_2016.2016-01-14-16.01.html | 17:07 |
ddieterly | ciao! | 17:07 |
openstack | Minutes (text): http://eavesdrop.openstack.org/meetings/openstack_freezer_14_01_2016/2016/openstack_freezer_14_01_2016.2016-01-14-16.01.txt | 17:07 |
openstack | Log: http://eavesdrop.openstack.org/meetings/openstack_freezer_14_01_2016/2016/openstack_freezer_14_01_2016.2016-01-14-16.01.log.html | 17:07 |
m3m0 | too thin | 17:07 |
daemontool | ddieterly, Salut :) | 17:08 |
ddieterly | bonjour | 17:08 |
vannif | beware, I'm gonna take tai chi classes, I'll be a freaking shaolin monk soon ;) | 17:09 |
vannif | are we going to meet with hangouts ? about the sessions ? | 17:09 |
ddieterly | yea, what's the best way to set that up? | 17:09 |
ddieterly | i tried inviting you via email | 17:10 |
m3m0 | dude I know jiu-jitsu | 17:12 |
ddieterly | vannif: what's your gmail account? | 17:16 |
vannif | I don't see any email message (corporate) | 17:16 |
vannif | fabrizio.vanni@gmail.com | 17:16 |
daemontool | ddieterly, yes but let's send an email | 17:18 |
daemontool | or I don't know | 17:18 |
daemontool | now I cannot do that | 17:18 |
daemontool | can we do that tomorrow? | 17:18 |
ddieterly | sure | 17:18 |
daemontool | ok I'm going now | 17:19 |
*** samuelBartel has joined #openstack-freezer | 17:20 | |
vannif | I won't be available tomorrow. I took the day off, but maybe I can manage to be online around this time ... | 17:27 |
ddieterly | ok | 17:29 |
*** emildi has joined #openstack-freezer | 17:30 | |
*** reldan has quit IRC | 17:42 | |
*** reldan has joined #openstack-freezer | 17:45 | |
ddieterly | vannif: something happened to google chrome | 17:47 |
ddieterly | anyway, i'll schedule a mtg for next week | 17:47 |
daemontool | ddieterly, you need to pay the internet bill :P | 17:47 |
daemontool | ok | 17:47 |
vannif | :) | 17:53 |
*** reldan has quit IRC | 18:01 | |
*** daemontool has quit IRC | 18:01 | |
*** ddieterly has quit IRC | 18:04 | |
*** reldan has joined #openstack-freezer | 18:59 | |
*** ddieterly has joined #openstack-freezer | 20:53 | |
*** pennerc has quit IRC | 21:04 | |
*** reldan has quit IRC | 21:19 | |
*** ddieterly has quit IRC | 23:51 |