17:07:43 #startmeeting 17-12-2015
17:07:44 Meeting started Thu Dec 17 17:07:43 2015 UTC and is due to finish in 60 minutes. The chair is daemontool. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:07:45 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:07:48 The meeting name has been set to '17_12_2015'
17:08:13 Hi all, I understand a few of us are on holidays
17:08:24 reldan, do you want to go first?
17:08:54 Yes, sure. I have a parallel backup still in review.
17:09:02 ok
17:09:09 so if I understand correctly
17:09:12 we have an exception
17:09:24 when the user does not have the permission to write to the remote directory on the ssh node
17:09:31 And it's actually taking so long, I don't even understand why
17:09:32 or the directory on the remote node does not exist
17:09:34 right?
17:09:47 Nope, it should create a directory if it doesn't exist
17:09:55 ok
17:10:11 it creates the full directory tree?
17:10:11 Anyway, this exception, from my point of view, has no relation to parallel backup
17:10:31 like /home/user/directory
17:10:36 if user/directory does not exist
17:10:39 Yes, I'm not sure what happens if you already have a directory with the wrong rights
17:10:50 But it creates all subdirectories as well
17:10:53 ok
17:11:14 so I've tested it with swift and ssh and it works
17:11:25 but I'm testing other edge cases
17:11:27 Great! It works for me as well
17:11:27 like the one mentioned
17:12:06 These last days I've been hearing a lot about tenant backup
17:12:11 ok
17:12:16 But I don't see any document or blueprint or proposal
17:12:23 yes I agree
17:12:30 we don't have one yet
17:12:34 I have to write it
17:12:57 I also heard that we need multi-region backup
17:13:04 But I don't see any blueprint
17:13:16 I also saw a mail about billing/metrics
17:13:28 And I don't know of any proposal about it
17:13:44 multi-region backups?
17:14:06 Like if you have a tenant in two regions. Let's say us-west and us-east
17:14:07 I think we already support that
17:14:21 ah ok, related to tenant backup
17:14:27 Yes, sure
17:14:33 Multi-region tenant backup
17:14:40 I don't know how it should work
17:15:01 If we talk about cinder backup - let's say cindernative
17:15:17 generally the os_auth_url changes
17:15:21 It means that we should have some superbackup with links to both local backups in the two regions, or what?
17:15:22 like credentials are the same
17:16:03 from what I understand
17:16:09 let's say we have a user
17:16:25 that has 1 vm on region-a
17:16:29 and 1 vm on region-b
17:16:45 So when I hear the question - do we support multi-region tenant backup? - I don't know what to answer, because I have no definition of tenant backup or of multi-region tenant backup
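
For the ssh storage behaviour discussed above (creating the full remote directory tree when it does not exist), a minimal sketch of how that can be done over SFTP, assuming paramiko; the host, user, key path and function name are illustrative, not Freezer's actual code:

    # Illustrative sketch only -- not Freezer's ssh storage code.
    # Assumes paramiko; host name, user and key path are made up for the example.
    import posixpath
    import paramiko

    def ensure_remote_dir(sftp, remote_path):
        """Create remote_path and any missing parents, like `mkdir -p`."""
        current = '/' if remote_path.startswith('/') else ''
        for part in [p for p in remote_path.split('/') if p]:
            current = posixpath.join(current, part) if current else part
            try:
                sftp.stat(current)      # component already exists (rights may still be wrong)
            except IOError:
                sftp.mkdir(current)     # create the missing component

    # Usage: back up to /home/user/directory, creating user/directory if needed.
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect('backup-host.example.org', username='user',
                   key_filename='/home/user/.ssh/id_rsa')
    ensure_remote_dir(client.open_sftp(), '/home/user/directory')
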
17:16:47 the credentials are the same, what changes is the os_auth_url
17:17:01 the answer is nope
17:17:11 I mean
17:17:13 you can do it
17:17:16 or how difficult is it to implement?
17:17:28 but it's about configuring
17:17:36 a backup for each region
17:17:38 for the same tenant
17:17:43 so in the example I was providing
17:17:49 1 vm -> region-a
17:17:53 1 vm -> region-b
17:18:06 same tenant owns them
17:18:11 Yes
17:18:22 but to access one region
17:18:29 And we should save a superbackup with information about all the backups in the different regions
17:18:34 or a metabackup
17:19:08 so currently
17:19:16 to support that we need to have two backups
17:19:22 independent
17:19:41 Yes, and we need some blueprint about tenant backup
17:19:43 I think currently a quick workaround to provide this feature would be to create 2 jobs
17:19:52 1 for region-a and 1 for region-b
17:19:56 With a definition, with a description of how it should work
17:19:59 and the 2 jobs are part of the same session
17:20:10 job session
17:20:25 but the thing is that we don't have tenant-based backups
17:20:26 now
17:20:34 Yes
17:20:35 so it would be a manual process
17:20:54 i.e. 1 job for volumes, 1 job for vms, 1 job for the users, 1 job for networks
17:20:54 etc
17:21:05 Yes, and restore would be manual as well
17:21:06 all of them belonging to the same job session
17:21:13 unfortunately yes
17:21:24 I think that if the two backups are unrelated then using a session to link them might even be counterproductive: if one backup fails, the whole session fails
17:21:25 so we need to find a way to automatically discover
17:21:34 well
17:21:36 it makes sense
17:21:42 because
17:21:57 if you are a tenant and you want to back up your vms with the volumes and users and networks
17:22:04 if one of them fails
17:22:13 the others should fail too
17:22:24 or when you restore, let's say without networks, or users
17:22:31 it's not going to work
17:22:45 what do you think?
17:22:57 then they *are* related
17:23:12 in that case yes. it makes sense ^^
17:23:28 I think they are related...
17:23:47 so we need to find a way
17:23:51 to automate all this
17:24:01 by retrieving the data from the api in json format
17:24:02 save it
17:24:13 and then re-upload it for the restore
17:24:15 something like devstack does
17:24:22 when creating roles, tenants, networks,
17:24:29 I would like to have some blueprint with a description of what we actually want to have. And what we mean by tenant backup. How it should be connected with cinder native backup. The format of the backup, etc.
17:24:33 + vms, volumes etc
17:24:43 yes
17:25:15 I think for Mitaka we need to make sure the backup session works
17:25:19 the job session
17:25:24 that's an important thing
17:25:39 anyway yes I'm going to write the bp
17:25:41 Friday
17:25:45 I have a long flight to do
17:25:51 and I'll write a couple of blueprints
17:26:19 It sounds very good. Thank you.
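
As a rough illustration of the idea above - retrieving the tenant's data from the APIs as JSON, saving it, and re-uploading it for the restore - here is a minimal sketch assuming the openstacksdk client; the cloud name, resource selection and file layout are purely illustrative and not a proposed format:

    # Illustrative sketch only -- not a Freezer feature or a proposed backup format.
    # Assumes openstacksdk and a clouds.yaml entry named "region-a" (hypothetical).
    import json
    import openstack

    def dump_tenant_resources(cloud_name, out_file):
        """Save a JSON snapshot of a few tenant-owned resources for one region."""
        conn = openstack.connect(cloud=cloud_name)
        snapshot = {
            'servers': [s.to_dict() for s in conn.compute.servers()],
            'volumes': [v.to_dict() for v in conn.block_storage.volumes()],
            'networks': [n.to_dict() for n in conn.network.networks()],
        }
        with open(out_file, 'w') as f:
            json.dump(snapshot, f, indent=2, default=str)

    # One snapshot per region; the same call would be repeated for region-b.
    dump_tenant_resources('region-a', 'tenant-backup-region-a.json')
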
17:27:12 ok ty
17:27:16 I also feel there is some problem
17:27:28 with parallel backups and cindernative backup
17:27:41 because cindernative means one storage
17:27:48 swift in the same region
17:27:48 ah ok
17:28:30 So cindernative is only swift and only one storage
17:28:43 So it's a skew in all our architecture
17:28:44 yes
17:29:03 well that's an intrinsic limitation of using cindernative
17:29:24 the only thing we can do is prepare a bp
17:29:32 and talk with the cinder guys
17:29:49 a couple of weeks ago I had a conversation with the nova ptl
17:29:53 about something similar
17:30:03 and he wasn't very interested
17:30:24 I think for now we have to clearly document that limitation
17:30:31 and provide the advanced features
17:30:36 with the other approaches
17:30:42 and after our approach works
17:30:52 we can see how to natively integrate it into cinder and nova
17:31:21 I have a feeling that we are trying to solve different tasks in freezer-agent: 1) file backup (without any knowledge about OpenStack) 2) OpenStack-specific backups
17:32:22 well, it's the same task
17:32:30 it's just different the way
17:32:41 We don't use tar for cindernative, we don't use encryption for cindernative, we don't use compression ...
17:32:41 we achieve the backup and restore execution
17:32:47 yes
17:32:55 exactly
17:33:25 let's implement
17:33:35 and improve the approaches we have now
17:33:38 for cinder backups
17:33:44 other than cindernative
17:34:13 then we can see what would be the better approach
17:34:23 cause we can add backup modes in cinder and nova
17:34:27 Probably - but we don't support incremental for cinder (not native)
17:34:39 we have to do that
17:34:46 that is our limitation
17:35:19 in the meantime
17:35:32 I'm writing a bp
17:35:34 for the tenant backup
17:35:35 I'm also going to replace cinder/v1 with cinder/v2
17:35:39 ok
17:35:41 good
17:35:45 That's all from my side
17:35:51 ok
17:36:04 I submitted a couple of changes in governance
17:36:10 that were requested by the os tc
17:36:25 about how we do releases
17:36:35 also I'm working to get more people on board with the project
17:36:40 I think in February
17:36:49 we should have 3 more people, at least 50% of their time
17:36:53 I hope even more
17:37:00 It's good )
17:37:28 I would also like to have sprints, since we have a distributed team now
17:37:32 also I'm working with one customer to add servers to openstack-infra
17:37:36 not directly related to us
17:37:44 but it's good for os anyway
17:38:03 I'm also working on the python-freezerclient repo creation and code split
17:38:24 https://review.openstack.org/#/c/255349/
17:38:37 that requires quite a few tasks
17:39:23 like
17:39:25 changing the name
17:39:27 and so on
17:39:39 so as to be ready to add our project to the openstackclient
17:40:37 after that and the bp
17:40:47 I'll rework the block-based incremental
17:41:24 I can help with the block-based incremental
17:41:47 yes let's talk about that after the Christmas holidays
17:41:47 Because now I have an abstraction layer
17:41:52 yep
17:41:54 :)
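
A minimal sketch of the block-based incremental idea mentioned just above: hash fixed-size blocks and ship only the blocks whose digest changed since the previous backup. Block size, digest choice and function names are illustrative, not Freezer's design:

    # Illustrative sketch only -- not Freezer's block-based incremental engine.
    import hashlib

    BLOCK_SIZE = 4 * 1024 * 1024  # 4 MiB; the real block size would be tunable

    def block_digests(path):
        """Return the SHA-256 digest of every fixed-size block in the file."""
        digests = []
        with open(path, 'rb') as f:
            while True:
                block = f.read(BLOCK_SIZE)
                if not block:
                    break
                digests.append(hashlib.sha256(block).hexdigest())
        return digests

    def changed_blocks(path, previous_digests):
        """Yield (offset, data) for blocks whose digest differs from the previous backup."""
        with open(path, 'rb') as f:
            index = 0
            while True:
                block = f.read(BLOCK_SIZE)
                if not block:
                    break
                digest = hashlib.sha256(block).hexdigest()
                if index >= len(previous_digests) or digest != previous_digests[index]:
                    yield index * BLOCK_SIZE, block
                index += 1

    # A full (level 0) backup is simply changed_blocks(path, []) -- every block is new.
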
17:42:07 that's all from me
17:42:10 szaher, vannif, Slashme do you have anything to add?
17:42:58 Do you know, do openstack projects have some sort of sprints?
17:43:01 or milestones?
17:43:03 no
17:43:12 reldan, yes
17:43:24 the 12th of June is Mitaka-2
17:43:40 vannif, is there a bp for the cinder backups?
17:43:50 what's the activity there?
17:44:23 by the 12th of June we need to branch liberty and prepare the repo for Mitaka-2
17:45:12 ok, so the weekly meeting will stop for the Christmas holidays
17:45:26 we'll restart on the 7th of January
17:45:45 I would like to adopt this approach, you know - have well prepared blueprints and decisions about what we are going to do, and some priorities
17:46:02 yes reldan I agree
17:46:14 we also need to create the openstack-specs repo
17:46:16 no. I don't think there's any bp for cinder
17:46:20 sorry
17:46:22 freezer-specs
17:46:51 I think we also need to rationalize how to get the lists of backups
17:46:51 vannif, ok if you could write a bp describing the activity you are doing that'd be good, otherwise no one knows
17:47:01 sure
17:47:28 vannif, ty
17:47:28 regarding the backup listing, browsing the api is doable with relatively low effort
17:47:37 yes we need to provide a way to list the backups
17:47:43 I think it's a bare minimum feature
17:47:53 but when there's no api, it's the agent that has to get the list
17:47:55 vannif, ok let's do that
17:48:03 yes
17:48:54 and we need to agree on and document that feature for the case of local storages, containers, and ssh storage
17:49:33 And we need to define a metadata format for parallel backup
17:49:47 it's not (always) a simple list of files. the listing involves metadata
17:49:52 yes
17:50:00 #agreed
17:54:35 ok
17:54:36 thanks all
17:54:39 #endmeeting
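
For reference on the last point (the metadata format for parallel backup is still to be defined), a purely hypothetical example of what a per-backup metadata record spanning several storages could contain; none of these field names are an agreed format:

    # Purely hypothetical metadata record for a parallel backup -- field names are
    # illustrative; the actual format is exactly what the blueprint has to define.
    import json

    backup_metadata = {
        'backup_name': 'mysql-nightly',
        'timestamp': 1450371600,
        'level': 0,  # 0 = full, N = Nth incremental
        'storages': [
            {'type': 'swift', 'container': 'freezer-backups', 'status': 'completed'},
            {'type': 'ssh', 'path': '/home/user/directory', 'status': 'completed'},
        ],
    }

    print(json.dumps(backup_metadata, indent=2))
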