14:02:42 #startmeeting freezer
14:02:43 Meeting started Thu Mar 3 14:02:42 2016 UTC and is due to finish in 60 minutes. The chair is ddieterly. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:02:44 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:02:47 The meeting name has been set to 'freezer'
14:02:52 hi all
14:02:55 hi everybody, again
14:02:55 o/
14:02:58 o/
14:03:10 daemontool_: don't you want to run the mtg?
14:03:17 sure
14:03:23 yea, you should
14:03:24 ping zhangjn
14:03:25 go for it
14:03:46 #topic Incremental volumes backup
14:03:48 ping zhangjn
14:04:01 0/
14:04:08 ping zhurong
14:04:13 \0
14:04:43 meeting notes and topics available at https://etherpad.openstack.org/p/freezer_meetings
14:04:58 first is Incremental volumes backup
14:05:14 Well, cinder native backup supports incremental; can we use it?
14:05:23 EinstCrazy, yes, we already use it
14:05:39 reldan, did the first implementation
14:05:43 that's from mitaka
14:06:08 https://review.openstack.org/#/c/276685/
14:06:09 now
14:06:24 I think there we need at least two improvements
14:06:34 1) backup of the volume metadata
14:06:41 2) backup deletion and retention
14:06:55 reldan, last time you mentioned a few points to clarify
14:06:57 regarding 1
14:07:02 like where are we going to store the metadata
14:07:05 you mean we need to store the metadata of native backup?
14:07:05 right?
14:07:09 yes
14:07:43 reference http://docs.openstack.org/admin-guide-cloud/blockstorage_volume_backups.html
14:07:46 daemontool_: Yes, I would like to have a common structure for cinder/nova and fs backups
14:07:57 daemontool_: Because now - it is something absolutely different
14:08:18 reldan, yes
14:08:28 ok
14:08:35 we've a base class now, haven't we?
14:09:01 EinstCrazy: Yes, for fs backups.
But not for cinder and nova
14:09:54 For cinder native - we actually don’t write any information on our own
14:10:02 only api call
14:10:23 reldan, yes, that's a feature we need to have
14:10:24 :(
14:10:33 we basically need to wrap around the existing
14:10:36 cinder backup apis
14:10:55 daemontool_: Yes, we have it now. But probably we should keep information in our own storage
14:11:00 Like metadata about backup
14:11:15 we need to have a common structure for this metadata
14:11:16 But what if someone deletes the backup via the cinder api?
14:11:29 reldan, yes
14:11:41 and inside this metadata we can have let’s say field type: {cinder, cinder-native, nova, fs, ...}
14:11:44 EinstCrazy, concurrency is an issue
14:11:56 reldan, ++
14:11:58 We need to upload to swift after the backup
14:12:13 I believe we should define a metadata format
14:12:21 I think we need to think about a sync of metadata
14:12:30 yangyapeng, do you mean upload the metadata to swift?
14:12:54 daemontool_, yangyapeng: I suppose to any storage - swift, local, ssh ...
14:12:56 he means upload the backup itself and metadata
14:13:01 reldan, yes
14:13:11 yes
14:13:18 EinstCrazy, if we use the cinder api to execute backups
14:13:20 and inside this metadata we should have information for restore
14:14:07 can't remember....
14:14:10 reldan, do you remember per chance
14:14:17 if cinder stores the volume backups directly to swift
14:14:24 or passes thru glance?
14:14:58 daemontool_: cinder native - directly to swift or ceph, our version - through glance
14:15:05 ok
14:15:10 ty
14:15:11 I think the data itself should not be managed by both cinder and freezer
14:15:29 EinstCrazy, I agree they are two different ways of doing backups
14:15:31 Not only depends on the cinder
14:15:42 with cinder native
14:15:47 we just use the cinder api
14:16:11 while with the cinder backup we create a snapshot, upload it to glance
14:16:24 and process it with the freezer-agent from there
14:16:29 we may sync the metadata in time?
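The common metadata structure discussed above - one record for every engine, with a type field such as {cinder, cinder-native, nova, fs, ...} - might look like this minimal sketch. All field and function names here are illustrative assumptions, not the final Freezer format:

```python
# Sketch of the common backup-metadata record discussed in the meeting.
# Names (build_backup_metadata, backup_id, container) are hypothetical.

def build_backup_metadata(backup_type, backup_id, container, extra=None):
    """Return a common metadata record usable by any backup engine."""
    allowed = {"fs", "cinder", "cinder-native", "nova", "mysql", "mongo"}
    if backup_type not in allowed:
        raise ValueError("unknown backup type: %s" % backup_type)
    record = {
        "type": backup_type,     # which engine produced the backup
        "backup_id": backup_id,  # e.g. the cinder-native backup UUID
        "container": container,  # storage media target: swift, ssh node, local fs
    }
    record.update(extra or {})
    return record

# For a cinder-native backup we would store just enough to restore the
# volume later, even on a new openstack cloud deployment.
meta = build_backup_metadata(
    "cinder-native", "backup-uuid", "freezer_backups",
    extra={"volume_id": "volume-uuid"})
```

Keeping this record both in the storage media and (when available) in the freezer-api is what makes restore possible after a failed deployment, as argued later in the meeting.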
14:16:39 EinstCrazy, we can
14:16:47 the reason we need to backup the metadata
14:16:56 is because if we have it, we are able to restore the volume
14:17:04 yes, I agree
14:17:06 even on a new openstack cloud deployment
14:17:10 only for that
14:17:23 so
14:17:33 please correct me if I'm wrong
14:17:35 steps are
14:17:41 1) define common metadata
14:17:50 2) include metadata backup for cinder native
14:18:13 3) provide in cinder native features to retrieve the backup list, delete and backup retention
14:18:14 where is the volume metadata stored?
14:18:26 in one of the media storages supported by freezer
14:18:30 such as
14:18:36 I've a question, why do we use elasticsearch?
14:18:37 swift, ssh node, local fs
14:18:44 EinstCrazy, that's a different topic
14:18:50 let's iron this out first
14:19:08 do those 3 steps make sense?
14:19:13 I'd also add
14:19:15 for me yes
14:19:22 4) store metadata in the freezer-api if available
14:19:33 all good with this?
14:19:34 EinstCrazy: Because we took a bad decision at some point
14:19:44 slashme, I disagree
14:20:02 but let's first close the current topic please
14:20:07 yes
14:20:39 so EinstCrazy zhangjn yangyapeng do you want to be involved in the design and implementation of that?
14:20:46 I mean the 4 points mentioned?
14:20:48 I think the volume metadata will be stored in a database.
14:21:04 Question
14:21:17 zhangjn, what happens if you do not have the freezer-api available?
14:21:36 Does that mean that any type of backup would be able to store its backup metadata in freezer's db?
14:22:05 slashme, I think that should be the case, but not a blocker if not
14:22:24 the metadata should be in the storage media
14:22:27 and the api
14:22:32 I think
14:22:45 we may use a single db for each agent in a cluster
14:22:46 Just to be sure, are we talking about the backup metadata (ie tar, ...) or the execution metadata?
14:23:00 in this case
14:23:03 it's volume metadata
14:23:15 so probably it's comparable to the tar/rsync metadata
14:23:18 freezer is a backup/restore service. if this service crashes, we can't recover anything.
14:23:31 Okay, that's what I understood.
14:23:43 zhangjn, exactly, so it's important to also have the metadata in the storage media
14:24:01 yes i agree
14:24:26 So we would have a new parameter like store_backup_metadata=freezer_db/storage_media
14:24:28 zhangjn, yangyapeng EinstCrazy, is anyone of you interested in working on this with reldan?
14:24:29 ?
14:24:52 yes, we may work on this
14:24:59 slashme, probably not, we should store it in the api and the storage media used
14:25:02 always
14:25:10 if the api is not available
14:25:19 metadata should be stored only in the media storage
14:25:31 I'm saying this, because
14:25:52 if we provide that option, we give the user the opportunity to shoot themselves in the foot
14:26:09 am I the only one to see that risk?
14:26:15 What about a backup with 10,000 files? Isn't the metadata size going to be an issue?
14:26:23 I can define new metadata. Unfortunately I still have no requirements about tenant. So I can consider for now that we have only fs backup with mysql, msserver, mongo, cinder-native, cinder and nova backups
14:27:15 We should have 2 types of metadata. 1) Freezer metadata 2) tar specific, volume specific, ...
14:27:20 slashme, to store them where? in the api?
14:27:27 yes, in the api
14:27:40 the list of files probably shouldn't be stored in the api
14:27:42 we shouldn’t store tar metadata in api
14:27:47 exactly
14:28:02 it should be stored as compressed binary
14:28:06 only in the storage media
14:28:14 daemontool_: +1
14:28:26 Okay. Then that means the storing of backup metadata (not execution ones) in the api would be limited to volume backup?
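The two-level split described above - a small Freezer-level record that can go to the api, and engine-specific metadata (tar file lists, volume details, ...) kept only as compressed binary on the storage media - could be sketched as follows. Function and field names are illustrative assumptions, not Freezer's actual code:

```python
import json
import zlib

def split_metadata(freezer_meta, engine_meta):
    """Split metadata as discussed: the small Freezer-level record stays
    as a JSON document (api + storage media), while the engine-specific
    part is stored only as a compressed binary blob on the media."""
    api_doc = json.dumps(freezer_meta)
    media_blob = zlib.compress(json.dumps(engine_meta).encode("utf-8"))
    return api_doc, media_blob

def load_engine_metadata(media_blob):
    """Recover the engine metadata from the compressed blob on the media."""
    return json.loads(zlib.decompress(media_blob).decode("utf-8"))

# e.g. an fs backup: the tar file list never reaches the api
api_doc, blob = split_metadata(
    {"type": "fs", "backup_id": "abc"},
    {"files": ["/etc/hosts", "/etc/fstab"]})
```

This keeps the api payload small even for backups with very many files, which is the size concern slashme raises above.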
14:28:44 I think we have to store common metadata backups
14:28:49 for all the backups
14:28:55 but probably we have to decide
14:28:57 the limit
14:29:02 the verbosity
14:29:04 for cinder backup - i would prefer to store cinder specific backup on storage
14:29:22 and only freezer metadata in api
14:29:27 reldan, information like volume uuid and things like that?
14:29:46 Reldan this is what we do with tar. No?
14:29:54 slashme, yes
14:29:57 yes
14:30:16 So we keep the same behaviour with cinder?
14:30:22 It may be something like: type: cinder-native, uuid: …
14:30:28 yes
14:30:40 we would like to have a common format for freezer metadata
14:30:42 +1 to that then
14:30:46 ok
14:30:53 can we move to the next topic?
14:30:53 it should be common for tar/cinder/nova/cinder-native/mysql/...
14:30:54 +1 reldan
14:31:28 yes, we can I suppose.
14:31:36 #topic Nova instances incremental backups
14:31:48 https://etherpad.openstack.org/p/tenant-backup
14:31:49 most of the considerations from the previous topic apply
14:31:56 zhangjn, ty
14:33:07 so
14:33:16 frescof, are you on?
14:34:20 ok we need frescof on this
14:34:22 next topic
14:34:35 #topic Disaster recovery BP
14:34:41 what's the status of this?
14:34:48 frescof, ^^
14:35:10 We are still discussing this
14:35:25 is it possible to add the considerations in the review?
14:35:40 It definitely can't be implemented in the way described in the BP.
14:35:48 ok
14:35:52 For the same reason we exposed before
14:35:53 so we need to add the reasons there
14:36:02 add the considerations
14:36:05 But we figured out an idea
14:36:08 send the bp to the community
14:36:10 and get feedback
14:36:15 slashme, that's fantastic
14:36:22 but we need to get that added to the review
14:36:25 it's very important
14:36:28 cause this is a big feature
14:36:47 let's discuss that offline on #openstack-freezer
14:36:51 Ideally I would like that topic to be freezer until the midcycle
14:36:52 ok?
14:37:03 s/freezed/freezer/
14:37:03 ok
14:37:05 np
14:37:08 let's do that
14:37:14 moving forward
14:37:17 #topic Block based incremental rsync based
14:37:22 so I'm working on this
14:37:35 by the next meeting Thu
14:37:40 we'll have a first review available
14:37:53 Nice.
14:37:55 now it's easier after reldan's abstraction classes
14:38:01 this is a blocker for nova incrementals
14:38:03 so we need it
14:38:09 I can focus on it now
14:38:14 I have a question about this.
14:38:17 Nice
14:38:23 slashme, yes
14:38:43 What kind of performance increase are we talking about with pypy?
14:38:59 like 10x
14:39:08 My point is, is rsync based backup still viable without it?
14:39:18 it depends
14:39:21 if it is only 10 times I'd say yes
14:39:33 slashme, if you have
14:39:41 files let's say
14:39:45 of 1 MB
14:39:55 and then 50MB are appended to it
14:40:00 the thing is bloody slow
14:40:05 can we use a different interpreter? or are we bound to cpython only?
14:40:11 that's the use case where pypy is really needed
14:40:24 I don't know, I've tried with pypy and it is fast
14:40:30 but let's have the code working
14:40:34 then we can optimize later on
14:40:41 yes
14:40:49 thx for the precision
14:40:53 :)
14:40:55 next
14:41:01 #topic tempest tests
14:41:06 ddieterly, ^^
14:41:09 any comments on that?
14:41:10 yes
14:41:20 https://review.openstack.org/#/c/287369/
14:41:29 i'm still trying to determine if the tests are actually being run in the gate
14:41:37 if they are, great
14:41:44 ddieterly, which tests?
14:41:50 tempest
14:41:51 the ones listed in that patch?
14:42:01 I don't think they are executed
14:42:03 yes, there is only one test
14:42:09 right
14:42:16 not sure but the tests in that directory
14:42:19 are not executed
14:42:21 i need to get that one test to execute
14:42:25 ok
14:42:27 then we can add more
14:42:28 I'll take a look later
14:42:32 also
14:42:37 anything to add?
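The block-based rsync engine discussed above hinges on a weak rolling checksum that slides over the data one byte at a time - a tight per-byte Python loop, which is exactly the kind of code pypy speeds up by an order of magnitude. A minimal sketch of the classic rsync-style weak checksum (simplified to mod 2^16; this is not Freezer's actual implementation):

```python
M = 1 << 16  # simplified modulus used by the classic rsync weak checksum

def weak_checksum(block):
    """Compute the rsync-style weak checksum pair (a, b) of a bytes block."""
    a = sum(block) % M
    b = sum((len(block) - i) * x for i, x in enumerate(block)) % M
    return a, b

def roll(a, b, old, new, n):
    """Slide a window of size n one byte: drop `old`, append `new`.
    This O(1) update is what lets rsync avoid rescanning every block."""
    a = (a - old + new) % M
    b = (b - n * old + a) % M
    return a, b

data = b"the quick brown fox jumps over the lazy dog"
n = 8
a, b = weak_checksum(data[:n])
for i in range(len(data) - n):
    a, b = roll(a, b, data[i], data[i + n], n)
# after each roll, (a, b) equals weak_checksum of the shifted window
```

The 1 MB file with 50 MB appended is the worst case: millions of these per-byte updates, which is why the CPython version feels slow and pypy helps.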
14:42:45 no
14:43:36 well, if anyone knows how to get project-config to execute the tempest plugin tests that would be great
14:43:54 I think the solution is in tox
14:43:54 i added a new gate job for the api
14:44:01 ah ok
14:44:03 no, i don't think so
14:44:15 I'll take a look at it
14:44:15 we need to add some projects in the project-config
14:44:16 ok
14:44:51 next topic
14:44:55 #topic python-freezerclient new repo and package creation
14:45:02 what needs to be done for the python-freezerclient?
14:45:05 m3m0, ^^
14:45:23 so far the ui is using the python-freezerclient
14:45:34 it's not complete yet but is progressing quite well
14:45:46 so next steps are
14:45:50 1) create pypi package
14:45:56 also the cliff is very nice as well
14:45:58 2) create the new repo
14:46:10 3) have the freezer-scheduler use it
14:46:18 the next steps are to create the pypi repo and openstack/python-freezerclient repo
14:46:19 4) we need to write all the documentation for it
14:46:22 ok
14:46:36 and I'm adding the sphinx docs as well
14:46:55 One big question on this one. Do we try to have it in Mitaka?
14:47:05 daemontool
14:47:10 slashme, yes
14:47:10 ^^
14:47:16 My opinion is yes as well
14:47:18 m3m0, does it preserve the git history?
14:47:22 yes it does
14:47:25 ok
14:47:43 m3m0, are the tests that were available in apiclient
14:47:53 currently used in python-freezerclient?
14:48:14 yes, but I need to get the history for them as well
14:48:48 then let me know
14:48:51 when that is done please
14:49:01 sure, I will
14:49:09 anything to add to this topic?
14:49:12 no
14:49:13 next
14:49:16 #topic specs repo
14:49:21 we have the repo
14:49:29 https://github.com/openstack/freezer-specs
14:49:33 so please use that for specs
14:49:37 I'm going to add some other info
14:49:39 and the structure
14:49:45 like the other openstack projects
14:49:49 explaining also the process
14:49:57 anything to add to this topic?
14:49:59 About the specs
14:50:31 slashme, yes
14:50:47 ?
14:50:57 I think we need a sample for this
14:51:02 EinstCrazy, yes
14:51:07 reference https://github.com/openstack/nova-specs
14:51:09 What is the process? Add a spec with a general description of a feature/architecture, and when it is merged, then write an extensive blueprint with how the implementation will work?
14:51:22 I'll write that in that repo's README
14:51:26 Or is it the opposite?
14:51:40 we'll use the same approach as nova https://github.com/openstack/nova-specs
14:51:53 first gerrit, merged/approved, then launchpad
14:52:05 from my point of view
14:52:06 the bp
14:52:10 the more information it has
14:52:12 the better it is
14:52:12 use sphinx format and publish to http://specs.openstack.org
14:52:23 Okay. +1 to that
14:52:23 zhangjn, ++
14:52:32 I'll add it by EOW
14:52:38 anything else to add to this topic?
14:52:50 next
14:52:53 #topic what needs to be done for Mitaka 3?
14:53:00 tentative:
14:53:03 1) rsync
14:53:12 2) backup metadata
14:53:19 3) cinder backup metadata
14:53:28 4) python-freezerclient
14:53:33 5) tempest
14:53:38 sound good?
14:53:40 list_backup command
14:53:49 6) list_backup
14:53:55 from the python-freezerclient
14:54:13 6 is already included
14:55:18 so
14:55:19 anything to add?
14:55:26 moving next
14:55:30 #topic When are we going to freeze features for Mitaka?
14:56:21 Friday 18th of March?
14:56:39 we need to respect http://releases.openstack.org/mitaka/schedule.html
14:57:23 ok so if there are no objections let's set that date
14:57:25 anything to add?
14:57:56 next
14:58:00 #topic Easier to use devstack installation for freezer, including the tempest tests, etc
14:58:17 zhangjn, ^^ do you have any input on this?
14:58:29 yangyapeng, ^^?
14:58:32 I totally agree
14:58:41 :)
14:58:46 :)
14:59:01 yangyapeng, do you want to take ownership of this?
14:59:06 we are running out of time
14:59:15 let's have this discussion at #openstack-freezer
14:59:23 OK
14:59:25 ok
14:59:40 no time left to discuss it.
14:59:50 #endmeeting freezer
14:59:57 thx daemontool_
15:00:38 ok meeting is over, thanks all
15:00:53 ciao
15:00:59 thx daemontool
15:01:17 #endmeeting