17:07:43 <daemontool> #startmeeting 17-12-2015
17:07:44 <openstack> Meeting started Thu Dec 17 17:07:43 2015 UTC and is due to finish in 60 minutes.  The chair is daemontool. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:07:45 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:07:48 <openstack> The meeting name has been set to '17_12_2015'
17:08:13 <daemontool> Hi all, I understand a few of us are on holiday
17:08:24 <daemontool> reldan,  do you want to go first?
17:08:54 <reldan> Yes, sure. I have a parallel backup still in review.
17:09:02 <daemontool> ok
17:09:09 <daemontool> so if I understand correctly
17:09:12 <daemontool> we have an exception
17:09:24 <daemontool> when the user does not have the permission to write to the remote directory in the ssh node
17:09:31 <reldan> And it’s actually taking so long, I don’t even understand why
17:09:32 <daemontool> or the directory in the remote node does not exist
17:09:34 <daemontool> right?
17:09:47 <reldan> Nope, it should create a directory if it doesn’t exist
17:09:55 <daemontool> ok
17:10:11 <daemontool> it creates the full directory tree ?
17:10:11 <reldan> Anyway this exception from my point of view has no relation to parallel backup
17:10:31 <daemontool> like /home/user/directory
17:10:36 <daemontool> if user/directory does not exist
17:10:39 <reldan> Yes, I’m not sure what happens if you already have a directory with the wrong rights
17:10:50 <reldan> But it creates all subdirectories as well
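The behaviour reldan describes, creating the full remote tree (e.g. /home/user/directory) when parents are missing, could be sketched roughly as below. This is not freezer's actual ssh storage code: `mkdir_p` is an illustrative helper and `FakeSftp` merely stands in for a paramiko-style SFTP client (`stat`/`mkdir` are real paramiko SFTPClient methods).

```python
import posixpath


class FakeSftp(object):
    """Stand-in for a paramiko SFTPClient, for demonstration only."""
    def __init__(self):
        self.dirs = {'/'}

    def stat(self, path):
        if path not in self.dirs:
            raise IOError(path)  # paramiko raises IOError for missing paths

    def mkdir(self, path):
        self.dirs.add(path)


def mkdir_p(sftp, remote_dir):
    """Create remote_dir and any missing parents, like `mkdir -p`."""
    if remote_dir in ('', '/'):
        return
    try:
        sftp.stat(remote_dir)                 # already exists, nothing to do
    except IOError:
        mkdir_p(sftp, posixpath.dirname(remote_dir))  # ensure parents first
        sftp.mkdir(remote_dir)


sftp = FakeSftp()
mkdir_p(sftp, '/home/user/directory')
```

Against a real paramiko `SFTPClient` the same `stat`/`mkdir` calls apply; whether a pre-existing directory with wrong permissions is detected depends on how the storage layer handles the subsequent write failure.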
17:10:53 <daemontool> ok
17:11:14 <daemontool> so I've tested it with swift and ssh and it works
17:11:25 <daemontool> but I'm testing other edge cases
17:11:27 <reldan> Great! it works as well for me
17:11:27 <daemontool> like the one mentioned
17:12:06 <reldan> These last few days I’ve heard a lot about tenant backup
17:12:11 <daemontool> ok
17:12:16 <reldan> But don’t see any document or blueprint or proposal
17:12:23 <daemontool> yes I agree
17:12:30 <daemontool> we don't have one yet
17:12:34 <daemontool> I have to write it
17:12:57 <reldan> I heard as well that we need multi region backup
17:13:04 <reldan> But don’t see any blueprint
17:13:16 <reldan> I also saw a mail about billing/metrics
17:13:28 <reldan> And don’t know any proposal about it
17:13:44 <daemontool> multi region backups?
17:14:06 <reldan> Like if you have a tenant in two regions. Let’s say us-west and us-east
17:14:07 <daemontool> I think we already support that
17:14:21 <daemontool> ah ok, related to tenants backup
17:14:27 <reldan> Yes, sure
17:14:33 <reldan> Multi region tenant backup
17:14:40 <reldan> I don’t know how it should work
17:15:01 <reldan> If we talk about cinder backup - let’s say cindernative
17:15:17 <daemontool> generally the os_auth_url changes
17:15:21 <reldan> It means that we should have some superbackup with links to both local backups in the two regions, or what?
17:15:22 <daemontool> like credentials are the same
17:16:03 <daemontool> from what I understand
17:16:09 <daemontool> let's say we have a user
17:16:25 <daemontool> that has 1 vm on region-a
17:16:29 <daemontool> and 1 vm on region-b
17:16:45 <reldan> So when I hear the question - do we support multi region tenant backup? - I don’t know what to answer. Because I have no definition of tenant backup or multi region tenant backup
17:16:47 <daemontool> the credentials are the same, what changes is the os_auth_url
17:17:01 <daemontool> the answer is nope
17:17:11 <daemontool> I mean
17:17:13 <daemontool> you can do it
17:17:16 <reldan> or how is difficult to implement it
17:17:28 <daemontool> but it's about configuring
17:17:36 <daemontool> a backup for each region
17:17:38 <daemontool> for the same tenant
17:17:43 <daemontool> so in the example I was providing
17:17:49 <daemontool> 1vm -> region-a
17:17:53 <daemontool> 1 vm -> region-b
17:18:06 <daemontool> same tenant owns them
17:18:11 <reldan> Yes
17:18:22 <daemontool> but to access one region
17:18:29 <reldan> And we should save superbackup with information about all backups in different regions
17:18:34 <reldan> or metabackup
17:19:08 <daemontool> so currently
17:19:16 <daemontool> to support that we need to have two backups
17:19:22 <daemontool> independent
17:19:41 <reldan> Yes, and we need some blueprint about tenant backup
17:19:43 <daemontool> I think currently a quick workaround to provide this feature would be to create 2 jobs
17:19:52 <daemontool> 1 for region-a and 1 for region-b
17:19:56 <reldan> With definition, with description how it should work
17:19:59 <daemontool> and the 2 jobs are part of the same session
17:20:10 <daemontool> job session
17:20:25 <daemontool> but the thing is, that we don't have tenant based backups
17:20:26 <daemontool> now
17:20:34 <reldan> Yes
17:20:35 <daemontool> so it would be a manual process
17:20:54 <daemontool> i.e. 1 job for volumes, 1 job for vms, 1 job for the users, 1 job for networks
17:20:54 <daemontool> etc
17:21:05 <reldan> Yes, and restore should be manual as well
17:21:06 <daemontool> all of them belonging to the same job session
17:21:13 <daemontool> unfortunately yes
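The workaround sketched above (two jobs, one per region, tied to the same job session) might look roughly like this. The field names (`job_actions`, `session_id`, `os_auth_url`) only loosely follow freezer's job document format and should be verified against the real API; the auth URLs are placeholders.

```python
# Two hypothetical freezer job documents for one tenant, one per region;
# the same session_id links them, and only the keystone auth url differs.
session_id = 'tenant-xyz-session'


def region_job(region, auth_url):
    return {
        'description': 'tenant backup - %s' % region,
        'session_id': session_id,           # ties the jobs into one session
        'job_actions': [{
            'freezer_action': {
                'action': 'backup',
                'mode': 'cinder',
                'os_auth_url': auth_url,    # the only per-region difference
            },
        }],
    }


job_a = region_job('region-a', 'https://region-a.example.com:5000/v3')
job_b = region_job('region-b', 'https://region-b.example.com:5000/v3')
```

This matches the point made earlier in the discussion: credentials stay the same and only the os_auth_url changes per region.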
17:21:24 <vannif> I think that if the two backups are unrelated then using a session to link them might even be counterproductive: if one backup fails, the whole session fails
17:21:25 <daemontool> so we need to find a way to automatically discover
17:21:34 <daemontool> well
17:21:36 <daemontool> it makes sense
17:21:42 <daemontool> because
17:21:57 <daemontool> if you are a tenant and you want to backup your vms with the volumes and users and networks
17:22:04 <daemontool> if one of them fails
17:22:13 <daemontool> the others should fail too
17:22:24 <daemontool> or when you restore, let's say without networks, or users
17:22:31 <daemontool> it's not going to work
17:22:45 <daemontool> what do you think?
17:22:57 <vannif> then they *are* related
17:23:12 <vannif> in that case yes. it makes sense ^^
17:23:28 <daemontool> I think they are related...
17:23:47 <daemontool> so we need to find a way
17:23:51 <daemontool> to automate all this
17:24:01 <daemontool> by retrieving the data from the api in json format
17:24:02 <daemontool> save it
17:24:13 <daemontool> and then re upload it for the restore
17:24:15 <daemontool> something like devstack does
17:24:22 <daemontool> when creating roles, tenants, networks,
17:24:29 <reldan> I would like to have some blueprint with description what we actually want to have. And what we mean by tenant backup. How it should be connected with cinder native backup. The format of backup etc…
17:24:33 <daemontool> + vms, volumes etc
17:24:43 <daemontool> yes
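The automation idea above (retrieve the tenant's data from the APIs as JSON, save it, re-upload it for the restore) could be sketched as below. `clients` is a hypothetical mapping of resource type to a list function wrapping the nova/cinder/neutron/keystone APIs; it is not a real freezer interface.

```python
import json


def dump_tenant_resources(clients, path):
    """Fetch a tenant's resources as plain dicts and save them as JSON.

    `clients` maps a resource name (e.g. 'vms', 'networks') to a callable
    returning a list of dicts; the callables are assumed, not freezer code.
    """
    snapshot = {name: list_fn() for name, list_fn in clients.items()}
    with open(path, 'w') as f:
        json.dump(snapshot, f, indent=2, sort_keys=True)
    return snapshot


def load_tenant_resources(path):
    """Read the saved snapshot back, ready for a restore pass."""
    with open(path) as f:
        return json.load(f)
```

The restore side would then walk the loaded snapshot and re-create resources in dependency order (users and networks before vms and volumes), much like devstack seeds an environment.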
17:25:15 <daemontool> I think for Mitaka we need to make sure the backup session works
17:25:19 <daemontool> the job session
17:25:24 <daemontool> that's an important thing
17:25:39 <daemontool> anyway yes I'm going to write the bp
17:25:41 <daemontool> Friday
17:25:45 <daemontool> I have a long flight
17:25:51 <daemontool> and I'll write a couple of blueprints
17:26:19 <reldan> It sounds very good. Thank you.
17:27:12 <daemontool> ok ty
17:27:16 <reldan> I also feel some problem
17:27:28 <reldan> with parallel backups and cindernative backup
17:27:41 <reldan> because cindernative means one storage
17:27:48 <reldan> swift in the same region
17:27:48 <daemontool> ah ok
17:28:30 <reldan> So cindernative means only swift and only one storage
17:28:43 <reldan> So it’s a skew in our whole architecture
17:28:44 <daemontool> yes
17:29:03 <daemontool> well that's an intrinsic limitation of using cindernative
17:29:24 <daemontool> the only thing we can do is prepare a bp
17:29:32 <daemontool> and talk with the cinder guys
17:29:49 <daemontool> a couple of weeks ago I had a conversation with the nova ptl
17:29:53 <daemontool> about something similar
17:30:03 <daemontool> and he wasn't very interested
17:30:24 <daemontool> I think for now we have to write clearly that limitation
17:30:31 <daemontool> and provide advanced feature
17:30:36 <daemontool> with the other approaches
17:30:42 <daemontool> and after our approach works
17:30:52 <daemontool> we can see how to natively integrate it to cinder and nova
17:31:21 <reldan> I have a feeling that we are trying to solve two different tasks in freezer-agent. 1) file backup (without any knowledge about OpenStack) 2) OpenStack-specific backups
17:32:22 <daemontool> well, it's the same task
17:32:30 <daemontool> it's just different the way
17:32:41 <reldan> We don’t use tar for cindernative, we don’t use encryption for cindernative, we don’t use compression ...
17:32:41 <daemontool> we achieve the backup and restore execution
17:32:47 <daemontool> yes
17:32:55 <daemontool> exactly
17:33:25 <daemontool> let's implement
17:33:35 <daemontool> and improve the approaches we have now
17:33:38 <daemontool> for cinder backups
17:33:44 <daemontool> other than cindernative
17:34:13 <daemontool> then we can see what would be the better approach
17:34:23 <daemontool> cause we can add backup modes in cinder and nova
17:34:27 <reldan> Probably - but we don’t support incremental for cinder (not native)
17:34:39 <daemontool> we have to do that
17:34:46 <daemontool> that is our limitation
17:35:19 <daemontool> in the meantime
17:35:32 <daemontool> I'm writing a bp
17:35:34 <daemontool> for the tenant backup
17:35:35 <reldan> I’m also going to replace cinder/v1 with cinder/v2
17:35:39 <daemontool> ok
17:35:41 <daemontool> good
17:35:45 <reldan> It’s all from my side
17:35:51 <daemontool> ok
17:36:04 <daemontool> I submitted a couple of changes to governance
17:36:10 <daemontool> that were requested by the os tc
17:36:25 <daemontool> about  how we do release
17:36:35 <daemontool> also I'm working to get more people on board with the project
17:36:40 <daemontool> I think on February
17:36:49 <daemontool> we should have 3 more people at least 50% of their time
17:36:53 <daemontool> I hope even more
17:37:00 <reldan> It’s good )
17:37:28 <reldan> I would also like to have sprints, since we have a distributed team now
17:37:32 <daemontool> also I'm working with one customer to add servers to openstack-infra
17:37:36 <daemontool> not directly related to us
17:37:44 <daemontool> but it's good for os anyway
17:38:03 <daemontool> I'm also working on the python-freezerclient repo creation and code split
17:38:24 <daemontool> https://review.openstack.org/#/c/255349/
17:38:37 <daemontool> that requires quite a few tasks
17:39:23 <daemontool> like
17:39:25 <daemontool> changing the name
17:39:27 <daemontool> and so on
17:39:39 <daemontool> so to be ready to add our project to the openstackclient
17:40:37 <daemontool> after that and the bp
17:40:47 <daemontool> I'll rework the block based incremental
17:41:24 <reldan> I can help with block based incremental
17:41:47 <daemontool> yes let's talk about that after the Christmas holidays
17:41:47 <reldan> Because now I have an abstraction layer
17:41:52 <daemontool> yep >(
17:41:54 <daemontool> :)
17:42:07 <daemontool> that's all from me
17:42:10 <daemontool> szaher, vannif Slashme  do you have anything to add?
17:42:58 <reldan> Do you know if openstack projects have some sort of sprints?
17:43:01 <reldan> or milestones?
17:43:03 <vannif> no
17:43:12 <daemontool> reldan,  yes
17:43:24 <daemontool> the 12th of June is Mitaka-2
17:43:40 <daemontool> vannif, there's a bp for the cinder backups?
17:43:50 <daemontool> what's the activity there?
17:44:23 <daemontool> by 12th of June we need to branch liberty and prepare the repo for Mitaka-2
17:45:12 <daemontool> ok, so the weekly meeting will stop for the Christmas holidays
17:45:26 <daemontool> we'll restart the 7th of January
17:45:45 <reldan> I would like to adopt this approach, you know - have well prepared blueprints, decisions about what we are going to do, and some priorities
17:46:02 <daemontool> yes reldan  I agree
17:46:14 <daemontool> we need to create also the openstack-specs repo
17:46:16 <vannif> no. I don't think there's any bp for cinder
17:46:20 <daemontool> sorry
17:46:22 <daemontool> freezer-specs
17:46:51 <vannif> I think we also need to rationalize how to get the lists of backups
17:46:51 <daemontool> vannif,  ok if you could write a bp describing the activity you are doing that'd be good, otherwise no one knows
17:47:01 <vannif> sure
17:47:28 <daemontool> vannif,  ty
17:47:28 <vannif> regarding the backup listing, browsing the api is doable with relatively low effort
17:47:37 <daemontool> yes we need to provide a way to list the backups
17:47:43 <daemontool> I think it's a bare minimum feature
17:47:53 <vannif> but when there's no api, it's the agent that has to get the list
17:47:55 <daemontool> vannif, ok let's do that
17:48:03 <daemontool> yes
17:48:54 <vannif> and we need to agree and document that feature in case of local storages, containers, ssh storage
17:49:33 <reldan> And we need to define metadata format for parallel backup
17:49:47 <vannif> it's not (always) a simple list of files. the listing involves metadata
17:49:52 <daemontool> yes
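The listing vannif describes, where an api-less storage (local, ssh) makes the agent read metadata rather than just filenames, could be sketched as below. The `*.meta.json` naming and fields are illustrative assumptions, not freezer's real on-disk layout.

```python
import json
import os


def list_backups(storage_dir):
    """Collect backup metadata documents from a local/ssh-style storage dir.

    Assumes each backup wrote a small '<name>.meta.json' document next to
    its data files; data files themselves are ignored by the listing.
    """
    backups = []
    for name in sorted(os.listdir(storage_dir)):
        if name.endswith('.meta.json'):
            with open(os.path.join(storage_dir, name)) as f:
                backups.append(json.load(f))
    return backups
```

For an ssh storage the same idea would run over SFTP instead of `os.listdir`; agreeing on one metadata document format (also for parallel backups, as reldan notes) is what makes the listing uniform across local, container, and ssh storages.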
17:50:00 <daemontool> #agreed
17:54:35 <daemontool> ok
17:54:36 <daemontool> thanks all
17:54:39 <daemontool> #endmeeting