09:00:13 #startmeeting karbor
09:00:14 Meeting started Tue May 23 09:00:13 2017 UTC and is due to finish in 60 minutes. The chair is yuval. Information about MeetBot at http://wiki.debian.org/MeetBot.
09:00:15 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
09:00:17 The meeting name has been set to 'karbor'
09:00:30 Hello, and welcome to Karbor's weekly meeting!
09:01:10 hi
09:01:22 hey chenying
09:02:02 yedongcan has a topic about cross-site backup and restore.
09:02:06 hi
09:02:16 hi
09:02:19 hi
09:02:23 hello
09:02:36 #info chenying zhonghua jiaopengju yedongcan zengchen in meeting
09:02:56 #topic Data backup and restore between two region sites
09:04:04 yedongcan: ?
09:04:04 yedongcan: can you give more detail about this use case? We discussed it yesterday.
09:04:39 ok
09:05:47 we have a customer that will build two datacenters. We need to implement RD in those datacenters.
09:06:11 DR?
09:06:27 sorry, DR.
09:06:32 :)
09:06:33 disaster recovery
09:07:06 we may use keystone federation
09:07:09 are there two openstack clouds in these two datacenters?
09:07:21 chenying: yes
09:07:34 yedongcan: as we discussed in the karbor channel, if karbor's bank is shared between the providers in the two datacenters, this is possible
09:08:36 yuval: ok, just one question: which data will be backed up?
09:09:41 yedongcan: karbor's protect->restore flow: you create a plan, consisting of a pre-configured provider and the list of resources you would like to protect. Then you call protect on the plan, and a checkpoint is created. Then you restore the checkpoint in the second site
09:10:06 yedongcan: so you can select any resource out of: server, network, volume, image, etc.
09:11:47 yuval: the bank is a concept of karbor. What I care about is: with the same bank backend (swift or ceph), can the user access the data in both regions, i.e. with swift or ceph deployed across the two regions (datacenters)?
09:12:13 yuval: ok, and how is the network built in the other site?
09:13:29 yedongcan: networks, subnets, and ports are recreated on the target. Restore is parameterized, so addresses can be changed
09:15:03 yuval: you mean just rebuilding the data in the db?
09:15:15 yedongcan: which db?
09:15:22 yuval: the neutron db
09:16:02 yedongcan: new networks, subnets, and ports are created, so eventually new records enter the neutron db
09:16:31 yuval: and then the neutron agent will sync data from the neutron db, and build the virtual network devices, flows, and so on?
09:16:41 yedongcan: new neutron resources are rebuilt in neutron, not only the data in the db.
09:17:09 yedongcan: karbor doesn't care about neutron's internal implementation; it uses the neutron api to recreate the resources
09:19:52 yuval: ok. If a user has a public ip for their business, will the public ip also change?
09:20:53 yedongcan: that's where parameterized restore fits in - you state that information when restoring
09:23:52 yedongcan: it depends on the restore parameters; the user can keep the same public ip from the backup data, or use a new ip in the new site.
09:24:57 yuval, chenying: I'm not sure about that. We map the public ip to a fixed ip in neutron; if the fixed ip has changed, the mapping will be invalid.
09:28:20 it means that we need to build a new mapping between the public ip and the fixed ip in the new site's neutron
09:29:03 chenying: yes
09:30:30 I think this needs to be considered in the restore on the new site.
09:31:01 chenying: I agree
09:31:56 as I said before, the bank is a concept of karbor. What I care about is: with the same bank backend (swift or ceph), can the user access the data in both regions, i.e. with swift or ceph deployed across the two regions (datacenters)?
09:35:25 chenying: I'll go and confirm that quickly
09:37:14 yedongcan: anything else on this topic?
09:37:33 ping jiaopengju
09:38:24 yuval: nothing, thanks.
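
A minimal sketch of the protect->restore flow yuval describes above, driving Karbor's v1 REST API with plain HTTP. The endpoint URL, token, and all IDs are placeholders, and the exact request bodies should be double-checked against the Karbor API reference:

    import requests

    KARBOR = "http://karbor.site-a:8799/v1/PROJECT_ID"  # placeholder endpoint
    HEADERS = {"X-Auth-Token": "KEYSTONE_TOKEN",        # placeholder token
               "Content-Type": "application/json"}

    # 1. Create a plan: a pre-configured provider plus the resources to protect.
    plan = requests.post(f"{KARBOR}/plans", headers=HEADERS, json={
        "plan": {
            "name": "cross-site-dr",
            "provider_id": "PROVIDER_ID",  # its bank is shared by both sites
            "resources": [
                {"id": "SERVER_ID", "type": "OS::Nova::Server", "name": "app-vm"},
                {"id": "VOLUME_ID", "type": "OS::Cinder::Volume", "name": "app-data"},
            ],
            "parameters": {},
        },
    }).json()["plan"]

    # 2. Protect: creating a checkpoint writes the backup metadata and data
    #    into the bank.
    checkpoint = requests.post(
        f"{KARBOR}/providers/{plan['provider_id']}/checkpoints",
        headers=HEADERS,
        json={"checkpoint": {"plan_id": plan["id"]}},
    ).json()["checkpoint"]

    # 3. Restore on the second site: because the bank is shared, site B can
    #    read the checkpoint; restore_target points at site B's keystone.
    requests.post(f"{KARBOR}/restores", headers=HEADERS, json={
        "restore": {
            "provider_id": plan["provider_id"],
            "checkpoint_id": checkpoint["id"],
            "restore_target": "http://keystone.site-b:5000/v3",  # placeholder
            "restore_auth": {"type": "password",
                             "username": "admin", "password": "secret"},
            "parameters": {},  # see the parameterized-restore sketch below
        },
    })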
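The floating-ip concern raised at 09:24:57 would be addressed through those restore "parameters". The keys below are purely illustrative (the real keys are defined by each protection plugin); the point is only that per-resource values such as addresses can be overridden at restore time instead of being copied verbatim from the backup:

    # Hypothetical restore parameters, keyed per protected resource.
    restore_parameters = {
        # Illustrative only: give the restored subnet a CIDR that fits site B.
        "OS::Neutron::Subnet#SUBNET_ID": {
            "cidr": "10.20.0.0/24",
        },
        # Illustrative only: allocate a fresh floating ip from site B's pool,
        # so neutron builds a new floating->fixed mapping instead of reusing
        # the stale site A one.
        "OS::Neutron::FloatingIP#FIP_ID": {
            "public_network_id": "SITE_B_EXTERNAL_NET_ID",
        },
    }
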
09:38:28 hi
09:38:34 #topic Open Discussion
09:38:38 hey jiaopengju
09:38:49 yuval: last time, we discussed a solution for file backup in karbor without an agent.
09:39:13 we are thinking about using rclone in the vm to back up the files.
09:40:34 jiaopengju did a test with rclone.
09:41:05 chenying: and?
09:41:59 so yuval, do you think that introducing a new tool like rclone in the vm to back up files is a good choice for this use case?
09:43:13 chenying: I think that file backup is probably not the solution for applications on the cloud
09:44:02 chenying: specifically for rclone - where will rclone copy the data to?
09:44:22 it means that the plugin needs an ssh connection to access the vm.
09:44:46 yuval: rclone to the swift or ceph bank of karbor.
09:45:15 chenying: what if the vm doesn't have an ssh service running?
09:45:34 chenying: but the bank is an abstraction; if you rclone to the bank, you break the abstraction
09:46:08 the original requirement was proposed by jiaopengju. So what's your opinion on this solution using ssh rather than an agent?
09:46:41 chenying: it is ok, for some use cases
09:48:30 yuval: the bank would create a dedicated section for the file backup, and rclone would back up into that file section of the bank.
09:49:10 I think we should consider many more scenarios, such as users having changed their ssh port, or the vm having no ssh service, etc.
09:49:15 chenying: the bank is an abstraction; it can be swift, ceph, s3, a file system, a database, or a proprietary data store
09:49:57 chenying: also, exposing details of the bank to the protection plugin is a leaky abstraction
09:51:35 yuval: so the file backup plugin only needs to run a command in the vm, and doesn't need to use the bank API? Just pass the bank's backend config to the vm over ssh
09:52:27 chenying: it doesn't have to be the backend config of the bank, but yes
09:54:19 jiaopengju is concerned that we may not have thought this file backup solution through carefully.
09:55:24 until now, nobody has raised that
09:55:36 jiaopengju: are you here?
09:56:00 yes
09:56:35 jiaopengju: do you want to write a bp about this file backup solution? Then we can discuss more details and scenarios in the bp's patch.
09:56:46 chenying: good idea
09:56:58 I would like to
09:57:15 jiaopengju: you can propose the scenarios you care about.
09:57:30 great
09:57:37 ok
09:57:37 with that, I'll end the meeting as we are out of time
09:57:47 OK
09:57:47 thanks for attending
09:57:55 #endmeeting
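
A rough sketch of the agentless approach from the open discussion: the file backup plugin opens an ssh connection into the guest and drives rclone from there. The host, credentials, paths, and the rclone remote name are assumptions, and the gaps called out in the meeting (no ssh service, non-default ports, not leaking the bank's backend config into the vm) are exactly what the blueprint would have to address:

    import paramiko

    def backup_path_over_ssh(host, user, key_file, src_path, port=22):
        """Run rclone inside the guest to copy src_path to an object store.

        'backup-target' is a placeholder rclone remote that must already be
        configured inside the vm (e.g. a swift or s3 bucket); pointing it
        straight at karbor's bank would break the bank abstraction, as
        discussed above.
        """
        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        client.connect(host, port=port, username=user, key_filename=key_file)
        try:
            cmd = f"rclone copy {src_path} backup-target:file-backups{src_path}"
            _, stdout, stderr = client.exec_command(cmd)
            rc = stdout.channel.recv_exit_status()  # block until rclone exits
            if rc != 0:
                raise RuntimeError(stderr.read().decode())
        finally:
            client.close()

    # Example: back up /var/www from a guest reachable at 192.0.2.10.
    backup_path_over_ssh("192.0.2.10", "ubuntu", "/root/.ssh/id_rsa", "/var/www")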