16:01:02 #startmeeting Cinder
16:01:03 Meeting started Wed Oct 31 16:01:02 2018 UTC and is due to finish in 60 minutes. The chair is jungleboyj. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:01:05 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:01:07 The meeting name has been set to 'cinder'
16:01:12 o/
16:01:13 o/
16:01:16 <_alastor_> o/
16:01:17 o/
16:01:17 hi
16:01:19 o/
16:01:25 Hi
16:01:32 hi
16:01:43 hey
16:01:45 courtesy ping jungleboyj diablo_rojo, diablo_rojo_phon, rajinir tbarron xyang xyang1 e0ne gouthamr thingee erlon tpsilva ganso patrickeast tommylikehu eharney geguileo smcginnis lhx_ lhx__ aspiers jgriffith moshele hwalsh felipemonteiro lpetrut lseki _alastor_ whoami-rajat yikun rosmaita
16:01:53 @!
16:01:53 <_pewp_> jungleboyj (。・∀・)ノ
16:01:57 hi! o/
16:02:51 hello
16:03:28 Ok. Looks like we have a lot of people already, so we can get started.
16:03:35 #topic announcements
16:04:02 Just a reminder that I have created Forum etherpads for Berlin:
16:04:15 #link https://wiki.openstack.org/wiki/Forum/Berlin2018
16:04:48 Please take a look and add your thoughts there. Hopefully we will have good discussions there.
16:05:25 I think that is all I have for announcements.
16:05:30 Anything to add, smcginnis?
16:05:58 Hmm, not that I can think of, but I'll probably think of something later.
16:06:06 Ok. Sounds good.
16:06:27 #topic Cinder get together in Berlin?
16:06:32 is the etherpad really slow or is it just me?
16:06:44 rosmaita: It is slow. I was having issues with it earlier.
16:06:47 rosmaita, same with me.
16:07:09 So, a number of groups are having get-togethers at Berlin.
16:07:27 are people interested in trying to do something in Berlin?
16:08:06 * jungleboyj hears crickets
16:08:27 "if you plan it, they will come"
16:08:28 I am
16:09:06 rosmaita: True enough. :-)
16:09:17 it's always good to get together
16:09:36 Would be fun. Have we collected an informal show of hands of who will be in Berlin?
16:09:39 erlon: +1
16:09:56 I did in the last meeting. There are a few of us.
16:10:20 I will be there, as will smcginnis, geguileo, and e0ne.
16:10:39 #link https://etherpad.openstack.org/p/BER-cinder-outing-planning
16:10:50 There is an etherpad that we can use to plan the event.
16:11:27 Though etherpad is not happy at the moment, so it is hard to start filling that in now.
16:12:33 smcginnis: You seemed to have an idea of what nights are already busy?
16:13:03 Sounded like Thursday might be the best night?
16:13:35 I have something Monday. Other nights should be OK.
16:13:36 +1
16:13:44 Folks are probably leaving Thursday night.
16:13:51 So Wednesday might be best?
16:14:11 Ok. I don't have anything planned yet, so Wednesday should be fine for me.
16:14:41 I don't have anything planned either.
16:16:04 Wednesday night is the Meet and Geek Pub Crawl, but people could join that later, and I would rather get time with the team.
16:16:36 Oh right, forgot about that.
16:16:52 We could combine the two as well.
16:17:04 Otherwise we could do it after the Marketplace Mixer on Tuesday.
16:18:51 I don't have a strong preference.
16:19:12 So, I will put together the etherpad linked on the meeting agenda when etherpad is working again.
16:19:34 Will send an e-mail to the mailing list to get people involved.
16:19:50 People can then indicate their interest in joining, and we can touch on it again next week.
16:20:04 Sound like a plan?
16:20:38 I will take that as a yes.
16:20:55 #topic New Cinder Incremental backup flow
16:21:07 daikk115: Your floor.
16:21:16 thanks jungleboyj
16:21:33 I'm using Ceph as a backend for Cinder, and the backup flow is weird.
16:22:00 Ok.
16:22:04 Oh, you mean differential backup, according to the description on the etherpad.
16:22:05 the first one must be a full backup and the others are always incremental
16:22:19 In backup parlance, that is not incremental.
16:22:20 smcginnis, yep.
16:23:19 I feel like we talked about this at the PTG.
16:23:20 I don't follow.
16:23:29 RBD backups are incremental, right?
16:23:36 So differential has been discussed before. We can't support that since we don't do changed-block tracking to know what parts to back up.
16:23:50 Ah, that is right.
16:24:01 what I want to have is "multiple full backups, and every new incremental backup should be based on the latest full backup"
16:24:15 Please stop saying incremental.
16:24:32 daikk115: there is a bug to allow full backups after a full backup
16:24:41 and there is a patch to allow that
16:24:45 https://bugs.launchpad.net/cinder/+bug/1790713 I think there was a discussion regarding this in a meeting and this bug was filed.
16:24:45 Launchpad bug 1790713 in Cinder "Ceph RBD backup driver cannot create full backups after the first full backup" [Undecided, In progress] - Assigned to Sofia Enriquez (lsofia-enriquez)
16:24:47 That's another issue with Ceph.
16:24:54 that way it will follow the incremental flag
16:24:57 geguileo: Ah, that was what we talked about at the PTG.
16:24:57 geguileo, I have tested that.
16:25:13 daikk115: and is it OK?
16:25:16 but a new "differential backup" is always based on the first full
16:25:34 But this is different. We have full and incremental backups already, just a bug in the Ceph driver. Differential is different.
16:25:47 And something we've discussed a few times already and determined we can't/won't do.
16:26:01 smcginnis: +1
16:26:05 we could do differential just like we do it now
16:26:08 smcginnis, the use case is to create a full backup on the first day of the week.
16:26:18 both for RBD and for chunked
16:26:19 I know the use case.
16:26:21 and every day in that week we create a diff backup.
16:26:37 for the new week, we do the same thing
16:26:37 daikk115: it can be done, are you willing to work on it?
16:26:52 geguileo: How can it be done?
16:26:59 geguileo: I don't think it can be done. At least in the past, there were some backends that claimed it would not work for them.
16:27:03 jungleboyj: in which driver, RBD or the chunked one?
16:27:16 I'm fine if someone can figure it out, but they can't just look at Ceph and assume everything works the same way.
16:27:18 RBD
16:27:26 smcginnis: we are not talking about "perfect differential", but a differential similar to our incremental
16:27:28 smcginnis: ++
16:27:43 My statement still stands. :)
16:28:00 yeah, I think it could be done in RBD as well, though we would need to do tests to figure it out
16:28:16 smcginnis: +1
16:28:31 I believe it can be done, but like smcginnis said, someone would have to confirm it and work on it
16:28:57 is this patch serving a similar purpose, or otherwise? https://review.openstack.org/#/c/612503/ geguileo
16:28:57 But as in my WIP patch set, the idea from my side is to have a new column to store the "base" for each "differential backup".
16:28:58 Ok.
16:29:30 this is my proposal: https://review.openstack.org/#/c/614469/
16:29:31 You shouldn't need to store that. That can be determined when needed.
16:29:51 whoami-rajat, that patch only helps to create more full backups, that's all
16:29:54 whoami-rajat: No, that was just fixing a bug where once you did an incremental backup, that was all you could do.
16:29:59 daikk115: how will it work for NFS and Swift?
16:30:41 daikk115: I have just read the commit message and it doesn't sound right
16:30:53 e0ne, that's the question. I don't know about those backends, but I think a new column will not affect other backends.
16:31:22 daikk115: it's a bad idea to add a new column just for one backend
16:31:24 geguileo, sure, my commit message was not clear enough
16:31:26 geguileo: ++
16:31:56 daikk115: what are you trying to fix?
16:31:59 daikk115: jungleboyj ok, seems like that should be supported first to have multiple full backups
16:32:15 e0ne, I know, but the idea is that parent_id and base_id should be present together for any backend
16:32:23 whoami-rajat: Correct. That is a known issue that needs to get resolved.
16:32:23 Get rid of the new column and all the unnecessary shifting of code, and it might be easier to see.
16:32:33 in case we want to create multiple full backups and multiple incremental/diff backups from them
16:32:56 daikk115: I believe we can do that now, since we have the link to the parent
16:33:08 geguileo: +1
16:33:14 daikk115: Oh, so you want to have the option to create a differential backup from a full backup other than the most recent?
16:33:18 one incremental can be the parent of another incremental
16:33:22 right? geguileo
16:33:22 so we can have an N backups to 1 parent backup relationship
16:33:46 daikk115: yes, it can be
16:33:54 so we don't know which full backup is the base full backup for a new incremental/diff?
16:34:00 daikk115: actually that's how it works right now, the parent is the latest backup
16:34:08 that's why it's incremental and not differential at the moment
16:34:31 if we want to allow the user to specify the parent, we would just have to modify the API, and a couple of places
16:34:35 With incremental, it's always the last backup. Seems odd to want to do an incremental or differential from something other than the last full backup.
16:35:13 incremental is always from the last backup (incremental or full)
16:35:14 smcginnis, for Ceph backup, it always creates a new snap from the first full
16:35:17 differential is from the last full
16:35:29 it did not create a snap on the latest full backup
16:35:29 So I think 1) this needs a spec actually spelling out what you're trying to accomplish here clearly, and 2) prototype code for more than just Ceph.
16:35:32 daikk115: the snapshot is to set a marker
16:35:44 daikk115: so we can then request the diff between that point and the current point
16:35:54 I think we are going into implementation details
16:36:07 smcginnis: ++
16:36:19 smcginnis: +1
16:36:19 Too complicated to just pound out in code.
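[Editor's note: the distinction geguileo draws above (incremental chains off the latest backup of any kind; differential chains off the latest full) can be sketched in a few lines. This is a hypothetical illustration only, not Cinder's actual code; the `Backup` record and `pick_parent` helper are invented names.]

```python
# Hypothetical sketch of the parent-selection rule discussed above.
# Not Cinder code: Backup and pick_parent are illustrative names only.
from dataclasses import dataclass
from typing import List, Optional


@dataclass
class Backup:
    id: int
    is_full: bool
    parent_id: Optional[int] = None  # link to the backup this one is based on


def pick_parent(chain: List[Backup], mode: str) -> Backup:
    """Return the parent for a new backup. `chain` is ordered oldest first."""
    if mode == "incremental":
        # Incremental: always the latest backup, whether full or incremental.
        return chain[-1]
    if mode == "differential":
        # Differential: the latest *full* backup, skipping later incrementals.
        return next(b for b in reversed(chain) if b.is_full)
    raise ValueError(f"unknown mode: {mode}")


# Example chain: full(1) <- incr(2) <- incr(3)
chain = [Backup(1, True), Backup(2, False, 1), Backup(3, False, 2)]
print(pick_parent(chain, "incremental").id)   # latest backup: 3
print(pick_parent(chain, "differential").id)  # latest full: 1
```

This also illustrates smcginnis's point that no new database column is needed: the existing parent link is enough to determine the base when required.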
16:36:33 e0ne: Are you still working on this: https://specs.openstack.org/openstack/cinder-specs/specs/stein/generic-backup-implementation.html
16:36:39 geguileo, but Cinder does not allow deleting the first full (last week's full backup)?
16:36:43 it's a day when I always agree with smcginnis
16:36:49 :)
16:36:58 e0ne: Good place to be.
16:37:01 I think it's not too complicated if you are very familiar with the code, but complex otherwise; too many variables/options
16:37:12 smcginnis: yep. I'll publish patches early next week to start discussion.
16:37:15 :-)
16:37:18 daikk115: you cannot delete it because you have dependent backups (the incremental ones)
16:37:20 e0ne: Awesome!
16:37:23 e0ne: ++
16:37:53 I rebased my old patch and split it into a chain.
16:38:00 Oh nice.
16:38:05 geguileo, we should let the new incremental backup know that it should depend on the last full backup, not the first full backup.
16:38:07 daikk115: once we fix the RBD problem and you can create full backups whenever you want, then you will be able to delete all the incrementals and the old full.
16:38:14 need to test it more and clean it up before publishing
16:38:23 daikk115: that's how it works right now!
16:38:34 daikk115: it's not based on the last full, but the last incremental
16:38:39 daikk115: in ALL backup drivers
16:39:00 that is not the real use case we have, which smcginnis also knows
16:39:24 daikk115: I'm saying what we HAVE, not what you would like to have (aka your use case)
16:39:53 daikk115: if that's not the case, then it would be a bug in the RBD driver (because it would be doing differential instead of incremental)
16:40:06 and that's not what it used to do from the start, so it would have been changed at some point
16:40:15 unintentionally
16:40:24 daikk115: Let's get a spec written up that we can all read and make sure we're talking about the same solution.
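[Editor's note: the deletion constraint geguileo explains above, that a full backup cannot be removed while incrementals still chain off it, amounts to a simple dependency check. The sketch below is a hypothetical illustration with invented names, not Cinder's implementation.]

```python
# Hypothetical sketch of the deletion rule discussed above: a backup with
# dependents cannot be removed until every backup chaining off it is gone.
# The record layout and can_delete name are illustrative only.

def can_delete(backup_id, backups):
    """A backup is deletable only if no other backup lists it as parent."""
    return not any(b["parent_id"] == backup_id for b in backups)


chain = [
    {"id": 1, "parent_id": None},  # first full backup
    {"id": 2, "parent_id": 1},     # incremental on top of it
    {"id": 3, "parent_id": 2},     # another incremental
]
print(can_delete(1, chain))  # False: backup 2 still depends on it
print(can_delete(3, chain))  # True: nothing chains off the newest one
```

Under this rule, once the RBD bug is fixed and a second full backup can start a fresh chain, the old full and its incrementals can be deleted newest-first.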
16:40:42 daikk115: And just to reiterate, it needs to be something that doesn't only apply to how Ceph works.
16:40:50 I think that sounds like a good plan, since this is not clear right now.
16:40:56 smcginnis: ++
16:40:57 daikk115: sounds good
16:41:36 smcginnis: +1
16:41:43 smcginnis, the above link is not the same problem.
16:41:59 What above link?
16:42:32 #action daikk115 To create a Spec for discussion.
16:42:34 smcginnis, Oops, sorry, never mind
16:42:51 jungleboyj, Ok, I will do that
16:43:02 #info enriquetaso
16:43:19 enriquetaso: You have been informed. :)
16:43:25 ?
16:43:33 :)
16:43:36 jungleboyj: I think we can move on.
16:43:41 enriquetaso: Info
16:43:42 ok :D thanks
16:43:50 With pleasure.
16:43:52 sorry, I'm late
16:44:04 enriquetaso: Ah. No problem. Welcome to the party.
16:44:10 enriquetaso, hi
16:44:36 @!
16:44:36 <_pewp_> jungleboyj (◍˃̶ᗜ˂̶◍)ノ”
16:44:43 So, moving on.
16:45:16 #topic User Feedback Etherpad.
16:45:36 Can anyone get to the etherpad right now?
16:45:49 They just upgraded the instance, I think.
16:46:52 #link https://etherpad.openstack.org/p/BER-Cinder_User_Survey_Responses
16:46:53 So, I have a cached copy here. So let me at least share what I did.
16:46:58 jungleboyj, I can, but it's quite slow.
16:47:04 Ok.
16:47:23 hi daikk115, geguileo, thanks for discussing the incremental option for backups
16:47:44 So, I got the translated feedback for the user feedback survey from the Foundation.
16:47:50 jungleboyj: Thanks for categorizing into common themes. That helps.
16:47:56 jungleboyj: I can't get to it, and the outing planning seems to have no content... :-??
16:48:10 geguileo: Correct at the moment.
16:48:16 smcginnis: You are welcome.
16:48:20 jungleboyj: OK, I can access it now :-)
16:48:22 jungleboyj: thanks
16:48:26 Infra is aware of the issue.
16:48:52 As smcginnis has indicated, I looked through the feedback and documented the common themes from the comments.
16:48:59 They fell into 12 categories.
16:49:56 I have put some initial thoughts in there, but given that all of you have expertise in different areas, I would appreciate all of you adding responses there.
16:50:23 What I am hoping is that we will get a good number of people in the Forum session who maybe can help us understand the feedback, given that it is so vague.
16:50:46 They want backup/disaster recovery improvements, but they don't say what.
16:51:03 I think we have already addressed the question of automated backup processes.
16:51:32 I think all of the requests for multi-attach support are likely from Ceph users.
16:51:48 That shortcoming is being addressed. Correct?
16:51:53 jungleboyj: we did. everybody can use Mistral for such automation
16:53:38 So, looking at the list, I think we have a number of things in flight that address the comments.
16:54:30 #link https://review.openstack.org/#/c/595827/ Ceph multiattach spec
16:54:39 If you know of details/patches/specs that apply to the comments, please add them there so that we can be prepared to address comments/questions from anyone that makes it to the Forum session.
16:54:48 smcginnis: Case in point. Thank you!
16:55:40 Wow. Now weird things are happening in etherpad.
16:55:59 Anyone have anything else there they can update now?
16:56:46 Did we do read-only multiattach?
16:57:04 I remember that being something we were going to follow up on after the initial work, but I didn't think anything had been done yet.
16:57:10 smcginnis: Hmmm, you know, I think that is another thing that we just talked about.
16:58:18 So, that one may need to be given some priority.
16:59:15 Ok. Well. We have run out of time.
16:59:27 Someone tell jgriffith he needs to work on multiattach some more.
16:59:29 Hope everyone has a safe and happy Halloween.
16:59:35 smcginnis: ++
16:59:55 Thanks for joining the meeting, and hope to talk to you all again next week!
16:59:55 🎃🎃🎃
17:00:02 :-)
17:00:14 see you :D
17:00:19 Thanks!
17:00:23 #endmeeting