14:02:42 <ddieterly> #startmeeting freezer
14:02:43 <openstack> Meeting started Thu Mar  3 14:02:42 2016 UTC and is due to finish in 60 minutes.  The chair is ddieterly. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:02:44 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:02:47 <openstack> The meeting name has been set to 'freezer'
14:02:52 <daemontool_> hi all
14:02:55 <ddieterly> hi everybody, again
14:02:55 <daemontool_> o/
14:02:58 <ddieterly> o/
14:03:10 <ddieterly> daemontool_: don't you want to run the mtg?
14:03:17 <daemontool_> sure
14:03:23 <ddieterly> yea, you should
14:03:24 <EinstCrazy> ping zhangjn
14:03:25 <ddieterly> go for it
14:03:46 <daemontool_> #topic  Incremental volumes backup
14:03:48 <yangyapeng> ping zhangjn
14:04:01 <zhurong> 0/
14:04:08 <yangyapeng> ping zhurong
14:04:13 <reldan> \0
14:04:43 <daemontool_> meetings notes and topics available at https://etherpad.openstack.org/p/freezer_meetings
14:04:58 <daemontool_> first is Incremental volumes backup
14:05:14 <EinstCrazy> Well, cinder native backup support incremental , can we use it ?
14:05:23 <daemontool_> EinstCrazy,  yes, we already use it
14:05:39 <daemontool_> reldan did the first implementation
14:05:43 <daemontool_> that's from mitaka
14:06:08 <daemontool_> https://review.openstack.org/#/c/276685/
14:06:09 <daemontool_> now
14:06:24 <daemontool_> I think we need at least two improvements
14:06:34 <daemontool_> 1) backup of the volume metadata
14:06:41 <daemontool_> 2) backup deletion and retention
14:06:55 <daemontool_> reldan,  last time you mentioned a few points to clarify
14:06:57 <daemontool_> regarding 1)
14:07:02 <daemontool_> like where are we going to store the metadata
14:07:05 <EinstCrazy> you mean we need to store the metadata of native backup?
14:07:05 <daemontool_> right?
14:07:09 <daemontool_> yes
14:07:43 <daemontool_> reference http://docs.openstack.org/admin-guide-cloud/blockstorage_volume_backups.html
14:07:46 <reldan> daemontool_: Yes, I would like to have a common structure for cinder/nova and fs backups
14:07:57 <reldan> daemontool_: Because now - is something absolutely different
14:08:18 <daemontool_> reldan, yes
14:08:28 <daemontool_> ok
14:08:35 <EinstCrazy> we've a base class now, haven't we?
14:09:01 <reldan> EinstCrazy: Yes, for fs backups. But not for cinder and nova
14:09:54 <reldan> For cinder native - we actually don’t write any information on our own
14:10:02 <reldan> only api call
14:10:23 <daemontool_> reldan, yes, that's a feature we need to have
14:10:24 <yangyapeng> :(
14:10:33 <daemontool_> we basically need to wrap around the existing
14:10:36 <daemontool_> cinder backup apis
14:10:55 <reldan> daemontool_: Yes, we have it now. But probably we should keep information in our own storage
14:11:00 <reldan> Like metadata about backup
14:11:15 <reldan> we need to have a common structure for this metadata
14:11:16 <EinstCrazy> But what if someone delete the backup by cinder api?
14:11:29 <daemontool_> reldan,  yes
14:11:41 <reldan> and inside this metadata we can have let’s say field type: {cinder, cinder-native, nova, fs, ...}
14:11:44 <daemontool_> EinstCrazy, concurrency is an issue
14:11:56 <daemontool_> reldan,  ++
14:11:58 <yangyapeng> We need to upload to swift after the backup
14:12:13 <reldan> I believe we should define metadata format
14:12:21 <EinstCrazy> I think we need to think a sync of metadata
14:12:30 <daemontool_> yangyapeng, do you mean upload the metadata to swift?
14:12:54 <reldan> daemontool_, yangyapeng: I suppose to any storage - swift, local, ssh ...
14:12:56 <EinstCrazy> he means upload the backup itself and metadata
14:13:01 <daemontool_> reldan,  yes
14:13:11 <yangyapeng> yes
14:13:18 <daemontool_> EinstCrazy,  if we use the cinder api to execute backups
14:13:20 <reldan> and inside this metadata we should have information for restore
14:14:07 <daemontool_> can't remember....
14:14:10 <daemontool_> reldan,  do you remember by chance
14:14:17 <daemontool_> if cinder stores the volume backups directly to swift
14:14:24 <daemontool_> or passes through glance?
14:14:58 <reldan> daemontool_: cinder native - directly to swift or ceph, our version - through glance
14:15:05 <daemontool_> ok
14:15:10 <daemontool_> ty
14:15:11 <EinstCrazy> I think the data itself should not be managed by both cinder and freezer
14:15:29 <daemontool_> EinstCrazy,  I agree they are two different ways of doing backups
14:15:31 <yangyapeng> Not only depends on the cinder
14:15:42 <daemontool_> with cinder native
14:15:47 <daemontool_> we just use the cinder api
14:16:11 <daemontool_> while with the cinder backup we create a snapshot, upload to glance
14:16:24 <daemontool_> and process with the freezer-agent from there
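For reference, a minimal sketch of what wrapping the existing cinder backup API could look like with python-cinderclient; the keystone session handling, backup names and the retention rule are placeholders rather than freezer's actual implementation:

    # Sketch only: driving cinder-native (incremental) backups through the
    # cinder backup API instead of the snapshot/glance path.
    from cinderclient import client as cinder_client

    def cinder_native_backup(session, volume_id):
        cinder = cinder_client.Client('2', session=session)
        # cinder itself chains each incremental onto the latest full backup
        full = cinder.backups.create(volume_id, name='freezer-full')
        incr = cinder.backups.create(volume_id, name='freezer-incr', incremental=True)
        return full, incr

    def prune_backups(session, keep_after_iso):
        cinder = cinder_client.Client('2', session=session)
        for backup in cinder.backups.list():
            # created_at is an ISO-8601 string, so plain string comparison works
            if backup.created_at < keep_after_iso:
                cinder.backups.delete(backup.id)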
14:16:29 <EinstCrazy> we may sync the metadata in time?
14:16:39 <daemontool_> EinstCrazy, we can
14:16:47 <daemontool_> the reason we need to backup the metadata
14:16:56 <daemontool_> is because if we have it, we are able to restore the volume
14:17:04 <EinstCrazy> yes, I agree
14:17:06 <daemontool_> even on a new openstack cloud deployment
14:17:10 <daemontool_> only for that
14:17:23 <daemontool_> so
14:17:33 <daemontool_> please correct me if I'm wrong
14:17:35 <daemontool_> steps are
14:17:41 <daemontool_> 1) define common metadata
14:17:50 <daemontool_> 2) include metadata backup for cinder native
14:18:13 <daemontool_> 3) provide, for cinder native, features to list backups, delete them and handle retention
14:18:14 <zhangjn> where is the volume metadata stored?
14:18:26 <daemontool_> in one of the media storage supported by freezer
14:18:30 <daemontool_> such as
14:18:36 <EinstCrazy> I've a question, why do we use elasticsearch?
14:18:37 <daemontool_> swift, ssh node, local fs
14:18:44 <daemontool_> EinstCrazy, that's a different topic
14:18:50 <daemontool_> let's iron this out first
14:19:08 <daemontool_> do those 3 steps make sense?
14:19:13 <daemontool_> I'd also add
14:19:15 <reldan> for me yes
14:19:22 <daemontool_> 4) store metadata in the freezer-api if available
14:19:33 <daemontool_> all good with this?
14:19:34 <slashme> EinstCrazy: Because we took a bad decision at some point
14:19:44 <daemontool_> slashme,  I disagree
14:20:02 <daemontool_> but let's first close the current topic please
14:20:07 <EinstCrazy> yes
14:20:39 <daemontool_> so EinstCrazy  zhangjn  yangyapeng do you want to be involved in the design and implementation of that?
14:20:46 <daemontool_> I mean the 4 points mentioned?
14:20:48 <zhangjn> I think the volume metadata will be stored in database.
14:21:04 <slashme> Question
14:21:17 <daemontool_> zhangjn,  what happens if you do not have the freezer-api available?
14:21:36 <slashme> Does that mean that any type of backup would be able to store its backup metadata in freezer's db?
14:22:05 <daemontool_> slashme,  I think that should be the case, but not a blocker if not
14:22:24 <daemontool_> the metadata should be in the storage media
14:22:27 <daemontool_> and the api
14:22:32 <daemontool_> I think
14:22:45 <EinstCrazy> we may use a single db for each agent in a cluster
14:22:46 <slashme> Just to be sure, we are talking about the backup metadata (ie tar, ...) or the execution metadata ?
14:23:00 <daemontool_> in this case
14:23:03 <daemontool_> is volume metadata
14:23:15 <daemontool_> so probably it is comparable to the tar/rsync metadata
14:23:18 <zhangjn> freezer is a backup/restore service. if this service crashes, we can't recover anything.
14:23:31 <slashme> Okay, that's what I understood.
14:23:43 <daemontool_> zhangjn,  exactly, so it's important to have also the metadata in the storage media
14:24:01 <yangyapeng> yes i agree
14:24:26 <slashme> So we would have a new parameter like store_backup_metadata=freezer_db/storage_media
14:24:28 <daemontool_> zhangjn, yangyapeng EinstCrazy  anyone of you is interested on working on this with reldan ?
14:24:29 <slashme> ?
14:24:52 <EinstCrazy> yes, we may work on this
14:24:59 <daemontool_> slashme,  probably not, we should store it in the api and storage media used
14:25:02 <daemontool_> always
14:25:10 <daemontool_> if the api are not available
14:25:19 <daemontool_> metadata should be stored only in the media storage
14:25:31 <daemontool_> I'm saying this, because
14:25:52 <daemontool_> if we provide that option, we give the user the opportunity to shoot themselves in the foot
14:26:09 <daemontool_> am I the only one to see that risk?
14:26:15 <slashme> What about a backup with 10,000 files? Isn't the metadata size going to be an issue?
14:26:23 <reldan> I can define new metadata. Unfortunately I still have no requirements about tenant backup. So I can consider for now that we have only fs backup with mysql, msserver, mongo, cinder-native, cinder and nova backups
14:27:15 <reldan> We should have 2 types of metadata. 1) Freezer metadata 2) tar specific, volume specific, ...
14:27:20 <daemontool_> slashme, to store them where? in the api?
14:27:27 <slashme> yes, in the api
14:27:40 <daemontool_> the list of files probably shouldn't be stored in the api
14:27:42 <reldan> we shouldn’t store tar metadata in api
14:27:47 <daemontool_> exactly
14:28:02 <daemontool_> it should be stored as compressed binary
14:28:06 <daemontool_> only in the storage media
14:28:14 <reldan> daemontool_: +1
14:28:26 <slashme> Okay, then that means storing backup metadata (not execution metadata) in the api would be limited to volume backups?
14:28:44 <daemontool_> I think we have to store common backup metadata
14:28:49 <daemontool_> for all the backups
14:28:55 <daemontool_> but probably we have to decide
14:28:57 <daemontool_> the limit
14:29:02 <daemontool_> the verbosity
14:29:04 <reldan> for cinder backup - I would prefer to store cinder specific backup metadata on the storage
14:29:22 <reldan> and only freezer metadata in api
14:29:27 <daemontool_> reldan,  information like volume uuid and things like that?
14:29:46 <slashme> Reldan this is what we do with tar. No ?
14:29:54 <daemontool_> slashme,  yes
14:29:57 <reldan> yes
14:30:16 <slashme> So we keep the same behaviour with cinder ?
14:30:22 <reldan> It may be something like: type: cinder-native, uuid: …
14:30:28 <reldan> yes
14:30:40 <reldan> we would like to have common format for freezer metadata
14:30:42 <slashme> +1 to that then
14:30:46 <daemontool_> ok
14:30:53 <daemontool_> can we move to the next topic?
14:30:53 <reldan> it should be common for tar/cinder/nova/cinder-native/mysql/...
14:30:54 <zhangjn> +1 reldan
14:31:28 <reldan> yes, we can I suppose.
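To make the metadata discussion concrete, a rough illustration of what such a common record might look like; every field name below is hypothetical, not an agreed format:

    # Hypothetical common freezer backup metadata: top-level fields identical
    # for every backup type, engine-specific details under their own key.
    freezer_metadata = {
        "backup_id": "3b5f8c...",        # placeholder
        "type": "cinder-native",          # fs, cinder, cinder-native, nova, mysql, ...
        "backup_name": "vol-daily",
        "time_stamp": 1457013762,
        "level": 1,                       # 0 = full, N = incremental
        "storage": "swift",               # swift, ssh, local, ...
        "engine_meta": {                  # enough to restore, even on a new cloud
            "volume_id": "c7d2a1...",
            "cinder_backup_id": "9a1e4f...",
        },
    }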
14:31:36 <daemontool_> #topic Nova instances incremental backups
14:31:48 <zhangjn> https://etherpad.openstack.org/p/tenant-backup
14:31:49 <daemontool_> most of the considerations from previous topic apply
14:31:56 <daemontool_> zhangjn,  ty
14:33:07 <daemontool_> so
14:33:16 <daemontool_> frescof, are you on?
14:34:20 <daemontool_> ok we need frescof on this
14:34:22 <daemontool_> next topic
14:34:35 <daemontool_> #topic Disaster recovery BP
14:34:41 <daemontool_> what's the status of this?
14:34:48 <daemontool_> frescof, ^^
14:35:10 <slashme> We are still discussing this
14:35:25 <daemontool_> is it possible to add the considerations in the review?
14:35:40 <slashme> It definitely can't be implemented in the way described in the BP.
14:35:48 <daemontool_> ok
14:35:52 <slashme> For the same reason we exposed before
14:35:53 <daemontool_> so we need to add the reasons there
14:36:02 <daemontool_> add the considerations
14:36:05 <slashme> But we figured an idea
14:36:08 <daemontool_> send the bp to the community
14:36:10 <daemontool_> and get feedback
14:36:15 <daemontool_> slashme,  that's fantastic
14:36:22 <daemontool_> but we  need to get that added to the review
14:36:25 <daemontool_> it's very important
14:36:28 <daemontool_> cause this is a big feature
14:36:47 <daemontool_> let's discuss that offline on #openstack-freezer
14:36:51 <slashme> Ideally I would like that topic to be frozen until the midcycle
14:36:52 <daemontool_> ok?
14:37:03 <daemontool_> ok
14:37:05 <daemontool_> np
14:37:08 <daemontool_> let's do that
14:37:14 <daemontool_> moving forward
14:37:17 <daemontool_> #topic Block based incremental rsync based
14:37:22 <daemontool_> so I'm working on this
14:37:35 <daemontool_> by next Thursday's meeting
14:37:40 <daemontool_> we'll have a first review available
14:37:53 <slashme> Nice.
14:37:55 <daemontool_> now it's easier after reldan's abstraction classes
14:38:01 <daemontool_> this is a blocker for nova incrementals
14:38:03 <daemontool_> so we need it
14:38:09 <daemontool_> I can focus on it now
14:38:14 <slashme> I have a question about this.
14:38:17 <zhangjn> Nice
14:38:23 <daemontool_> slashme, yes
14:38:43 <slashme> What kind of performance increase are we talking about with pypy ?
14:38:59 <daemontool_> like 10x
14:39:08 <slashme> My point is, is rsync based backup still viable without it ?
14:39:18 <daemontool_> it depends
14:39:21 <slashme> if it is only 10 times I'd say yes
14:39:33 <daemontool_> slashme,  if you have
14:39:41 <daemontool_> files let's say
14:39:45 <daemontool_> of 1 MB
14:39:55 <daemontool_> and then 50MB are appended to it
14:40:00 <daemontool_> the thing is bloody slow
14:40:05 <m3m0> can we use a different interpreter ? or are we bound to cpython only?
14:40:11 <daemontool_> that's the use case where pypy is really needed
14:40:24 <daemontool_> I don't know, I've tried with pypy and it is fast
14:40:30 <daemontool_> but let's have the code working
14:40:34 <daemontool_> then we can optimize later on
14:40:41 <slashme> yes
14:40:49 <slashme> thx for the precision
14:40:53 <daemontool_> :)
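For context, block-based incrementals along rsync lines are usually built on per-block signatures: a cheap rolling checksum to spot candidate matches and a strong hash to confirm them. A toy sketch of the signature side, with an arbitrary block size, not the code under review:

    import hashlib

    BLOCK_SIZE = 4096  # arbitrary; a real engine would tune this

    def weak_checksum(block):
        # Adler-32 style weak checksum, cheap enough to roll byte by byte
        data = bytearray(block)
        a = sum(data) % 65521
        b = sum((len(data) - i) * byte for i, byte in enumerate(data)) % 65521
        return (b << 16) | a

    def block_signatures(path):
        sigs = []
        with open(path, 'rb') as f:
            while True:
                block = f.read(BLOCK_SIZE)
                if not block:
                    break
                # weak checksum finds candidates, sha1 confirms real matches
                sigs.append((weak_checksum(block), hashlib.sha1(block).hexdigest()))
        return sigs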
14:40:55 <daemontool_> next
14:41:01 <daemontool_> #topic tempest tests
14:41:06 <daemontool_> ddieterly, ^^
14:41:09 <daemontool_> any comments on that?
14:41:10 <ddieterly> yes
14:41:20 <daemontool_> https://review.openstack.org/#/c/287369/
14:41:29 <ddieterly> i'm still trying to determine if the tests are actually being run in the gate
14:41:37 <ddieterly> if they are, great
14:41:44 <daemontool_> ddieterly,  which tests?
14:41:50 <ddieterly> tempest
14:41:51 <daemontool_> the ones listed in that patch?
14:42:01 <daemontool_> I don't think they are executed
14:42:03 <ddieterly> yes, there is only one test
14:42:09 <ddieterly> right
14:42:16 <daemontool_> not sure but the tests in that directory
14:42:19 <daemontool_> are not executed
14:42:21 <ddieterly> i need to get that one test to execute
14:42:25 <daemontool_> ok
14:42:27 <ddieterly> then we can add more
14:42:28 <daemontool_> I'll take a look later
14:42:32 <daemontool_> also
14:42:37 <daemontool_> anything to add?
14:42:45 <ddieterly> no
14:43:36 <ddieterly> well, if anyone knows how to get project-config to execute the tempest plugin tests that would be great
14:43:54 <daemontool_> I think the solution is in tox
14:43:54 <ddieterly> i added a new gate job for the api
14:44:01 <daemontool_> ah ok
14:44:03 <ddieterly> no, i don't think so
14:44:15 <daemontool_> I'll take a look at it
14:44:15 <EinstCrazy> we need to add some project in the project-config
14:44:16 <daemontool_> ok
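For background on why plugin tests can silently never run: tempest only discovers them through a plugin class exposed via a 'tempest.test_plugins' entry point in setup.cfg. A rough sketch of that hook, with the test directory path as a placeholder for wherever the freezer plugin actually keeps its tests:

    import os

    from tempest.test_discover import plugins


    class FreezerTempestPlugin(plugins.TempestPlugin):

        def load_tests(self):
            base_path = os.path.split(os.path.dirname(os.path.abspath(__file__)))[0]
            test_dir = "freezer/tests/freezer_tempest_plugin/tests"  # placeholder path
            return os.path.join(base_path, test_dir), base_path

        def register_opts(self, conf):
            pass

        def get_opt_lists(self):
            return []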
14:44:51 <daemontool_> next topic
14:44:55 <daemontool_> #topic python-freezerclient new repo and package creation
14:45:02 <daemontool_> what needs to be done for the python-freezerclient?
14:45:05 <daemontool_> m3m0,  ^^
14:45:23 <m3m0> so far the ui is using the python-freezerclient
14:45:34 <m3m0> it's not complete yet but it's progressing quite well
14:45:46 <daemontool_> so next steps are
14:45:50 <daemontool_> 1) create pypi package
14:45:56 <m3m0> the cliff framework is very nice as well
14:45:58 <daemontool_> 2) create the new repo
14:46:10 <daemontool_> 3) have the freezer-scheduler use it
14:46:18 <m3m0> the next steps are to create the pypi package and the openstack/python-freezerclient repo
14:46:19 <daemontool_> 4) we need to write all the documentation for it
14:46:22 <daemontool_> ok
14:46:36 <m3m0> and I'm adding the sphinx docs as well
14:46:55 <slashme> One big question on this one. Do we try to have it in Mitaka ?
14:47:05 <m3m0> daemontool
14:47:10 <daemontool_> slashme, yes
14:47:10 <m3m0> ^^
14:47:16 <slashme> My opinion is yes as well
14:47:18 <daemontool_> m3m0,  does it preserve the git history?
14:47:22 <m3m0> yes it does
14:47:25 <daemontool_> ok
14:47:43 <daemontool_> m3m0,  are the tests that were available in apiclient
14:47:53 <daemontool_> currently used in python-freezerclient?
14:48:14 <m3m0> yes, but I need to get the history for them as well
14:48:48 <daemontool_> then let me know
14:48:51 <daemontool_> when that is done please
14:49:01 <m3m0> sure, I will
14:49:09 <daemontool_> anything to add to this topic?
14:49:12 <m3m0> no
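Since the new client is being built on cliff, a hedged sketch of what a backup-listing command could look like; the self.app.client attribute and its backups.list() call are assumptions about the eventual python-freezerclient interface, not its actual API:

    from cliff.lister import Lister


    class BackupList(Lister):
        """List backups known to the freezer API."""

        def get_parser(self, prog_name):
            parser = super(BackupList, self).get_parser(prog_name)
            parser.add_argument('--limit', type=int, default=100)
            return parser

        def take_action(self, parsed_args):
            # self.app.client is assumed to be wired up by the cliff App
            backups = self.app.client.backups.list(limit=parsed_args.limit)
            columns = ('Backup ID', 'Name', 'Timestamp')
            data = ((b.get('backup_id'), b.get('backup_name'), b.get('time_stamp'))
                    for b in backups)
            return columns, data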
14:49:13 <daemontool_> next
14:49:16 <daemontool_> #topic specs repo
14:49:21 <daemontool_> we have the repo
14:49:29 <daemontool_> https://github.com/openstack/freezer-specs
14:49:33 <daemontool_> so please use that for specs
14:49:37 <daemontool_> I'm going to add some other info
14:49:39 <daemontool_> and the structure
14:49:45 <daemontool_> like the other openstack projects
14:49:49 <daemontool_> explaining also the process
14:49:57 <daemontool_> anything to add to this topic?
14:49:59 <slashme> About the specs
14:50:31 <daemontool_> slashme, yes
14:50:47 <daemontool_> ?
14:50:57 <EinstCrazy> I think we need a sample for this
14:51:02 <daemontool_> EinstCrazy,  yes
14:51:07 <zhangjn> reference https://github.com/openstack/nova-specs
14:51:09 <slashme> What is the process ? Add a spec with general description of a feature/architecture, When it is merged, then write extensive blueprint with how implementation will work ?
14:51:22 <daemontool_> I'll write that in that repo README
14:51:26 <slashme> Or is it the opposite ?
14:51:40 <daemontool_> we'll use the same approach as nova https://github.com/openstack/nova-specs
14:51:53 <daemontool_> first gerrit, merged and approved, then launchpad
14:52:05 <daemontool_> from my point of view
14:52:06 <daemontool_> the bp
14:52:10 <daemontool_> the more information it has
14:52:12 <daemontool_> the better it is
14:52:12 <zhangjn> use sphinx format and publish to http://specs.openstack.org
14:52:23 <slashme> Okay. +1 to that
14:52:23 <daemontool_> zhangjn,  ++
14:52:32 <daemontool_> I'll add it by EOW
14:52:38 <daemontool_> anything else to add to this topic?
14:52:50 <daemontool_> next
14:52:53 <daemontool_> #topic  what needs to be done for Mitaka 3?
14:53:00 <daemontool_> tentative:
14:53:03 <daemontool_> 1) rsync
14:53:12 <daemontool_> 2) backup metadata
14:53:19 <daemontool_> 3) cinder backup metadata
14:53:28 <daemontool_> 4) python-freezerclient
14:53:33 <daemontool_> 5) tempest
14:53:38 <daemontool_> sounds?
14:53:40 <slashme> list_backup command
14:53:49 <daemontool_> 6) list_backup
14:53:55 <daemontool_> from the python-freezerclient
14:54:13 <m3m0> 6 is already included
14:55:18 <daemontool_> so
14:55:19 <daemontool_> anything to add?
14:55:26 <daemontool_> moving next
14:55:30 <daemontool_> #topic When are we going to freeze features for Mitaka?
14:56:21 <daemontool_> Friday 18th of March?
14:56:39 <daemontool_> we need to respect http://releases.openstack.org/mitaka/schedule.html
14:57:23 <daemontool_> ok so if there are no objections let's set that date
14:57:25 <daemontool_> anything to add?
14:57:56 <daemontool_> next
14:58:00 <daemontool_> #topic Easier to use devstack installation for freezer, including the tempest tests, etc.
14:58:17 <daemontool_> zhangjn, ^^ do you have any input on this?
14:58:29 <daemontool_> yangyapeng, ^^?
14:58:32 <daemontool_> I totally agree
14:58:41 <yangyapeng> :)
14:58:46 <daemontool_> :)
14:59:01 <daemontool_> yangyapeng,  do you want to take ownership of this?
14:59:06 <daemontool_> we are running out of time
14:59:15 <daemontool_> let's have this discussion at #openstack-freezer
14:59:23 <zhangjn> OK
14:59:25 <yangyapeng> ok
14:59:40 <zhangjn> no time for discussion.
14:59:50 <daemontool_> #endmeeting freezer
14:59:57 <slashme> thx daemontool_
15:00:38 <daemontool_> ok meeting is over, thanks all
15:00:53 <ddieterly> ciao
15:00:59 <zhangjn> thx daemontool
15:01:17 <daemontool_> #endmeeting