*** dschroeder has quit IRC | 00:01 | |
*** daemontool has quit IRC | 00:08 | |
*** daemontool has joined #openstack-freezer | 00:08 | |
daemontool | vannif, ++ python update.py /opt/stack/freezer-api | 00:30 |
daemontool | Version change for: falcon, jsonschema, keystonemiddleware, oslo.config, oslo.i18n | 00:30 |
daemontool | Updated /opt/stack/freezer-api/requirements.txt: | 00:30 |
daemontool | falcon>=0.1.6 -> falcon>=0.1.6,<0.2.0 | 00:30 |
daemontool | jsonschema>=2.0.0,<3.0.0,!=2.5 -> jsonschema>=2.0.0,<3.0.0 | 00:30 |
daemontool | keystonemiddleware>=2.0.0,!=2. -> keystonemiddleware>=1.5.0,<1.6.0 | 00:30 |
daemontool | oslo.config>=2.3.0 # Apache-2. -> oslo.config>=1.9.3,<1.10.0 # Apache-2.0 | 00:30 |
daemontool | oslo.i18n>=1.5.0 # Apache-2.0 -> oslo.i18n>=1.5.0,<1.6.0 # Apache-2.0 | 00:30 |
daemontool | 'pytest' is not in global-requirements.txt | 00:30 |
daemontool | 'pytest-cov' is not in global-requirements.txt | 00:30 |
daemontool | 'pytest-xdist' is not in global-requirements.txt | 00:30 |
daemontool | Version change for: coverage, flake8, mock | 00:30 |
daemontool | Updated /opt/stack/freezer-api/test-requirements.txt: | 00:30 |
daemontool | coverage -> coverage>=3.6 | 00:30 |
daemontool | flake8>=2.2.4,<=2.4.1 -> flake8==2.2.4 | 00:31 |
daemontool | mock>=1.2 -> mock>=1.0,<1.1.0 | 00:31 |
daemontool | Traceback (most recent call last): | 00:31 |
daemontool | File "update.py", line 270, in <module> | 00:31 |
daemontool | main() | 00:31 |
daemontool | File "update.py", line 253, in main | 00:31 |
daemontool | stdout, options.verbose, non_std_reqs) | 00:31 |
daemontool | File "update.py", line 266, in _do_main | 00:31 |
daemontool | project.write(proj, actions, stdout=stdout, verbose=verbose) | 00:31 |
daemontool | File "/opt/stack/requirements/openstack_requirements/project.py", line 180, in write | 00:31 |
daemontool | raise Exception("Error occured processing %s" % (project['root'])) | 00:31 |
daemontool | Exception: Error occured processing /opt/stack/freezer-api | 00:31 |
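The traceback above is update.py aborting because freezer-api's test requirements list packages (pytest, pytest-cov, pytest-xdist) that are absent from global-requirements.txt. As a minimal sketch only, not the actual openstack_requirements code, a check along these lines would reproduce the "is not in global-requirements.txt" messages; the file paths are the ones from the log and may differ on other setups:

    # Rough sketch: flag local requirements that are missing from
    # global-requirements.txt. Not the real openstack_requirements logic.
    import re

    def parse_names(path):
        names = set()
        with open(path) as handle:
            for line in handle:
                line = line.split('#')[0].strip()
                if not line:
                    continue
                # Keep only the distribution name, dropping version specifiers.
                name = re.split(r'[<>=!;\[ ]', line, 1)[0].lower()
                if name:
                    names.add(name)
        return names

    global_reqs = parse_names('/opt/stack/requirements/global-requirements.txt')
    local_reqs = parse_names('/opt/stack/freezer-api/test-requirements.txt')
    for req in sorted(local_reqs - global_reqs):
        print("'%s' is not in global-requirements.txt" % req)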
*** zhonghua-lee has quit IRC | 01:55 | |
*** zhonghua-lee has joined #openstack-freezer | 01:56 | |
*** DuncanT has quit IRC | 02:01 | |
*** c00281451 has joined #openstack-freezer | 02:02 | |
*** DuncanT has joined #openstack-freezer | 02:04 | |
*** c00281451_ has joined #openstack-freezer | 03:07 | |
*** c00281451 has quit IRC | 03:09 | |
*** memogarcia has quit IRC | 04:22 | |
*** memogarcia has joined #openstack-freezer | 06:21 | |
*** memogarcia has quit IRC | 06:25 | |
*** openstackgerrit has quit IRC | 08:47 | |
*** openstackgerrit has joined #openstack-freezer | 08:47 | |
*** reldan has joined #openstack-freezer | 08:48 | |
Slashme | @daemontool You were looking for this a few days ago: https://www.openstack.org/summit-login/login?BackURL=/summit/austin-2016/call-for-speakers/&awesm=awe.sm_aNCPT | 08:50 |
*** reldan has quit IRC | 09:31 | |
*** reldan has joined #openstack-freezer | 09:50 | |
*** reldan has quit IRC | 09:56 | |
daemontool | Slashme, ty, we should think about the talk | 09:58 |
*** daemontool has quit IRC | 10:16 | |
*** daemontool has joined #openstack-freezer | 10:26 | |
*** reldan has joined #openstack-freezer | 10:30 | |
daemontool | vannif, we need to totally migrate freezer-api to testr | 10:55 |
openstackgerrit | Fausto Marzi proposed openstack/freezer-api: Switch to testr from pytest https://review.openstack.org/260950 | 10:56 |
vannif | ok. it seems you found me a hobby for the Christmas holidays ;) | 10:58 |
daemontool | haha | 11:00 |
daemontool | vannif, that's | 11:00 |
daemontool | that commit | 11:00 |
daemontool | should fix that | 11:00 |
daemontool | but there's an issue | 11:01 |
daemontool | http://logs.openstack.org/50/260950/1/check/gate-freezer-api-python27/d483519/console.html | 11:01 |
daemontool | well, there are multiple issues in the ci | 11:02 |
daemontool | but our issue is reproducible if you use tox -r -v locally on that patchset | 11:03 |
daemontool | and on the other side | 11:03 |
daemontool | with devstack | 11:03 |
daemontool | as we still use pytest | 11:03 |
daemontool | at build time, it returns an error | 11:04 |
daemontool | as pytest is not in the global requirements | 11:04 |
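For context, the switch from pytest to testr mostly comes down to adding the usual OpenStack-style .testr.conf so tox can drive testrepository/subunit instead of pytest. A rough sketch is below; the test directory is a guess, not necessarily freezer-api's actual layout:

    [DEFAULT]
    test_command=OS_STDOUT_CAPTURE=${OS_STDOUT_CAPTURE:-1} \
                 OS_STDERR_CAPTURE=${OS_STDERR_CAPTURE:-1} \
                 ${PYTHON:-python} -m subunit.run discover -t ./ ./freezer_api/tests $LISTOPT $IDOPTION
    test_id_option=--load-list $IDFILE
    test_list_option=--list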
daemontool | vannif, Slashme are we going to keep using apscheduler and pepdaemon? | 11:04 |
daemontool | if yes it's ok, but I need to add them to the global requirements of mitaka | 11:05 |
daemontool | as soon as possible | 11:05 |
*** emildi has quit IRC | 11:05 | |
daemontool | reldan, ^^ | 11:05 |
reldan | Hi | 11:06 |
reldan | Are you about pytest? | 11:06 |
daemontool | reldan, I'm removing pytest from freezer-api | 11:06 |
daemontool | but it's quite easy as it is not used in the test code | 11:06 |
daemontool | we only need to do the boilerplate to use testr/subunit | 11:07 |
daemontool | vannif, is looking at that | 11:07 |
reldan | Oh, great. How can I help? | 11:07 |
daemontool | what I'd like to understand now is about | 11:07 |
daemontool | apscheduler and pepdaemon | 11:07 |
daemontool | cause they are not in the global requirements | 11:07 |
daemontool | so if we keep using them | 11:07 |
daemontool | we have to add them to the global-requirements.txt of Mitaka | 11:08 |
daemontool | otherwise if we build freezer in devstack automatically | 11:08 |
daemontool | it will throw an error | 11:08 |
daemontool | because those modules are not part of the global-requirements | 11:08 |
vannif | pepdaemon can be replaced quite easily. but the scheduler would take more time. do you know of any other library already listed in OS requirements, which provides similar functionality? | 11:09 |
daemontool | szaher, what about https://review.openstack.org/#/c/239905/ ? | 11:09 |
daemontool | I don't know | 11:09 |
daemontool | but if we need them | 11:09 |
daemontool | it's not a big deal | 11:09 |
daemontool | we just add it to the globals | 11:09 |
daemontool | and we are grand | 11:09 |
*** emildi has joined #openstack-freezer | 11:09 | |
vannif | also, what is better: reinvent the wheel, or add a library to the requirements ? | 11:09 |
daemontool | add a library | 11:10 |
daemontool | to req | 11:10 |
vannif | then add the libraries ^^ | 11:10 |
daemontool | ok | 11:10 |
daemontool | now, another issue is that freezer-web-ui currently does not build on devstack | 11:10 |
daemontool | but this is not a high priority issue now | 11:11 |
daemontool | today I'd like to merge the parallels | 11:11 |
daemontool | I just have to finish testing 2 ssh and 2 swift | 11:11 |
daemontool | and I'll give +2 | 11:11 |
daemontool | reldan, ^^ | 11:12 |
reldan | Amazing ) | 11:12 |
daemontool | and please | 11:12 |
reldan | Thank you! | 11:12 |
daemontool | review this https://review.openstack.org/#/c/259905/ | 11:12 |
daemontool | vannif, Slashme reldan szaher frescof | 11:13 |
reldan | Doing it | 11:13 |
daemontool | I can't recall what topic we were needing a bp for, other than tenant-based backups.... can anyone help me? | 11:14 |
daemontool | reldan, was it the metadata for something? | 11:14 |
reldan | I remember that it should be multiregion tenant backup | 11:15 |
reldan | http://eavesdrop.openstack.org/irclogs/%23openstack-freezer/%23openstack-freezer.2015-12-17.log.html | 11:18 |
reldan | Yes, let’s imagine that we have cindernative in two regions | 11:18 |
reldan | so we should have some format of backup that describes all cindernatives in different regions | 11:18 |
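As a purely hypothetical illustration of the descriptor reldan is describing (none of these field names are an agreed format), a tenant-level backup record could enumerate the cindernative backups held in each region:

    # Hypothetical multi-region tenant backup descriptor; every field name
    # here is illustrative only, not an agreed freezer format.
    tenant_backup = {
        "tenant_id": "b2f1...",          # placeholder tenant
        "timestamp": 1450700000,
        "regions": {
            "region-one": {
                "cindernative_backups": [
                    {"volume_id": "vol-1", "backup_id": "bak-1"},
                ],
            },
            "region-two": {
                "cindernative_backups": [
                    {"volume_id": "vol-2", "backup_id": "bak-2"},
                ],
            },
        },
    }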
daemontool | reldan, that's right | 11:20 |
daemontool | so it's similar to what is written + managing multiple openstack client objects? | 11:21 |
daemontool | after all, multi-region is just two or more client objects | 11:22 |
daemontool | is my assumption more or less correct? | 11:22 |
reldan | You know it is more about tenant backup | 11:23 |
reldan | I have read your blueprint, it is good. But something is not clear to me | 11:23 |
reldan | For example - public IP | 11:23 |
daemontool | ok write it also there please | 11:23 |
daemontool | I think the public IP probably has to be the same | 11:24 |
daemontool | because we back up something in a specific state with those settings | 11:24 |
daemontool | and then we restore that | 11:24 |
reldan | Yes, but the stored IP can already be taken by someone else | 11:24 |
reldan | And we cannot assign it | 11:24 |
daemontool | well that can happen anyway | 11:25 |
reldan | Or we are trying to restore our tenant in a different installation | 11:25 |
daemontool | regardless of the backup I think | 11:25 |
daemontool | yes | 11:25 |
daemontool | I see that | 11:25 |
daemontool | probably in the first instance | 11:25 |
daemontool | the ip is the same | 11:25 |
Slashme | I think as much as possible, the restore should be idempotent, so if there is a problem, you can correct it and re-try the restore. | 11:25 |
reldan | Yes, but what should we do - try to assign new ones or do nothing? | 11:26 |
daemontool | in the first instance we should report it in the logs, api, web ui and do nothing | 11:26 |
daemontool | at least this is my opinion | 11:27 |
daemontool | I think this is a bit different than backup and restore | 11:27 |
daemontool | even though it is brilliant | 11:28 |
reldan | Ok, fine. The second question - if we are doing cindernative, our backup is stored in a swift container in the same region. And it will be impossible to restore if we lose our swift in that region. | 11:28 |
daemontool | cause now we are not thinking about restoring what we back up | 11:28 |
daemontool | but we are thinking about how to restore the data and change settings | 11:28 |
daemontool | so it would be: how do we want the service to look after we restore our data | 11:28 |
daemontool | right? | 11:29 |
reldan | Yes, you are right. How it should be stored | 11:29 |
daemontool | it's interesting and useful, but I'd say, let's first focus on restoring the exact same thing | 11:29 |
daemontool | then we can find a way to define what to change pre/post backup | 11:29 |
daemontool | it's a complex matter | 11:30 |
reldan | Or let’s say swift backup (I mean backup of containers). Are we going to compress and encrypt it? | 11:30 |
daemontool | reldan, I agree with your second question | 11:30 |
daemontool | that's data availability | 11:30 |
daemontool | I think we should yes | 11:30 |
reldan | But it can be petabytes of data in billions of files and we have no fast mechanism for doing a diff | 11:31 |
daemontool | I agree that's a good point and a different challenge | 11:31 |
reldan | And if we would like to implement backups on a per-day basis - it will be just crazy | 11:31 |
daemontool | anyway | 11:31 |
daemontool | I agree | 11:31 |
daemontool | but that's tenants | 11:31 |
daemontool | it is unlikely that one tenant has billions of files and petabytes | 11:32 |
daemontool | I don't think anyone would do that | 11:33 |
daemontool | but let's say a user wants to do that every 4 months | 11:33 |
daemontool | we still have the same challenge | 11:33 |
reldan | Agree | 11:34 |
reldan | We can arrange a meeting | 11:35 |
daemontool | I think we should probably focus now on getting that available first | 11:36 |
daemontool | with basic features | 11:36 |
daemontool | yes | 11:36 |
reldan | Yes, it is great that we have a blueprint now. Probably we should discuss how to split this big task into several small ones. And then mark every small task as “ready to implement” or “needs clarification on this and this” | 11:37 |
daemontool | ok | 11:37 |
daemontool | ++ | 11:37 |
daemontool | Slashme, does that sound good to you? | 11:37 |
Slashme | My only concern is the targeted release. I don't think mitaka is realistic, even for a basic feature. | 11:38 |
daemontool | Slashme, I agree | 11:39 |
Slashme | Apart from that, the feature is great, there are lots of use-cases and potential interested users I think. | 11:40 |
*** reldan has quit IRC | 11:41 | |
daemontool | Slashme, well if we can have anything by end of February | 11:43 |
daemontool | we have like 2 full months | 11:43 |
*** reldan has joined #openstack-freezer | 11:57 | |
*** openstackgerrit has quit IRC | 12:17 | |
*** openstackgerrit has joined #openstack-freezer | 12:18 | |
*** memogarcia has joined #openstack-freezer | 12:30 | |
daemontool | reldan, now that I'm thinking about it, we can't do incremental for swift objects | 13:32 |
daemontool | cause there's no way to write a piece of object | 13:32 |
daemontool | to swift | 13:33 |
daemontool | we can only use timestamp difference | 13:33 |
reldan | But it is probably possible to do incrementals that are not block-based but file-based | 13:33 |
daemontool | and if the timestamp changed, we backup it | 13:33 |
daemontool | yes | 13:33 |
daemontool | object based | 13:33 |
reldan | But I don’t know any fast way to do this other than listing every container | 13:33 |
reldan | And it may be quite slow | 13:34 |
reldan | even if we have no changes at all, we would still have to list every single object in swift | 13:34 |
daemontool | we can retrieve 10k objects at a time | 13:34 |
daemontool | per request | 13:34 |
daemontool | I mean, we can list 10k objects per request | 13:35 |
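A minimal sketch of that object-level, timestamp-based incremental using python-swiftclient paging; the auth parameters, container name, and 10k page size are placeholders, not a worked-out design:

    # Page through a swift container listing (10k objects per request) and
    # pick the objects whose last_modified is newer than the previous backup.
    import swiftclient.client

    conn = swiftclient.client.Connection(
        authurl='http://keystone.example.com:5000/v2.0',   # placeholder
        user='demo', key='secret', tenant_name='demo', auth_version='2.0')

    def changed_since(container, last_backup_iso, page_size=10000):
        """Yield names of objects modified after the previous backup."""
        marker = ''
        while True:
            _headers, objects = conn.get_container(
                container, marker=marker, limit=page_size)
            if not objects:
                break
            for obj in objects:
                # last_modified is an ISO 8601 string, so a lexical
                # comparison against another ISO timestamp works.
                if obj['last_modified'] > last_backup_iso:
                    yield obj['name']
            marker = objects[-1]['name']

    for name in changed_since('my-container', '2015-12-20T00:00:00.000000'):
        print(name)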
reldan | Yes, but usually I use swift for storing hundreds of millions of objects | 13:36 |
reldan | Like I did at my previous work | 13:36 |
reldan | We can calculate average traffic | 13:36 |
reldan | but usually cloud providers charge for bandwidth as well | 13:37 |
reldan | so it may be very expensive | 13:37 |
daemontool | generally the traffic within the cloud is not billed | 13:39 |
daemontool | if we move the object outside yes | 13:39 |
daemontool | but that's part of the intrinsic cost of the backups | 13:40 |
daemontool | you have to move the data anyway | 13:40 |
daemontool | also this is public cloud specific | 13:40 |
daemontool | for private cloud deployments costs are different | 13:40 |
reldan | I suppose traffic between vm is not billed, but traffic between vm and swift should be | 13:40 |
daemontool | I don't know | 13:41 |
reldan | Anyway, let’s not consider price for now | 13:43 |
reldan | For listing we should get file_name, file_size and attributes | 13:43 |
reldan | Let’s say it should be 100 bytes per file | 13:43 |
reldan | And we have 1 mln files | 13:44 |
daemontool | and timestamp | 13:44 |
reldan | So we need something around 100 MB of traffic just for the listing | 13:44 |
daemontool | I think it's more | 13:44 |
daemontool | at least 200 bytes of metadata per file | 13:45 |
daemontool | for 1 million objects | 13:45 |
reldan | Then 200 mb per 1 mln objects for metainformation | 13:45 |
reldan | if we have 10k objects per request, we should invoke it 100 times, each partition will be 2 mb | 13:46 |
reldan | If a user has 10 million, it will be 2 GB of metainformation | 13:47 |
daemontool | yes that's the math | 13:47 |
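A quick back-of-the-envelope check of the listing-traffic estimate above (the 200 bytes of metadata per object is the assumption from the discussion):

    # Rough arithmetic check of the listing-traffic estimate.
    bytes_per_object = 200          # assumed metadata per object (see above)
    objects = 10 * 1000 * 1000      # 10 million objects
    page_size = 10000               # objects returned per listing request
    requests = objects // page_size                     # -> 1000 requests
    total_mb = objects * bytes_per_object / 1000000.0   # -> ~2000 MB (~2 GB)
    print("%d requests, ~%.0f MB of listing metadata" % (requests, total_mb))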
reldan | Not fast, but seems doable. But there may be a good question | 13:49 |
reldan | What if we have writes during the backup | 13:49 |
reldan | So we could write some objects two times and some objects not at all | 13:49 |
reldan | Or they have paging? | 13:50 |
reldan | with id | 13:50 |
daemontool | swift supports paging | 13:51 |
daemontool | but there's no snapshot feature in there | 13:51 |
daemontool | so that's more a question for swift, like | 13:51 |
daemontool | what happens if someone writes an object while concurrently someone else | 13:51 |
daemontool | reads it? | 13:52 |
reldan | Yes | 13:53 |
daemontool | I don't know | 13:53 |
reldan | And an additional question: if we would like to support encryption and compression for swift. | 13:54 |
daemontool | we should look into object concurrency on swift | 13:54 |
daemontool | I was taking a look, the transfer can be compressed with swift | 13:54 |
daemontool | I think we should | 13:54 |
reldan | Should we do it per file or per all container | 13:54 |
reldan | whole | 13:54 |
daemontool | for the incremental | 13:54 |
daemontool | I think it should be per object | 13:54 |
reldan | in this case we should obfuscate names | 13:55 |
daemontool | in the metadata | 13:55 |
daemontool | ? | 13:55 |
daemontool | we can just encrypt the metadata too | 13:55 |
reldan | yes, but if we do compression on a per-file basis, we have a lot of small archives | 13:56 |
reldan | and we should be sure that no one can read original names | 13:56 |
reldan | if we do compression per object - it means we have a lot of small compressions, and it may not be very useful | 13:57 |
daemontool | yes but then if you need to restore | 13:57 |
daemontool | one object, then you need to restore one container, or the whole thing | 13:57 |
reldan | You are right | 13:58 |
daemontool | I think the metadata is one single blob | 13:58 |
daemontool | compressed/encrypted | 13:58 |
daemontool | and the compressed/encrypted objects | 13:58 |
daemontool | are per object | 13:58 |
daemontool | the metadata would be like tar metadata | 13:58 |
reldan | yes, with obfuscated names | 13:58 |
daemontool | yes with | 13:58 |
daemontool | the metadata encrypted | 13:59 |
daemontool | (which I'm not sure if we have that now) | 13:59 |
daemontool | can't remember | 13:59 |
reldan | Nope, now metadata is not encrypted | 13:59 |
reldan | and not compressed | 13:59 |
daemontool | ok | 13:59 |
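A hedged sketch of the scheme converged on above: compress and encrypt each object individually, and keep one separate metadata blob, itself compressed and encrypted, that maps obfuscated stored names back to the originals. The library choice (cryptography's Fernet) and the helper names are illustrative assumptions, not what freezer does today:

    # Per-object compression + encryption, plus one encrypted metadata index
    # with obfuscated stored names. Purely a sketch, not freezer's format.
    import json
    import uuid
    import zlib

    from cryptography.fernet import Fernet

    key = Fernet.generate_key()
    fernet = Fernet(key)

    def pack_object(data):
        """Compress then encrypt one object's payload (bytes in, bytes out)."""
        return fernet.encrypt(zlib.compress(data))

    def pack_container(objects):
        """Return (metadata_blob, stored_objects) for a dict of name -> bytes."""
        stored, index = {}, {}
        for name, data in objects.items():
            obfuscated = uuid.uuid4().hex       # stored name reveals nothing
            stored[obfuscated] = pack_object(data)
            index[obfuscated] = {"name": name, "size": len(data)}
        metadata_blob = fernet.encrypt(zlib.compress(json.dumps(index).encode()))
        return metadata_blob, stored

    metadata, stored = pack_container({"reports/q4.txt": b"hello swift"})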
*** emildi has quit IRC | 15:02 | |
*** daemontool has quit IRC | 15:29 | |
*** daemontool has joined #openstack-freezer | 15:42 | |
*** dschroeder has joined #openstack-freezer | 16:22 | |
*** reldan has quit IRC | 16:59 | |
*** reldan has joined #openstack-freezer | 17:01 | |
*** daemontool has quit IRC | 17:02 | |
*** reldan has quit IRC | 18:02 | |
*** reldan has joined #openstack-freezer | 18:28 | |
*** reldan has quit IRC | 18:32 | |
*** reldan has joined #openstack-freezer | 18:33 | |
*** reldan has quit IRC | 18:37 | |
*** reldan has joined #openstack-freezer | 18:56 | |
*** zhonghua-lee has quit IRC | 19:27 | |
*** zhonghua-lee has joined #openstack-freezer | 19:28 | |
*** zhonghua-lee has quit IRC | 19:29 | |
*** dschroeder has quit IRC | 19:45 | |
*** reldan has quit IRC | 20:48 | |
*** reldan has joined #openstack-freezer | 20:50 | |
*** reldan has quit IRC | 21:32 | |
*** reldan has joined #openstack-freezer | 22:08 | |
*** openstack has joined #openstack-freezer | 22:23 | |
*** reldan has quit IRC | 22:34 | |
*** openstack has joined #openstack-freezer | 22:38 |