*** via has joined #openstack-meeting-alt | 01:29 | |
*** yidclare has quit IRC | 01:36 | |
*** rmohan has quit IRC | 01:41 | |
*** rmohan has joined #openstack-meeting-alt | 01:52 | |
*** rmohan has quit IRC | 02:22 | |
*** rmohan has joined #openstack-meeting-alt | 02:23 | |
*** vipul is now known as vipul|away | 03:15 | |
*** esp has joined #openstack-meeting-alt | 03:28 | |
*** esp has left #openstack-meeting-alt | 03:43 | |
*** SergeyLukjanov has joined #openstack-meeting-alt | 03:59 | |
*** cp16net_ has joined #openstack-meeting-alt | 04:16 | |
*** cp16net has quit IRC | 04:16 | |
*** cp16net_ is now known as cp16net | 04:16 | |
*** dmitryme has joined #openstack-meeting-alt | 04:31 | |
*** dmitryme has quit IRC | 04:37 | |
*** dmitryme2 has joined #openstack-meeting-alt | 04:38 | |
*** sacharya has joined #openstack-meeting-alt | 04:50 | |
*** dmitryme2 has quit IRC | 04:50 | |
*** SergeyLukjanov has quit IRC | 04:56 | |
*** via_ has joined #openstack-meeting-alt | 05:32 | |
*** chmouel_ has joined #openstack-meeting-alt | 05:34 | |
*** via has quit IRC | 05:38 | |
*** chmouel has quit IRC | 05:38 | |
*** sacharya has quit IRC | 05:51 | |
*** grapex has quit IRC | 05:54 | |
*** yidclare has joined #openstack-meeting-alt | 06:04 | |
*** vipul|away is now known as vipul | 06:47 | |
*** vipul is now known as vipul|away | 07:21 | |
*** SergeyLukjanov has joined #openstack-meeting-alt | 07:47 | |
*** zykes- has quit IRC | 08:00 | |
*** zykes- has joined #openstack-meeting-alt | 08:01 | |
*** SergeyLukjanov has quit IRC | 08:07 | |
*** SergeyLukjanov has joined #openstack-meeting-alt | 08:14 | |
*** SergeyLukjanov has quit IRC | 08:54 | |
*** SergeyLukjanov has joined #openstack-meeting-alt | 08:56 | |
*** SergeyLukjanov has quit IRC | 09:47 | |
*** SergeyLukjanov has joined #openstack-meeting-alt | 10:23 | |
*** grapex has joined #openstack-meeting-alt | 11:44 | |
*** grapex has quit IRC | 11:44 | |
*** grapex has joined #openstack-meeting-alt | 11:44 | |
*** grapex has quit IRC | 12:14 | |
*** rnirmal has joined #openstack-meeting-alt | 12:26 | |
*** rnirmal_ has joined #openstack-meeting-alt | 12:45 | |
*** rnirmal_ has quit IRC | 12:45 | |
*** rnirmal has quit IRC | 12:49 | |
*** jcru has joined #openstack-meeting-alt | 12:52 | |
*** amyt has joined #openstack-meeting-alt | 12:53 | |
*** rnirmal has joined #openstack-meeting-alt | 13:12 | |
*** via_ is now known as via | 13:26 | |
*** amyt has quit IRC | 13:31 | |
*** sacharya has joined #openstack-meeting-alt | 13:49 | |
*** cloudchimp has joined #openstack-meeting-alt | 13:53 | |
*** cp16net is now known as cp16net|away | 14:13 | |
*** djohnstone has joined #openstack-meeting-alt | 14:13 | |
*** amyt has joined #openstack-meeting-alt | 14:31 | |
*** jcru is now known as jcru|away | 14:48 | |
*** chmouel_ is now known as chmouel | 14:50 | |
*** sacharya has quit IRC | 14:54 | |
*** cp16net|away is now known as cp16net | 15:04 | |
*** jcru|away is now known as jcru | 15:07 | |
*** nikhil has joined #openstack-meeting-alt | 15:08 | |
*** iccha_ has joined #openstack-meeting-alt | 15:23 | |
*** ameade has joined #openstack-meeting-alt | 15:23 | |
*** grapex has joined #openstack-meeting-alt | 15:24 | |
*** jcru is now known as jcru|away | 15:29 | |
*** jcru|away is now known as jcru | 15:32 | |
*** vipul|away is now known as vipul | 15:35 | |
*** sacharya has joined #openstack-meeting-alt | 15:35 | |
*** grapex has left #openstack-meeting-alt | 15:36 | |
*** HenryG has joined #openstack-meeting-alt | 16:01 | |
*** amyt has quit IRC | 16:03 | |
*** amyt has joined #openstack-meeting-alt | 16:03 | |
*** SergeyLukjanov has quit IRC | 16:18 | |
*** yidclare has quit IRC | 16:19 | |
*** bdpayne has quit IRC | 16:46 | |
*** bdpayne has joined #openstack-meeting-alt | 16:47 | |
*** esp has joined #openstack-meeting-alt | 16:51 | |
*** esp has left #openstack-meeting-alt | 16:51 | |
*** EmilienM has quit IRC | 17:01 | |
*** wirehead_ has joined #openstack-meeting-alt | 17:10 | |
*** flaper87 has joined #openstack-meeting-alt | 17:17 | |
*** nkonovalov has joined #openstack-meeting-alt | 17:21 | |
*** cp16net is now known as cp16net|away | 17:25 | |
*** bdpayne has quit IRC | 17:26 | |
*** yidclare has joined #openstack-meeting-alt | 17:29 | |
*** MarkAtwood has joined #openstack-meeting-alt | 17:44 | |
*** sacharya has quit IRC | 17:46 | |
*** dmitryme has joined #openstack-meeting-alt | 17:50 | |
*** SergeyLukjanov has joined #openstack-meeting-alt | 17:50 | |
*** aignatov3 has joined #openstack-meeting-alt | 17:54 | |
*** bdpayne has joined #openstack-meeting-alt | 17:54 | |
SergeyLukjanov | Hey everybody | 18:00 |
ogelbukh | o/ | 18:00 |
SergeyLukjanov | we will start the Savanna project meeting in five minutes | 18:01 |
*** EmilienM has joined #openstack-meeting-alt | 18:01 | |
SergeyLukjanov | #startmeeting savanna | 18:04 |
openstack | Meeting started Thu Apr 4 18:04:27 2013 UTC. The chair is SergeyLukjanov. Information about MeetBot at http://wiki.debian.org/MeetBot. | 18:04 |
openstack | Useful Commands: #action #agreed #help #info #idea #link #topic #startvote. | 18:04 |
*** openstack changes topic to " (Meeting topic: savanna)" | 18:04 | |
openstack | The meeting name has been set to 'savanna' | 18:04 |
SergeyLukjanov | Ok, let's start | 18:04 |
aignatov3 | Hi there. Let's start with the agenda | 18:04 |
aignatov3 | 1. Savanna release 0.1a1 is ready. | 18:05 |
aignatov3 | 2. Several documents were updated | 18:05 |
ogelbukh | btw, think about a page with the agenda on the openstack wiki | 18:06 |
aignatov3 | 3. Improvements in code: config provisioning, updated validation, several bug fixes | 18:06 |
aignatov3 | 4. We published several blueprints | 18:06 |
aignatov3 | on the Launchpad | 18:06 |
SergeyLukjanov | ogelbukh, I think it's good to have such a page, but it's not bad to copy the agenda to chat ;) | 18:06 |
ogelbukh | sure ) | 18:07 |
aignatov3 | 5. Plan for the nearest future | 18:07 |
SergeyLukjanov | #info Today we have released our first alpha version of Savanna - 0.1a1 | 18:08 |
aignatov3 | And you can download it from http://tarballs.openstack.org/savanna/ | 18:09 |
SergeyLukjanov | #link http://tarballs.openstack.org/savanna/savanna-0.1a1.tar.gz | 18:09 |
aignatov3 | Its installation is now very easy because | 18:10 |
aignatov3 | it uses only "pip install" from the tarball | 18:10 |
SergeyLukjanov | #info We updated the following docs - Quickstart, HowToParticipate | 18:12 |
ogelbukh | do you plan to use setuptools-git? | 18:12 |
ogelbukh | (or may be you're already using it) | 18:12 |
*** vipul is now known as vipul|away | 18:13 | |
SergeyLukjanov | ogelbukh, it looks like an obsolete dependency | 18:13 |
SergeyLukjanov | ogelbukh, we will check it | 18:13 |
SergeyLukjanov | #info Improvements in code: config provisioning, updated validation, several bug fixes | 18:14 |
aignatov3 | As usual, the code has been improved: Savanna now has user config provisioning | 18:14 |
aignatov3 | so, the user is able to define the needed Hadoop config parameters for task tracker, data node, and name node processes | 18:15 |
aignatov3 | job tracker as well | 18:16 |
aignatov3 | also we added some improvements in validation logic | 18:16 |
aignatov3 | Savanna is able to define Nova's available resources | 18:17 |
aignatov3 | sorry, I meant check resources | 18:17 |
aignatov3 | not define | 18:17 |
ogelbukh | which resources you mean? | 18:18 |
aignatov3 | so, Savanna will save you from creating new clusters with insufficient resources | 18:18 |
ogelbukh | number of cores, free ram or? | 18:18 |
aignatov3 | ram, vcpus, instances | 18:19 |
*** yidclare has quit IRC | 18:19 | |
aignatov3 | we will add resource checking for disks in the future | 18:19 |
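A minimal sketch of the kind of pre-flight check described above: compare the requested cluster footprint against what the tenant still has available in Nova. All names here (`validate_cluster_resources`, the dict layout) are hypothetical illustrations, not Savanna's actual code.

```python
# Hypothetical illustration of the resource validation discussed above;
# not Savanna's actual implementation.

def validate_cluster_resources(requested, available):
    """Raise if the requested cluster would exceed the tenant's free resources.

    `requested` and `available` are dicts with 'ram' (MB), 'vcpus' and
    'instances' keys, e.g. derived from the flavors of the requested nodes
    and from Nova's limits/quotas.
    """
    for resource in ('ram', 'vcpus', 'instances'):
        if requested[resource] > available[resource]:
            raise ValueError(
                "Not enough %s: requested %s, available %s"
                % (resource, requested[resource], available[resource]))


# Example: a 4-node cluster of 4 GB / 2 vCPU instances against remaining quota.
validate_cluster_resources(
    requested={'ram': 4 * 4096, 'vcpus': 4 * 2, 'instances': 4},
    available={'ram': 32768, 'vcpus': 16, 'instances': 10},
)
```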
aignatov3 | we have fixed several bugs as well | 18:20 |
aignatov3 | you can find them on Savanna's main launchpad page | 18:20 |
SergeyLukjanov | #info We published several blueprints for the future tasks | 18:20 |
SergeyLukjanov | #link https://blueprints.launchpad.net/savanna | 18:20 |
SergeyLukjanov | there are several important things that should be implemented asap | 18:21 |
*** yidclare has joined #openstack-meeting-alt | 18:21 | |
SergeyLukjanov | first of all, it's python-savannaclient :) | 18:21 |
SergeyLukjanov | the next one is to improve cluster security by using separate keypairs | 18:22 |
SergeyLukjanov | for different Hadoop clusters | 18:22 |
SergeyLukjanov | additionally, we want to start to support i18n from the first version of Savanna | 18:24 |
SergeyLukjanov | #info Our plans for the nearest future | 18:26 |
SergeyLukjanov | #info we are going to finish instructions on how to create custom images | 18:26 |
SergeyLukjanov | for different Hadoop versions, OS versions, etc. | 18:27 |
*** ruhe has joined #openstack-meeting-alt | 18:28 | |
*** SlickNik has left #openstack-meeting-alt | 18:28 | |
*** SlickNik has joined #openstack-meeting-alt | 18:28 | |
aignatov3 | also we are working on creating Hadoop images on CentOS distros to interoperate with Savanna | 18:28 |
SergeyLukjanov | #info we are planning to publish Savanna packages for Ubuntu and Centos | 18:29 |
SergeyLukjanov | #info custom Horizon will be published in a few days | 18:30 |
*** rnirmal has quit IRC | 18:32 | |
SergeyLukjanov | and the final item is | 18:32 |
SergeyLukjanov | #info we are doing the final preparations to make devstack working with Savanna | 18:32 |
SergeyLukjanov | I think that's all from our side | 18:33 |
aignatov3 | guys, if you have questions please ask us | 18:34 |
ogelbukh | which version of Ubuntu are you targeting? | 18:35 |
*** ruhe has left #openstack-meeting-alt | 18:36 | |
dmitryme | We are using the Ubuntu 12.10 cloud image for the Hadoop image | 18:36 |
dmitryme | Our Savanna is deployed with OpenStack on Ubuntu 12.04 | 18:37 |
SergeyLukjanov | we will build packages for Ubuntu 12.04 and Centos 6.3 | 18:38 |
dmitryme | basically I don't see a reason for our code not to work on other versions | 18:38 |
SergeyLukjanov | additionally, I think that we will build packages for Ubuntu 12.10 too | 18:38 |
SergeyLukjanov | but with the lower priority | 18:38 |
dmitryme | Oh, and by the way we are going to discuss Savanna at the summit! | 18:42 |
dmitryme | It will take place in the Unconference room | 18:43 |
dmitryme | On Monday at 11:50 | 18:43 |
*** amyt has quit IRC | 18:43 | |
*** amyt has joined #openstack-meeting-alt | 18:43 | |
SergeyLukjanov | Folks, do you have more questions? | 18:44 |
SergeyLukjanov | If not, I think it's about time to end our meeting | 18:45 |
*** markwash has joined #openstack-meeting-alt | 18:45 | |
aignatov3 | As always, you can mail us on the savanna-all@lists.launchpad.net mailing list and find us in the #savanna irc channel | 18:45 |
SergeyLukjanov | #info JFYI you can always use the savanna-all@lists.launchpad.net mailing list and the #savanna irc channel to find us and ask your questions | 18:45 |
SergeyLukjanov | #endmeeting | 18:45 |
*** openstack changes topic to "OpenStack meetings (alternate) || Development in #openstack-dev || Help in #openstack" | 18:45 | |
openstack | Meeting ended Thu Apr 4 18:45:53 2013 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) | 18:45 |
openstack | Minutes: http://eavesdrop.openstack.org/meetings/savanna/2013/savanna.2013-04-04-18.04.html | 18:45 |
openstack | Minutes (text): http://eavesdrop.openstack.org/meetings/savanna/2013/savanna.2013-04-04-18.04.txt | 18:45 |
openstack | Log: http://eavesdrop.openstack.org/meetings/savanna/2013/savanna.2013-04-04-18.04.log.html | 18:45 |
*** edsrzf has joined #openstack-meeting-alt | 18:52 | |
*** yidclare has quit IRC | 18:52 | |
*** russell_h has joined #openstack-meeting-alt | 18:55 | |
*** brianr-g1ne has joined #openstack-meeting-alt | 18:57 | |
*** yidclare has joined #openstack-meeting-alt | 18:58 | |
*** jcru has quit IRC | 18:58 | |
*** djohnstone1 has joined #openstack-meeting-alt | 18:59 | |
*** vipul|away is now known as vipul | 18:59 | |
*** djohnstone has quit IRC | 18:59 | |
*** kgriffs has joined #openstack-meeting-alt | 19:00 | |
kgriffs | https://wiki.openstack.org/wiki/Meetings/Marconi | 19:01 |
*** jcru has joined #openstack-meeting-alt | 19:02 | |
kgriffs | so, before we get started, just wanted to shout out to everyone who contributed to Grizzly. | 19:02 |
*** cp16net|away is now known as cp16net | 19:03 | |
flaper87 | o/ | 19:03 |
kgriffs | yo | 19:03 |
flaper87 | just in time | 19:03 |
*** malini has joined #openstack-meeting-alt | 19:03 | |
*** bryansd has joined #openstack-meeting-alt | 19:03 | |
kgriffs | Let's give it another minute before we start | 19:04 |
kgriffs | https://wiki.openstack.org/wiki/Meetings/Marconi | 19:04 |
*** dhellmann has quit IRC | 19:04 | |
kgriffs | #startmeeting marconi | 19:05 |
openstack | Meeting started Thu Apr 4 19:05:14 2013 UTC. The chair is kgriffs. Information about MeetBot at http://wiki.debian.org/MeetBot. | 19:05 |
openstack | Useful Commands: #action #agreed #help #info #idea #link #topic #startvote. | 19:05 |
*** openstack changes topic to " (Meeting topic: marconi)" | 19:05 | |
openstack | The meeting name has been set to 'marconi' | 19:05 |
*** dhellmann has joined #openstack-meeting-alt | 19:05 | |
kgriffs | #topic State of the project | 19:05 |
*** openstack changes topic to "State of the project (Meeting topic: marconi)" | 19:05 | |
kgriffs | So, we are close to having our demo ready for Portland. At that point, Marconi will be feature-complete, but will still have a lot of error handling and optimizations to complete. | 19:06 |
kgriffs | Sometime in the next few days we will also have a public sandbox ready that everyone can play with. | 19:07 |
flaper87 | w000000t | 19:07 |
kgriffs | :D | 19:07 |
kgriffs | flaper87 has been making excellent progress on the mongodb storage driver, and we also have a reference driver based on sqlite. | 19:08 |
*** jdprax has joined #openstack-meeting-alt | 19:08 | |
kgriffs | jdprax has been coding away on the client lib | 19:09 |
kgriffs | jdprax: can you comment? | 19:10 |
jdprax | We're still coding on the client library, but our gerrit config was rejected because essentially they want us to set up pypi now or never for it. | 19:10 |
jdprax | ... | 19:10 |
jdprax | So I'm leaning toward "never", and we'll just push it ourselves. | 19:10 |
flaper87 | jdprax: so, basically we have to release a first version before getting it into gerrit ? | 19:10 |
jdprax | That's my understanding. | 19:10 |
jdprax | :-/ | 19:11 |
jdprax | But hey, not a big deal. | 19:11 |
jdprax | Honestly we've just been swamped so I haven't followed up as closely on it as I should have. | 19:11 |
flaper87 | jdprax: what about pushing some dummy code there as a placeholder? | 19:11 |
flaper87 | I mean, on pypi | 19:11 |
jdprax | Ah, pushing dummy code to pypi? | 19:12 |
flaper87 | jdprax: yeah, some package with version 0.0.0.0.0.0.0.0.0.0 | 19:13 |
flaper87 | .0.0 | 19:13 |
flaper87 | and .0 | 19:13 |
bryansd | .1 | 19:13 |
flaper87 | bryansd: .1/2 ? | 19:13 |
*** dieseldoug has joined #openstack-meeting-alt | 19:13 | |
flaper87 | jdprax: seriously, if that's the problem then I'd say, let's lock that slot on pypi with some dummy package and get the client on stackforge | 19:14 |
kgriffs | sounds like a plan | 19:14 |
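For what the "dummy package" idea amounts to in practice, a minimal placeholder `setup.py` might look like the sketch below. The package name and metadata are assumptions for illustration only, not the actual client packaging.

```python
# Hypothetical minimal setup.py used only to reserve the name on PyPI;
# not the real Marconi client packaging.
from setuptools import setup

setup(
    name='python-marconiclient',   # assumed name, for illustration only
    version='0.0.0',               # placeholder release, no usable code
    description='Placeholder for the Marconi client library',
    packages=[],
)
```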
flaper87 | fucking pypi, today is not working at all | 19:15 |
kgriffs | that should be a 7xx error | 19:15 |
*** oz_akan has joined #openstack-meeting-alt | 19:15 | |
jdprax | Hahaha | 19:15 |
kgriffs | ok, moving on… :p | 19:15 |
flaper87 | kgriffs: I'd say 666 | 19:16 |
jdprax | :-) | 19:16 |
*** aignatov3 has quit IRC | 19:16 | |
kgriffs | #topic Finish reviewing the draft API | 19:16 |
*** openstack changes topic to "Finish reviewing the draft API (Meeting topic: marconi)" | 19:16 | |
kgriffs | ok, so over the past couple weeks there've been a bunch of changes to the API, hopefully for the better. | 19:17 |
kgriffs | so, first, is there anything in general you guys want to discuss based on the latest draft? If not, I've got a few specific areas I'd like to focus on. | 19:17 |
kgriffs | https://wiki.openstack.org/wiki/Marconi/specs/api/v1 | 19:18 |
*** aignatov3 has joined #openstack-meeting-alt | 19:19 | |
wirehead_ | So, not to be too annoyingly bikesheddy, kgriffs….. (we talked in person — I'm Ken) I love that the user-agent isn't overloaded like before, but maybe X-Client-Token instead of Client-ID? | 19:19 |
kgriffs | hey Ken | 19:19 |
flaper87 | wirehead_: I'm afraid that can be a bit confusing for users since there may be other tokens (like keystone's) | 19:20 |
wirehead_ | K | 19:20 |
wirehead_ | Maybe true. Still, would be more HTTP-ish to keep the X- as a prefix. | 19:20 |
wirehead_ | I know I'm bikeshedding and I apologize for it. :) | 19:21 |
kgriffs | actually, I'm not sure I'd agree | 19:21 |
*** vipul is now known as vipul|away | 19:21 | |
*** cyli has joined #openstack-meeting-alt | 19:21 | |
*** fsargent has joined #openstack-meeting-alt | 19:21 | |
kgriffs | x-headers were never supposed to be used like that. | 19:21 |
* kgriffs is looking for the RFC | 19:21 | |
flaper87 | kgriffs <- always has an RFC for everything | 19:21 |
kgriffs | http://tools.ietf.org/html/rfc6648 | 19:22 |
jdprax | For the curious http://en.wikipedia.org/wiki/Bikeshedding | 19:22 |
kgriffs | I'm actually trying to figure out the process for registering new headers | 19:23 |
kgriffs | (seems like Client-ID is generic enough to be useful elsewhere) | 19:23 |
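As a concrete illustration of the header being debated, a client request against the draft API could carry the identifier roughly as sketched below; the URL, queue name, and token value are made up for the example, and the path is taken loosely from the v1 draft rather than verbatim.

```python
# Illustrative request showing the plain Client-ID header from the draft API;
# the endpoint and queue name are hypothetical.
import uuid
import requests

CLIENT_ID = str(uuid.uuid4())  # one stable ID per client instance

resp = requests.get(
    'http://marconi.example.com/v1/queues/demo/messages',
    headers={
        'Client-ID': CLIENT_ID,            # no X- prefix, per RFC 6648
        'X-Auth-Token': 'keystone-token',  # auth token stays separate, as noted above
    },
)
print(resp.status_code)
```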
*** woodbon has joined #openstack-meeting-alt | 19:24 | |
wirehead_ | well, if we want to rabbit hole, you could always silently implement it as a cookie. | 19:24 |
kgriffs | (or maybe mnot and his posse will have a better suggestion…TBD) | 19:24 |
kgriffs | oh boy | 19:24 |
russell_h | I'm curious about authorization | 19:24 |
wirehead_ | note that I didn't say "we should implement it as a cookie" | 19:24 |
kgriffs | I'm embarrassed to say the thought did cross my mind... | 19:24 |
kgriffs | russel-h: shoot | 19:25 |
kgriffs | russell_h | 19:25 |
russell_h | kgriffs: any plans to support queue-level permissions? | 19:25 |
russell_h | the spec is a little vague about this | 19:25 |
russell_h | but if you wanted to do this, you would presumably need to track them as a property of the queue | 19:25 |
kgriffs | we have thought about it, and I think it would be great to have, but that would best be implemented in auth middleware | 19:25 |
russell_h | right, but would you have to tell the middleware about the permissions of each queue, or where would that information actually go? | 19:26 |
kgriffs | it would be great if we could expand the Keystone wsgi middleware to support resource-level ACLS | 19:26 |
kgriffs | good question, we honestly haven't talked about it a lot | 19:26 |
wirehead_ | Well, also some sort of "append only user" | 19:27 |
russell_h | does swift have anything like this? | 19:27 |
kgriffs | makes sense | 19:27 |
flaper87 | we haven't talked that much about it but I guess that info will live in the queue | 19:28 |
russell_h | "You can implement access control for objects either for users or accounts using X-Container-Read: accountname and X-Container-Write: accountname:username, which allows any user from the accountname account to read but only allows the username user from the accountname account to write." | 19:28 |
russell_h | not a fan of that | 19:28 |
kgriffs | russell_h: what would you like to see instead? | 19:29 |
flaper87 | or we could also have a PermissionsController per storage to let it manage per-resource permissions | 19:29 |
flaper87 | actually, that sounds like a good idea to my brain | 19:29 |
kgriffs | flaper87: I'm thinking we could add an _acl field to queue metadata | 19:30 |
russell_h | kgriffs: I was hoping someone else had a clever idea | 19:30 |
russell_h | I can't think of anything that doesn't involve describing groups of users | 19:30 |
kgriffs | then, just call out to the controller or whatever as a Falcon hook/decorator | 19:31 |
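A very rough sketch of the idea floated here: an `_acl` field in the queue metadata, enforced by a hook that runs before the resource handler (kgriffs mentions a Falcon hook/decorator; the sketch below is a generic decorator and does not claim Falcon's actual hook API). The field layout, header name, and helper names are all hypothetical.

```python
# Hypothetical sketch of enforcing a per-queue ACL before a handler runs;
# the _acl layout and names are illustrative only, not Marconi code.
import functools


def check_queue_acl(get_queue_metadata):
    """Wrap a handler so it runs only if the caller appears in the queue ACL."""
    def decorator(handler):
        @functools.wraps(handler)
        def wrapper(req, resp, queue_name, **kwargs):
            acl = get_queue_metadata(queue_name).get('_acl', {})
            user = req.headers.get('X-User-Id')
            if acl and user not in acl.get('read', []):
                raise PermissionError('user %s may not read %s' % (user, queue_name))
            return handler(req, resp, queue_name, **kwargs)
        return wrapper
    return decorator
```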
wirehead_ | I have a repeat of my clever-but-bad-idea: Create anonymous webhooks | 19:31 |
flaper87 | kgriffs: we could but having security data mixed with other things worries me a bit, TBH | 19:31 |
wirehead_ | to push to a queue, hit a URL with a long token | 19:31 |
russell_h | at any rate, I don't think the permissions issue needs to block a v1 API | 19:31 |
wirehead_ | naw | 19:31 |
flaper87 | russell_h: agreed | 19:32 |
flaper87 | sounds like something for v2 and / or Ith release cycle | 19:32 |
kgriffs | russell_h, wirehead_: would you mind submitting a blueprint for that? | 19:32 |
kgriffs | https://blueprints.launchpad.net/marconi | 19:33 |
russell_h | sure, sounds fun | 19:33 |
*** SergeyLukjanov has quit IRC | 19:33 | |
flaper87 | russell_h: thanks!!!!!!!!!!!!!!!!!!!! | 19:33 |
*** SergeyLukjanov has joined #openstack-meeting-alt | 19:34 | |
kgriffs | #action russell_h and wirehead_ to kickstart an auth blueprint | 19:35 |
kgriffs | #topic API - Claiming messages | 19:36 |
*** openstack changes topic to "API - Claiming messages (Meeting topic: marconi)" | 19:36 | |
kgriffs | https://wiki.openstack.org/wiki/Marconi/specs/api/v1#Claim_Messages | 19:36 |
kgriffs | so, any questions/concerns about this section? We haven't had a chance to fully vet this with folks outside the core Marconi team. | 19:37 |
*** DandyPandy has left #openstack-meeting-alt | 19:37 | |
kgriffs | oops - just noticed that id needs to be removed from Query Claim response (just use value of Content-Location header) | 19:38 |
*** aignatov3 has quit IRC | 19:39 | |
* kgriffs fixes that real quick | 19:39 | |
russell_h | so something I'm curious about | 19:40 |
russell_h | hmm, how to phrase this | 19:40 |
*** malini has quit IRC | 19:41 | |
wirehead_ | scud missile time, russell_h. | 19:41 |
russell_h | basically, can a message be claimed twice? | 19:41 |
russell_h | that is poorly phrased | 19:41 |
flaper87 | russell_h: yes if the previous claim expired | 19:41 |
flaper87 | not at the same time | 19:41 |
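In other words, a claim is exclusive only for its TTL; once it lapses the message becomes claimable again. A toy model of that rule, purely illustrative and not Marconi's storage code:

```python
# Toy model of the claim semantics described above: a message can be
# re-claimed only after the previous claim's TTL has expired.
import time


def try_claim(message, claimer, ttl):
    now = time.time()
    current = message.get('claim')
    if current and current['expires'] > now:
        return False  # still claimed by someone else
    message['claim'] = {'owner': claimer, 'expires': now + ttl}
    return True


msg = {'body': 'hello'}
assert try_claim(msg, 'worker-1', ttl=60) is True
assert try_claim(msg, 'worker-2', ttl=60) is False  # claim not yet expired
```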
russell_h | what if I want a message to be processed exactly once by each of 2 types of services | 19:42 |
russell_h | for example if I have a queue, and I want it to be processed both by an archival service and streaming query interface | 19:43 |
russell_h | I'd basically like to be able to specify some sort of token associated with my claim | 19:43 |
russell_h | "make sure no one else with token <russells-query-interface> claims this message" | 19:44 |
flaper87 | russell_h: right now, that's not possible because we don't have routing for either queues or claims | 19:44 |
russell_h | so the eventual intention is that the message would be routed to 2 queues, and claimed there? | 19:44 |
wirehead_ | That seems conceptually simpler to me | 19:44 |
kgriffs | yeah, seems like you could have something that pulls off the firehose and duplicates to two other queues | 19:45 |
kgriffs | alternatively, if they must be done in sequence, worker 1 would post to the second queue the next job to be done by worker 2 | 19:45 |
wirehead_ | Or a submit-to-multiple-queues | 19:45 |
flaper87 | AFAIK, that's something AWS handles in the notification service | 19:45 |
*** oz_akan has quit IRC | 19:45 | |
* flaper87 never mentioned AWS | 19:45 | |
kgriffs | heh | 19:46 |
wirehead_ | Just call it "That Seattle Queue" | 19:46 |
flaper87 | so, that's something we'll add not because AWS does it but because it's useful | 19:46 |
russell_h | I don't like the submit-to-multiple-queues idea, I think the point of queueing is to separate the concerns of publishers and consumers | 19:46 |
wirehead_ | Or merely an internal tee | 19:47 |
flaper87 | russell_h: actually the concept behind queues is just the queue. It is the protocol itself that adds more functionality; amqp, for example, adds exchanges, queues, routing_keys and so on | 19:47 |
russell_h | right, I could probably get onboard with that | 19:47 |
flaper87 | I don't like the idea of posting to 2 queues either | 19:47 |
flaper87 | so, what you mentioned is really fair | 19:47 |
russell_h | yeah, I'd really like for this to be something that is up to the consumer | 19:47 |
russell_h | basically "who they are willing to share with" | 19:48 |
kgriffs | #agreed leave routing up to the consumer/app | 19:48 |
flaper87 | just want to add something more | 19:49 |
kgriffs | There's nothing saying we couldn't offer, as part of a public cloud, an add-on "workflow/routing-as-a-service" | 19:49 |
flaper87 | consider that we've added another level that other queuing systems may lack. We also have tenants, which add a higher grouping level for messages, queues and permissions | 19:49 |
kgriffs | but I like the idea of keeping Marconi lean and mean | 19:49 |
kgriffs | right, and another grouping is tags which we are considering adding at some point (limiting to a sane number to avoid killing query performance) | 19:50 |
flaper87 | a solution might be to create more tenants and just use queues as routing spaghettis | 19:50 |
kgriffs | so, the nice thing about Marconi is that queues are very light-weight, so it's no problem to create zillions of them | 19:51 |
kgriffs | …as opposed to That Seattle Notifications Service™ | 19:51 |
flaper87 | concept, consistency and simplicity. Those are some things Marconi would like to keep | 19:51 |
flaper87 | (Marconi told me that earlier today, during lunch) | 19:52 |
kgriffs | wow, he's still alive? That's one ooooold dude! | 19:52 |
flaper87 | kgriffs: was he dead? OMG, I wont sleep tonight | 19:52 |
flaper87 | gazillions > zillions | 19:53 |
* kgriffs Zombie Radio Genius Eats OpenStack Contributor's Brain While He Sleeps | 19:53 | |
flaper87 | and that message was sent through an unknown radio signal | 19:54 |
flaper87 | moving on | 19:54 |
kgriffs | so, you guys can always catch us in #openstack-marconi to discuss claims and routing and stuff further. | 19:54 |
russell_h | the problem with more tenants is that it doesn't map well to how people actually use tenants | 19:54 |
russell_h | that can be overcome | 19:54 |
*** nkonovalov has quit IRC | 19:54 | |
kgriffs | sure. | 19:54 |
kgriffs | let's keep the discussion going | 19:54 |
flaper87 | russell_h: agreed, that was just a crazy idea that might work for 2 or 3 types of deployments | 19:55 |
russell_h | flaper87: yeah, I have that idea about every third day for monitoring :) | 19:55 |
russell_h | flaper87: it really doesn't work well for monitoring, because people want the monitoring on their server on the same tenant as the server itself | 19:55 |
russell_h | and they don't do that for servers for some reason | 19:55 |
russell_h | (because their server already exists, and my suggestion that they rebuild it on a different tenant doesn't go over well) | 19:56 |
russell_h | anyway, yeah, joined the other channel | 19:56 |
russell_h | thanks guys | 19:56 |
russell_h | I like the look of this so far | 19:56 |
russell_h | my heart fluttered a little when I saw you using json home ;) | 19:56 |
russell_h | in a good way | 19:56 |
flaper87 | russell_h: thank you. Would love to talk more about that in the other channel | 19:56 |
kgriffs | yeah, we will have the home doc up soon. We want to use uri templates pervasively, but are waiting for the ecosystem around that to mature, so probably do that in v2 of the api | 19:57 |
kgriffs | ok | 19:57 |
kgriffs | we are just about out of time | 19:57 |
kgriffs | any last-minute items? | 19:57 |
kgriffs | oh, one quick thing | 19:57 |
kgriffs | Any objections to postponing the diagnostics (actions resource) to later this year after our first release? | 19:58 |
flaper87 | not from me! I think we have other things with higher priority | 19:58 |
*** jbresnah has joined #openstack-meeting-alt | 19:59 | |
kgriffs | #agreed postpone diagnostics | 19:59 |
kgriffs | I really think it will be a hugely helpful feature, but we've got bigger fish to fry first. :D | 19:59 |
flaper87 | I would say a bigger zombie | 20:00 |
kgriffs | ok guys, it's been cool. We'll have a sandbox up soon you can try out. Tell us what sux so we can fix it. | 20:00 |
flaper87 | :P | 20:00 |
*** bcwaldon has joined #openstack-meeting-alt | 20:00 | |
flaper87 | awesome! Way to go guys! russell_h wirehead_ thanks for joining | 20:00 |
kgriffs | FYI, looks like we may be getting celery/kombu support in the near future as well | 20:00 |
kgriffs | thanks guys! | 20:00 |
wirehead_ | thanks for having us, folks :) | 20:00 |
kgriffs | #endmeeting | 20:00 |
flaper87 | w0000t | 20:00 |
*** openstack changes topic to "OpenStack meetings (alternate) || Development in #openstack-dev || Help in #openstack" | 20:01 | |
openstack | Meeting ended Thu Apr 4 20:00:59 2013 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) | 20:01 |
openstack | Minutes: http://eavesdrop.openstack.org/meetings/marconi/2013/marconi.2013-04-04-19.05.html | 20:01 |
openstack | Minutes (text): http://eavesdrop.openstack.org/meetings/marconi/2013/marconi.2013-04-04-19.05.txt | 20:01 |
openstack | Log: http://eavesdrop.openstack.org/meetings/marconi/2013/marconi.2013-04-04-19.05.log.html | 20:01 |
markwash | glance meeting folks around? | 20:01 |
flaper87 | o/ | 20:01 |
bcwaldon | hello! | 20:01 |
*** wirehead_ has left #openstack-meeting-alt | 20:01 | |
bcwaldon | man, it's like nobody uses this project | 20:01 |
markwash | do we have jbresnah ? | 20:01 |
flaper87 | hahaha | 20:01 |
jbresnah | i am here | 20:01 |
markwash | cool | 20:01 |
markwash | #startmeeting glance | 20:01 |
openstack | Meeting started Thu Apr 4 20:01:59 2013 UTC. The chair is markwash. Information about MeetBot at http://wiki.debian.org/MeetBot. | 20:02 |
openstack | Useful Commands: #action #agreed #help #info #idea #link #topic #startvote. | 20:02 |
*** openstack changes topic to " (Meeting topic: glance)" | 20:02 | |
bcwaldon | any rackers around? | 20:02 |
openstack | The meeting name has been set to 'glance' | 20:02 |
markwash | unfortunately I scheduled this over top of a racksburg product meeting | 20:02 |
markwash | so nobody from rackspace can make it | 20:02 |
kgriffs | bcwaldon: here | 20:02 |
bcwaldon | ok - next week | 20:02 |
markwash | well, racksburg, that is :-) | 20:02 |
kgriffs | (for the moment - all hands mtg starting soon) | 20:02 |
bcwaldon | kgriffs: I should have been more specific (looking for iccha , brianr, and ameade) | 20:02 |
markwash | so this is our first meeting | 20:03 |
markwash | exciting I know | 20:03 |
bcwaldon | kgriffs: but please stick around :) | 20:03 |
kgriffs | no worries. I'm just eavesdropping :D | 20:03 |
markwash | #topic Glance Blueprints https://blueprints.launchpad.net/glance | 20:03 |
*** openstack changes topic to "Glance Blueprints https://blueprints.launchpad.net/glance (Meeting topic: glance)" | 20:03 | |
markwash | I've been working on cleaning up the blueprint list lately | 20:03 |
*** mtreinish has joined #openstack-meeting-alt | 20:03 | |
markwash | and I'd love to make this meeting the place where we keep up to speed with that | 20:04 |
jbresnah | That would be great IMO | 20:04 |
flaper87 | +1 | 20:04 |
markwash | I've got a list of items to discuss, I'll just start going 1 by 1 | 20:04 |
markwash | with blueprint public-glance | 20:04 |
markwash | https://blueprints.launchpad.net/glance/+spec/public-glance | 20:05 |
markwash | feels like we maybe already support that feature? | 20:05 |
bcwaldon | how so? | 20:05 |
bcwaldon | the use case came from canonical wanting to publish cloud images through a public glance service | 20:06 |
bcwaldon | a readonly service | 20:06 |
markwash | isn't there anonymous auth? | 20:06 |
bcwaldon | the catch is how do you use it through nova | 20:06 |
bcwaldon | and yes, there is anonymous access | 20:06 |
bcwaldon | I will admit we are most of the way there, but the last mile is the catch | 20:07 |
markwash | so, is the feature then that we need nova to support talking anonymously to an arbitrary glance server? | 20:07 |
bcwaldon | I think so | 20:07 |
bcwaldon | now that we have readonly access to glance | 20:07 |
bcwaldon | but this BP was written before that - really it feels like a nova feature | 20:07 |
markwash | so, I'd like for us to investigate to ensure that's the only step left | 20:07 |
bcwaldon | OR | 20:07 |
markwash | and then move this bp to nova | 20:08 |
*** kgriffs has quit IRC | 20:08 | |
bcwaldon | sure - the alternative would be to create a glance driver for glance that you can point to that public glance service with | 20:08 |
markwash | maybe we can touch base with smoser and see what he still wants | 20:08 |
bcwaldon | but maybe you could just use the http backend... | 20:08 |
markwash | anybody wanna take that action? | 20:08 |
bcwaldon | yes, thats probably good | 20:08 |
bcwaldon | we could just summon him here | 20:08 |
markwash | nah, lots to go through, there is riper fruit | 20:09 |
bcwaldon | ok | 20:09 |
bcwaldon | #action bcwaldon to ping smoser on public-glance bp | 20:09 |
markwash | #action markwash investigate remaining steps on bp public-glance and touch base with smoser | 20:09 |
markwash | darnit | 20:09 |
bcwaldon | ! | 20:09 |
markwash | team work | 20:09 |
bcwaldon | ok, lets move on | 20:09 |
markwash | glance-basic-quotas, transfer-rate-limiting | 20:09 |
markwash | https://blueprints.launchpad.net/glance/+spec/glance-basic-quotas https://blueprints.launchpad.net/glance/+spec/transfer-rate-limiting | 20:09 |
markwash | These interact closely with a proposed summit session: http://summit.openstack.org/cfp/details/225 | 20:10 |
flaper87 | Good point, quotas. I think we should get that going for havana. | 20:10 |
markwash | I'd like for iccha_ to put together a blueprint for that umbrella topic | 20:11 |
markwash | and, mark those bps as deps | 20:11 |
markwash | and then put everything in discussion for the summit | 20:11 |
*** kgriffs has joined #openstack-meeting-alt | 20:11 | |
* flaper87 wont be at the summit :( | 20:11 | |
markwash | ah, hmm | 20:12 |
markwash | flaper87: any notes for us now? | 20:12 |
jbresnah | I think that the transfer rate limiting might be a special case | 20:12 |
jbresnah | it needs much lower latency than connection throttling | 20:12 |
markwash | hmm, interesting, how do you mean? | 20:13 |
jbresnah | ie: it cannot call out to a separate service to determine the current rate | 20:13 |
flaper87 | markwash: yep. I've put some thoughts there and I think it would be good to get the quota code used for nova (same code for cinder) and make it generic enough to live in oslo and use that as a base for the implementation | 20:13 |
jbresnah | well... say a tenant has a limit of 1Gb/s total | 20:13 |
markwash | jbresnah: ah I see | 20:13 |
markwash | jbresnah: one thought I had is that most people run glance api workers in a cluster | 20:13 |
jbresnah | so the limit can be taken up front from a 3rd party service | 20:13 |
markwash | so we'd need a way to share usages across workers | 20:13 |
jbresnah | but enforcement has to be done locally | 20:14 |
jbresnah | if that makes sense | 20:14 |
jbresnah | that too | 20:14 |
jbresnah | tho that is harder | 20:14 |
jbresnah | so, in the case i was pointing out | 20:14 |
jbresnah | if they have 1 gb/s | 20:14 |
jbresnah | and they do 1 transfer and it is going at 900mb/s | 20:14 |
jbresnah | they have 100 left | 20:14 |
jbresnah | but if the first drops down to 500... | 20:14 |
jbresnah | then they have 500 | 20:14 |
*** jcru is now known as jcru|away | 20:14 | |
jbresnah | that is all nice, but how do you efficiently do it | 20:14 |
markwash | sounds complicated | 20:15 |
flaper87 | markwash: nova's implementation uses either a local cache or a remote cache (say memcached) | 20:15 |
jbresnah | you cannot reasonably make calls to a third party service every time you wish to send a buffer | 20:15 |
jbresnah | if you do, all transfers will be quite slow | 20:15 |
jbresnah | a solution to that problem would be complicated | 20:15 |
jbresnah | i propose that the limit be set at the beginning of the transfer | 20:15 |
markwash | my take is, it sounds complicated and non-glancy, but if it were solved it would be useful for lots of openstack projects | 20:15 |
jbresnah | so the bandwidth is 'checked out' from the quota service | 20:15 |
jbresnah | enforced locally | 20:16 |
*** cp16net is now known as cp16net|away | 20:16 | |
jbresnah | and then checked back in when done | 20:16 |
jbresnah | that approach is pretty simple i think | 20:16 |
jbresnah | and in the short term, it can just be a global conf setting for all users | 20:16 |
jbresnah | so an admin can say 'no user may transfer faster than 500mb/s', or some such | 20:16 |
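A rough sketch of the "check out the limit up front, enforce locally" idea jbresnah describes: the per-transfer cap is decided once (from a quota service or a conf setting), and the local send loop throttles itself against it. The function name and numbers below are illustrative only, not Glance code.

```python
# Illustrative local enforcement of a per-transfer bandwidth cap that was
# decided up front (e.g. checked out from a quota service or read from conf).
import time


def send_throttled(chunks, send, max_bytes_per_sec):
    """Send an iterable of byte chunks, never exceeding max_bytes_per_sec."""
    window_start = time.time()
    sent_in_window = 0
    for chunk in chunks:
        send(chunk)
        sent_in_window += len(chunk)
        elapsed = time.time() - window_start
        expected = sent_in_window / float(max_bytes_per_sec)
        if expected > elapsed:
            # ahead of the allowed rate: sleep long enough to fall back to it
            time.sleep(expected - elapsed)
```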
markwash | in any case, its something that folks have seen as relevant to making glance a top-level service | 20:17 |
markwash | and we only have 5 slots, so I'd like to frame transfer-rate-limiting as part of that discussion | 20:17 |
markwash | so does that framing sound acceptable? | 20:17 |
jbresnah | i am parsing the meaning of: making glance a top-level service | 20:18 |
markwash | :-) | 20:18 |
jbresnah | but i am ok with making it a subtopic for sure. | 20:18 |
markwash | Rackspace currently doesn't expose glance directly, only through nova | 20:18 |
jbresnah | ah | 20:19 |
markwash | there are some features they want in order to expose it directly | 20:19 |
* markwash stands in for rackspace since they aren't here atm | 20:19 | |
jbresnah | cool, that makes sense | 20:19 |
flaper87 | makes sense | 20:19 |
markwash | #agreed discuss quotas and rate-limiting as part of making glance a public service at the design summit | 20:19 |
markwash | We have a number of blueprints related to image caching | 20:20 |
markwash | glance-cache-path, glance-cache-service, refactoring-move-caching-out-of-middleware | 20:20 |
markwash | I have some questions about these | 20:20 |
*** MarkAtwood has quit IRC | 20:21 | |
*** dmitryme has quit IRC | 20:21 | |
markwash | 1) re glance-cache-service, do we need another service? or do we want to expose cache management just at another endpoint in the glance-api process | 20:21 |
markwash | or do we want to expose cache management in terms of locations, as with glance-cache-path? | 20:21 |
jbresnah | i vote for the latter | 20:21 |
markwash | flaper87: thoughts? | 20:22 |
flaper87 | I'm not sure. If it'll listen on another port, I would prefer to keep them isolated and as such more "controllable" by the user | 20:22 |
jbresnah | i made some comments on that somewhere but i cannot find them... | 20:22 |
flaper87 | what If I would like to stop the cache service? | 20:22 |
flaper87 | I bet most deployments have HA on top of N glance-api's | 20:23 |
bcwaldon | do you think those using the cache depend on the remote management aspect? | 20:23 |
flaper87 | so, stopping 1 at a time won't be an issue, but it doesn't feel right to run 2 services under the same process / name | 20:23 |
markwash | I could see it going either way. . I like the management flexibility of having a separate process, but I think it could cause problems to add more moving parts and more network latency | 20:24 |
markwash | bcwaldon: I'm not sure actually | 20:24 |
jbresnah | how is the multiple locations feature supposed to work? | 20:24 |
jbresnah | to me a cached image is just another location | 20:24 |
markwash | bcwaldon: is there a way to manage the cache without using the v1 api? i.e. does glance-cache-manage talk directly to the local cache? | 20:24 |
jbresnah | it would be nice if when cached it could be registered via that same mechanisms and any other location | 20:25 |
jbresnah | and when cleaned up, removed the same way | 20:25 |
jbresnah | the cache management aspect would then be outside of this scope | 20:25 |
flaper87 | would it? I mean, I don't see that much latency, TBH. I'm thinking about clients pointing directly to the cache holding the cached image | 20:25 |
flaper87 | or something like that | 20:25 |
bcwaldon | markwash: cache management always talks to the cache local to each glance-api node, and it is only accessible using the /v1 namespace | 20:25 |
markwash | bcwaldon: so even if the api is down, I can manage the cache, right? | 20:26 |
bcwaldon | what does that mean? | 20:26 |
markwash | like, I can ssh to the box and run shell commands to manage the cache. . . | 20:26 |
bcwaldon | no - it uses the public API | 20:26 |
markwash | gotcha, okay | 20:27 |
*** dmitryme has joined #openstack-meeting-alt | 20:27 | |
bcwaldon | ...someone check me on that | 20:27 |
flaper87 | it uses the public api | 20:27 |
flaper87 | AFAIK! | 20:27 |
markwash | So, its probably easy to move from a separate port on the glance-api process, to a separate process | 20:27 |
flaper87 | yeah, the registry client | 20:27 |
flaper87 | i guess, or something like that | 20:27 |
bcwaldon | its not even a separate port, markwash | 20:28 |
markwash | right, I'm proposing that it would be exposed on a separate port | 20:28 |
bcwaldon | ah - current vs proposed | 20:28 |
bcwaldon | om | 20:28 |
bcwaldon | ok | 20:28 |
markwash | its quite a change to, from the api's perspective, treat the cache as a nearby service, rather than a local file resource | 20:29 |
flaper87 | I think it would be cleaner to have a separate project! Easier to debug, easier to maintain and easier to distribute | 20:29 |
flaper87 | s/project/service | 20:29 |
flaper87 | sorry | 20:29 |
bcwaldon | you scared me | 20:29 |
flaper87 | hahahaha | 20:29 |
markwash | :-) | 20:29 |
bcwaldon | I agree | 20:29 |
jbresnah | I do not yet understand the need for a separate interface. | 20:29 |
jbresnah | why not register it as another location | 20:30 |
flaper87 | we already have a separate file glance-cache.conf | 20:30 |
jbresnah | and use the interfaces in place for multiple locations? | 20:30 |
flaper87 | jbresnah: It would be treated like that, (as I imagine it) | 20:30 |
markwash | jbresnah: I think you're right. . its just, to me, there is no place for cache management in image api v2 unless its a special case of locations | 20:30 |
jbresnah | flaper87: can you explain that a bit? | 20:30 |
flaper87 | yep | 20:31 |
flaper87 | so, I imagine that service like this: | 20:31 |
jbresnah | markwash: i would think that when a user went to download an image, the service could check all registered locations, if there is a local one, it could send that. if not it could cache that and then add that location to its list | 20:32 |
flaper87 | 1) it caches a specific image 2) When a request gets to glance-api it checks if that image is cached in one of the cache services. 3) if it is then it points the client to that server for downloading the image | 20:32 |
jbresnah | any outside admin calls could be done around existing API and access to that store (the filesystem) | 20:32 |
flaper87 | that's one scenario | 20:32 |
flaper87 | what I wanted to say is that it would be, somehow, another location for an image | 20:33 |
jbresnah | flaper87: of course. but in that case, why not give the client a list of locations and let it pick what it can handle? | 20:33 |
jbresnah | flaper87: swift:// file:/// http:// etc | 20:33 |
flaper87 | jbresnah: sure but the client will have to query glance-api anyway in order to obtain that info | 20:33 |
jbresnah | the special case aspect + another process makes me concerned that this is an unneeded complication | 20:33 |
jbresnah | but i can back down | 20:34 |
markwash | flaper87: I'm not sure in that scenario how we populate an image to the cache. . right now you can do it manually, but you can also use it as an MRU cache that autopopulates as we stream data through the server | 20:34 |
flaper87 | so, the client doesn't know if it's cached or not | 20:34 |
jbresnah | flaper87: ?. in that case i do not understand your original scenario | 20:34 |
flaper87 | mmh, sorry. So, The cache service would serve cached images, right ? | 20:35 |
markwash | we might need to postpone this discussion for a while | 20:35 |
jbresnah | my last point is this: to me this part of glance is a registry replica service. it ideally should be able to handle transient/short term replica registrations without it being a special case | 20:35 |
jbresnah | and it seems that it is close to that | 20:35 |
*** vipul|away is now known as vipul | 20:36 | |
jbresnah | but i do not want to derail work at hand either | 20:36 |
markwash | I don't know that we have enough consensus at this point to really move forward on this front | 20:36 |
jbresnah | code in the hand is worth 2 in the ...bush? | 20:36 |
markwash | true, so I think if folks have patches they want to submit for directional review that would be great | 20:37 |
jbresnah | cool | 20:37 |
markwash | but I'm not comfortable enough to just say "lets +2 the first solution that passes pep8" either :-) | 20:37 |
flaper87 | hahaha | 20:37 |
* markwash is struggling for an action item out of this cache stuff | 20:38 | |
*** vipul is now known as vipul|away | 20:38 | |
*** vipul|away is now known as vipul | 20:38 | |
bcwaldon | markwash: let's get a better overview of the options - I think I can weigh more effectively if I see that | 20:38 |
jbresnah | i could document my thoughts better and submit them for review? | 20:39 |
bcwaldon | and we can have a more directed discussion | 20:39 |
flaper87 | I'd say we should discuss this a bit further. I mean, I'd agree either with a separate service or with everything embedded in glance-api somehow. What I don't like that much is for this service to listen on another port within the glance-api process | 20:39 |
*** SergeyLukjanov has quit IRC | 20:39 | |
jbresnah | informally submit i mean, like email them | 20:39 |
markwash | flaper87: okay, good to know | 20:39 |
markwash | jbresnah: sounds good | 20:39 |
flaper87 | jbresnah: we could create a pad and review both scenarios together | 20:39 |
flaper87 | and see which one makes more sense | 20:40 |
*** SergeyLukjanov has joined #openstack-meeting-alt | 20:40 | |
flaper87 | and then review that either in the summit or the next meeting | 20:40 |
markwash | #action jbresnah, flaper87 to offer more detailed proposals for future cache management | 20:40 |
markwash | one more big issue to deal with that I know of | 20:40 |
markwash | iscsi-backend-store, glance-cinder-driver, image-transfer-service | 20:40 |
bcwaldon | I really don't want to open that can of worms on IRC - this is a big discussion to have at the summit | 20:41 |
markwash | all of these feel oriented towards leveraging more efficient image transfers | 20:41 |
markwash | my proposal is more limited | 20:41 |
markwash | I would like to roll these together for the summit, inasmuch as they are all oriented towards bandwidth efficiency | 20:41 |
jbresnah | in that case, i'll keep my worms in the can until the summit ;-) | 20:41 |
*** dmitryme has quit IRC | 20:42 | |
lifeless | oooh bandwidth efficient transfers. | 20:42 |
* lifeless has toys in that department | 20:42 | |
flaper87 | jbresnah: I'll email mine so you can throw them all together | 20:42 |
markwash | I also think the goals of the image-transfer service border on some of the goals of exposing image locations directly | 20:43 |
jbresnah | markwash: i think they may expand into areas beyond BW efficiency, but i am good with putting them all into a topic limited to that | 20:43 |
jbresnah | markwash: i agree | 20:43 |
markwash | cool | 20:43 |
markwash | I'll double check with john griffith and zhi yan liu about what their goals are exactly | 20:44 |
markwash | because it is still possible the conversations are divergent | 20:44 |
markwash | does anybody have any other blueprints they would like to discuss? | 20:44 |
markwash | I have a few more items, but lower importance and less risk of worm-cans | 20:45 |
jbresnah | I have one, but I feel like i am dominating too much of this time already so it can wait | 20:46 |
bcwaldon | open 'em up | 20:46 |
markwash | jbresnah: go for it, not a ton to discuss besides blueprints. . we've been hitting summit sessions on the way I think | 20:46 |
*** sdague has joined #openstack-meeting-alt | 20:47 | |
jbresnah | direct-url-meta-data | 20:48 |
jbresnah | it actually has more to do with multiple-locations i think | 20:48 |
jbresnah | and cross cuts a little to the caching thing... | 20:48 |
markwash | yeah, I think so too | 20:48 |
bcwaldon | man, someone should finish that multiple-locations bp | 20:48 |
markwash | though probably we need to update the image locations spec before we could mark it as superseded | 20:48 |
bcwaldon | #action bcwaldon to expose multiple image locations in v2 API | 20:49 |
flaper87 | hahahaha | 20:49 |
jbresnah | basically the thought is that if you are exposing information from a driver, you may also need to expose more information than a url for it to be useful | 20:49 |
jbresnah | (i <3 the multiple image locations BP) | 20:49 |
markwash | agreed | 20:49 |
jbresnah | for example, a file url | 20:49 |
markwash | I think locations should become json objects, rather than strings | 20:49 |
jbresnah | that is basically useless | 20:49 |
jbresnah | yeah | 20:49 |
bcwaldon | yep - that's the plan | 20:50 |
jbresnah | with a definition defined by the url scheme | 20:50 |
jbresnah | ok excellent | 20:50 |
flaper87 | +1 | 20:50 |
bcwaldon | we'll establish the API then we can figure out how to internally add that metadata and bubble it out | 20:50 |
jbresnah | then i suppose that blueprint can go away, or be part of the multiplelocations bp | 20:50 |
bcwaldon | no, it should stay - just make it dependent on multiple-locations | 20:50 |
jbresnah | cool | 20:51 |
markwash | hmm, I'd rather mark it as superseded and add some more detail to the multi-locations bp | 20:51 |
markwash | just to reduce the number of bps | 20:51 |
flaper87 | markwash: agreed | 20:51 |
markwash | bcwaldon: would you be okay with me doing that? | 20:51 |
bcwaldon | markwash: that's a silly thing to strive for | 20:51 |
* markwash strives for many silly things | 20:51 | |
bcwaldon | let's make one blueprint called 'features' | 20:51 |
jbresnah | heh | 20:51 |
flaper87 | hahaha | 20:52 |
markwash | :-) | 20:52 |
bcwaldon | we can chat about it - it's a small detail | 20:52 |
bcwaldon | it's a logically different thing to me | 20:52 |
markwash | yeah, in this case, its just that I sort of want object-like urls to appear in the api fully formed like athena from zeus' head | 20:52 |
bcwaldon | and I want to be able to call multiple locations done once we are exposing multiple locations | 20:52 |
flaper87 | bcwaldon: but isn't direct-url-meta-data covered by what will be exposed in m-image-l ? | 20:52 |
markwash | and not have two api revisions for the whole ordeal, just one | 20:53 |
bcwaldon | from my point of view, multiple-image-locations has value completely disregarding backend metadata | 20:53 |
bcwaldon | so we're defining a very specific feature | 20:53 |
markwash | I see | 20:53 |
flaper87 | ok, sounds good | 20:53 |
markwash | I think that's true. . lets still make sure the multi-locations bp has the details we need to be forward compatible with useful metadata | 20:53 |
flaper87 | should we define an action to keep markwash hands off of those blueprints ? | 20:53 |
bcwaldon | only if he actions himself | 20:54 |
bcwaldon | and yes, markwash, we definitely should | 20:54 |
markwash | and then we can rally around some usecases to motivate the metadata aspect | 20:54 |
markwash | #action markwash keep messing with blueprints | 20:54 |
bcwaldon | yeah - I'm interested in the other usecases of that metadata | 20:54 |
bcwaldon | jbresnah: any easy examples in mind? | 20:54 |
jbresnah | the file url | 20:54 |
bcwaldon | well - one could argue that you shouldn't use that store | 20:55 |
jbresnah | so in nova there is a feature that will do a system cp if the direct_url to an image is a file url | 20:55 |
markwash | this could be useful for glance and nova compute workers that share a filesystem like gluster | 20:55 |
bcwaldon | ...oh | 20:55 |
bcwaldon | well I didnt think that through | 20:55 |
jbresnah | but this is sort of a useless feature | 20:55 |
bcwaldon | I get the distributed FS use case | 20:56 |
jbresnah | because you have to assume that nova-compute and glance mount the same fs in the same way | 20:56 |
bcwaldon | true | 20:56 |
jbresnah | so info like NFS exported host, or some generic namespace token would be good | 20:56 |
markwash | jbresnah: that does make me wonder if we need some new fs driver that is a "shared fs" driver | 20:56 |
jbresnah | so that the services could be preconfigured with some meaning | 20:56 |
jbresnah | markwash: maybe, i haven't really thought about how that would help yet | 20:56 |
markwash | or maybe it's just optional metadata that goes straight from local configuration of the fs store to the location metadata | 20:56 |
*** yidclare has quit IRC | 20:57 | |
flaper87 | markwash: I would say the later | 20:57 |
flaper87 | sounds like something up to the way the whole thing is configured | 20:57 |
markwash | jbresnah: were you thinking along the lines of the latter as well? | 20:57 |
flaper87 | and not the implementation itself | 20:57 |
jbresnah | nod | 20:57 |
markwash | okay cool, I like that | 20:57 |
markwash | so that's a great use case, we should find some more b/c I think they may be out there | 20:58 |
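To make the use case above concrete: if locations become objects rather than strings, a `file://` location could carry the kind of scheme-specific metadata discussed here (NFS export host, mountpoint) so a consumer such as nova-compute can decide whether the path is actually reachable locally. The field names below are hypothetical, not the agreed-on multiple-locations format.

```python
# Hypothetical shape of a structured image location; field names are
# illustrative only, not the finalized multiple-locations format.
location = {
    'url': 'file:///var/lib/glance/images/1234',
    'metadata': {
        # scheme-specific details a consumer could use to decide whether
        # the file URL refers to the same shared filesystem it has mounted:
        'share_location': 'nfs-server:/exports/glance',
        'mountpoint': '/var/lib/glance/images',
    },
}
```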
markwash | we're about out of time, any last minute items? | 20:58 |
flaper87 | yep | 20:58 |
bcwaldon | only to say thank you for organizing this, markwash | 20:58 |
flaper87 | what about doing bug squashing days from time to time ? | 20:58 |
markwash | flaper87: could be great | 20:58 |
flaper87 | markwash: indeed, thanks! It's really useful to have these meetings | 20:58 |
bcwaldon | my only reservation would be our low volume of bugs | 20:58 |
bcwaldon | relative to larger projects that have BS days | 20:59 |
markwash | Do we want to have another meeting next week before the summit? | 20:59 |
flaper87 | yep, that's why I was thinking of it as something we do once per month | 20:59 |
flaper87 | or something like that | 20:59 |
jbresnah | yeah this was great, thanks! | 20:59 |
bcwaldon | ok | 20:59 |
bcwaldon | markwash: yes please | 20:59 |
flaper87 | markwash: +1 | 20:59 |
markwash | #action markwash to look at 1/month bugsquash days | 20:59 |
*** jcru|away is now known as jcru | 20:59 | |
markwash | #action markwash to schedule an extra glance meeting before the summit | 21:00 |
markwash | thanks guys, we're out of time | 21:00 |
markwash | #endmeeting | 21:00 |
*** openstack changes topic to "OpenStack meetings (alternate) || Development in #openstack-dev || Help in #openstack" | 21:00 | |
openstack | Meeting ended Thu Apr 4 21:00:15 2013 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) | 21:00 |
openstack | Minutes: http://eavesdrop.openstack.org/meetings/glance/2013/glance.2013-04-04-20.01.html | 21:00 |
openstack | Minutes (text): http://eavesdrop.openstack.org/meetings/glance/2013/glance.2013-04-04-20.01.txt | 21:00 |
openstack | Log: http://eavesdrop.openstack.org/meetings/glance/2013/glance.2013-04-04-20.01.log.html | 21:00 |
flaper87 | \o/ bye guys! | 21:00 |
bcwaldon | seeya | 21:00 |
jbresnah | wave | 21:01 |
*** jbresnah has left #openstack-meeting-alt | 21:01 | |
*** bryansd has left #openstack-meeting-alt | 21:01 | |
*** flaper87 has left #openstack-meeting-alt | 21:01 | |
*** djohnstone has joined #openstack-meeting-alt | 21:01 | |
*** yidclare has joined #openstack-meeting-alt | 21:02 | |
*** djohnstone1 has quit IRC | 21:05 | |
*** amyt has quit IRC | 21:09 | |
*** amyt has joined #openstack-meeting-alt | 21:09 | |
*** cp16net|away is now known as cp16net | 21:10 | |
*** cloudchimp has quit IRC | 21:14 | |
*** MarkAtwood has joined #openstack-meeting-alt | 21:17 | |
*** rmohan has quit IRC | 21:23 | |
*** rmohan has joined #openstack-meeting-alt | 21:25 | |
*** yidclare has quit IRC | 21:29 | |
*** yidclare has joined #openstack-meeting-alt | 21:31 | |
*** mtreinish has quit IRC | 21:34 | |
*** dieseldoug has quit IRC | 21:41 | |
*** malini has joined #openstack-meeting-alt | 21:49 | |
*** amyt_ has joined #openstack-meeting-alt | 21:52 | |
*** amyt_ has quit IRC | 21:52 | |
*** amyt_ has joined #openstack-meeting-alt | 21:53 | |
*** amyt has quit IRC | 21:53 | |
*** amyt_ is now known as amyt | 21:53 | |
*** jdprax has quit IRC | 21:55 | |
*** malini has left #openstack-meeting-alt | 21:57 | |
*** djohnstone has quit IRC | 22:00 | |
*** sacharya has joined #openstack-meeting-alt | 22:01 | |
*** yidclare has quit IRC | 22:07 | |
*** yidclare has joined #openstack-meeting-alt | 22:09 | |
*** amyt_ has joined #openstack-meeting-alt | 22:24 | |
*** amyt has quit IRC | 22:24 | |
*** amyt_ is now known as amyt | 22:24 | |
*** ogelbukh has quit IRC | 22:25 | |
*** amyt has quit IRC | 22:32 | |
*** amyt has joined #openstack-meeting-alt | 22:32 | |
*** kgriffs has quit IRC | 22:40 | |
*** woodbon has quit IRC | 22:48 | |
*** sdake_ has quit IRC | 23:03 | |
*** sdake_ has joined #openstack-meeting-alt | 23:03 | |
*** jcru has quit IRC | 23:07 | |
*** amyt has quit IRC | 23:30 | |
*** markwash has quit IRC | 23:47 | |
*** rmohan has quit IRC | 23:50 | |
*** rmohan has joined #openstack-meeting-alt | 23:51 | |
*** MarkAtwood has quit IRC | 23:53 | |
*** HenryG_ has joined #openstack-meeting-alt | 23:53 | |
*** HenryG has quit IRC | 23:56 | |
*** rmohan has quit IRC | 23:57 | |
*** rmohan has joined #openstack-meeting-alt | 23:57 |