22:00:03 #startmeeting reddwarf
22:00:04 Meeting started Tue Feb 19 22:00:03 2013 UTC. The chair is hub_cap. Information about MeetBot at http://wiki.debian.org/MeetBot.
22:00:05 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
22:00:06 sup
22:00:07 The meeting name has been set to 'reddwarf'
22:00:34 #link https://wiki.openstack.org/wiki/Meetings/RedDwarfMeeting
22:00:40 work
22:00:44 #link http://eavesdrop.openstack.org/meetings/reddwarf/2013/reddwarf.2013-02-12-22.00.html
22:00:46 hello
22:00:56 ok time to chill for a sec to let people come in
22:01:01 woot
22:01:07 good afternoon
22:01:07 hai
22:02:05 * hub_cap creates some dummy BPs
22:02:20 before I forget I want to point out that the reason python-reddwarfclient dependencies pull in the old version is a pypi mirror somewhere has that version cached. you can recreate with `pip install -M -U python-reddwarfclient`
22:02:52 howdy
22:02:53 nice. thx clarkb!
22:03:06 clarkb: how can we get jenkins to do this?
22:03:12 oh, thank you. did someone already ask you about that today?
22:03:18 whisper sweet nothings into jenkins ear vipul
22:03:22 datsun180b: no not yet
22:03:35 one workaround is to pin the dependency on the 0.1.1 version in your pip-requires
22:03:47 then when the mirrors stop caching the old stuff you can remove that pin
22:03:53 That seems doable
22:04:01 that makes sense
22:04:18 Currently we submit Reddwarf changes that we know will fail anyway until the client gets merged in, so that wouldn't be much worse in the short term
22:04:35 okey lets get to the meeting topics
22:04:45 #topic Update to Action items
22:05:11 ive created 2 blueprints in each of our 3 repos just fyi. feel free to use them vipul & co
22:05:22 hub_cap: tnx
22:05:27 okay, thanks…
22:05:32 was one/two of those for percona?
22:05:43 kagan: they just say dummybpX
22:05:46 feel free to rename them
22:05:52 kagan: i see you're using your real name now
22:05:55 and that's how i find them? by name?
22:05:59 yep
22:06:02 I've updated the wiki clarifying that we're implementing sec-groups as extensions to reddwarf.
22:06:07 https://blueprints.launchpad.net/reddwarf-integration
22:06:12 wasn't that different to begin with. more a decoration than a nick ...
22:06:12 u can see them there
22:06:14 #link https://wiki.openstack.org/wiki/Reddwarf-security-groups
22:06:27 ok
22:06:44 Was just in the process of updating the blueprint as well.
22:06:50 nice SlickNik, /aside, i love the new wiki format
22:07:06 #agreed
22:07:16 Yeah, I was wondering what happened and noticed that the wiki leveled up. :)
22:07:24 it ate a mushroom
22:07:25 word!
22:07:44 hmmmm next one, hub_cap needs to not be so lame..... i didnt accomplish this one
22:08:03 ps ive updated the rate-limits BP but forgot to update quotas, ill do that during the meeting
22:08:28 SlickNik: i see youve added stevedore to the possibilities in that BP, have u looked @ it yet?
22:08:35 I started looking into stevedore to see how we can use them for extensions, still figuring things out there.
22:08:41 the filter scheduler in cinder is a good starting point
22:08:48 np. the rate limits one still needs to be blessed
22:08:50 all of the filters are extension points
22:08:54 So not complete, and will action myself to continue on that.
22:08:55 esp: ill bless it now
22:09:18 #action SlickNik still looking at stevedore for entry points to extensions/ https://github.com/dreamhost/stevedore
22:09:23 esp: done
22:09:28 thx. I do have a question for you and the group on this but we can circle back on it after the agenda.
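[A minimal sketch of the stevedore entry-point pattern being looked at here; the "reddwarf.api.extensions" namespace and the loading code are illustrative assumptions, not the actual BP design:]

```python
# Minimal sketch of stevedore entry-point loading, assuming a hypothetical
# "reddwarf.api.extensions" namespace that each plugin declares in its
# setup.py; this is not the actual reddwarf extension code.
from stevedore import extension

manager = extension.ExtensionManager(
    namespace='reddwarf.api.extensions',  # plugins register here in their egg
    invoke_on_load=True)                  # instantiate each extension class

for ext in manager:
    # each extension is discovered and loaded selectively from its entry
    # point, rather than an all-or-nothing extension directory
    print(ext.name, ext.obj)
```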
22:09:53 SlickNik: https://github.com/openstack/cinder/blob/master/cinder/openstack/common/scheduler/filter.py
22:09:58 thats where its imported
22:10:05 esp: okey
22:10:32 thx hub_cap
22:10:34 so as for the api markdown. im starting it tomorrow. its my task now that im done w/ cinder multi volume
22:10:52 hub_cap: is this supposed to give us more of selective extensions?
22:10:56 as opposed to all/nothing
22:10:59 vipul: ya
22:11:06 u can define them in the egg
22:11:16 if i havent said it enough... great job hub_cap
22:11:29 lulz cp16net
22:11:34 if anyone is interested (https://github.com/openstack/cinder/commit/6c708d12f58eb20fce6733f1f6fd08d978570775)
22:11:37 vipul: yeah, that's the idea.
22:11:40 #info http://stevedore.readthedocs.org/en/latest/
22:11:49 ooh multi volumes
22:11:50 shiny
22:12:03 nice!
22:12:08 yup vipul, and its done. so im working on the markdown for the api, as of tomorrow morning
22:12:23 i guess "as of tomorrow" doesnt make sense
22:12:34 what's the use case for multi-volumes in reddwarf?
22:12:47 or is this just a tangential thing
22:13:07 well..
22:13:22 u can have different backends for different service types, high iops for users who need it, etc..
22:13:37 somewhat tangential but its leverageable
22:14:03 SlickNik: hows that automated fandangled reddwarf client
22:14:04 k i should probably read the bp
22:14:04 I've acquired dkehn's vm-gate task, so I'm still looking at that and the python-reddwarfclient packaging tasks
22:14:24 sounds like an RPG
22:14:39 heh, well, still work in progress.
22:14:53 man, done w/ the action items!!
22:15:07 #topic Quotas / Limits Updates
22:15:12 lets start w/ quotas
22:15:17 #action SlickNik looking into vm-gate and release of python-reddwarfclient
22:16:13 Esmute ^^
22:16:29 ps i will be proposing some /limits calls in the markdown so yall can say yes/no to them once i push them for review
22:16:40 itll be a bit nicer than seeing them in the BPs
22:17:05 i have modified the rd client to update quota info for a tenant
22:17:07 wiki's a good place for now too
22:17:12 hub_cap: where is the markdown going to live? I know you mentioned it earlier
22:18:00 sure hub_cap.. ill review it with esp
22:19:00 in the client, the /quotas will also give you the absolute limits... but that call is admin only
22:19:18 #link https://github.com/stackforge/database-api
22:19:21 demorris: ^ ^
22:19:36 Esmute: ill be sure to add that to the api doc
22:20:32 so i have submitted a patch for the quota stuff.. will be adding integration tests to it and resubmitting sometime today
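[A hedged sketch of the quota calls Esmute describes above; the "quota" manager and its method names are guesses at the shape of the in-flight patch, not the merged python-reddwarfclient API:]

```python
# Hedged sketch of the admin-only quota calls described above; the "quota"
# manager and method names are assumptions about the in-flight patch,
# not the merged python-reddwarfclient API.
from reddwarfclient import Dbaas

client = Dbaas("admin", "password", tenant="admin-tenant",
               auth_url="http://localhost:5000/v2.0")
client.authenticate()

# /quotas also returns the absolute limits, but only for admin callers
print(client.quota.show("target-tenant-id"))

# raise the instance quota for a tenant
client.quota.update("target-tenant-id", {"instances": 10})
```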
22:20:33 hub_cap: Any update from Mike A. on his work for that repo?
22:20:42 hub_cap: what's the process for going from that repo to actual public facing docs
22:20:44 nope grapex :( but thats my fault for not asking
22:20:55 vipul: there is some magic stuff the doc team has built for that
22:20:58 not sure exactly
22:21:15 are we 'non-core' going to be able to leverage the same?
22:21:19 but id prefer starting at the repo instead of the wiki, mainly cuz anyone can propose, and only core can commit
22:21:27 vipul: u are core :)
22:21:41 i mean reddwarf is not core
22:22:10 Also, there will be some APIs that are WIP, as part of a BP or something, how do we manage what gets into that repo.. should it be stuff already available
22:22:41 maybe this should be part of the api specs topic
22:23:05 vipul: ya we can talk about that in a few, but the thought is to have the api spec'd out in the next wk or two w/ everything
22:23:13 ok.
22:23:17 but ya lets chat it up during that, Esmute are u fin?
22:23:28 fin?
22:23:49 finished.. ahh ok
22:23:55 http://en.wiktionary.org/wiki/fin#French
22:23:57 yes.
22:24:06 sweet, now rate limits esp
22:24:14 k
22:24:20 ps looking forward to the quotas stuff Esmute
22:24:28 well I just had one bit to clarify
22:24:58 the current implementation in progress makes use of the same code as nova's limit controller
22:25:08 #link http://api.openstack.org/api-ref.html
22:25:35 it was mentioned (and we stumbled upon) usedlimits last week
22:25:39 yup
22:26:03 #link https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/contrib/used_limits.py
22:26:09 is it cool to go with /limits for now? I'm not sure we have all the info from the absolute limits to do /userlimits
22:26:18 sorry usedlimits
22:26:23 esp: it _is_ the same
22:26:35 the used limits extension just puts extra things into /limits
22:26:48 https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/contrib/used_limits.py#L82
22:26:55 esp: do you mean we don't have enough from the quotas impl to get those values?
22:26:56 ok, for some reason I thought there was more info in usedlimits
22:27:07 vipul: yeah
22:27:11 there is esp
22:27:18 https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/contrib/used_limits.py#L56
22:27:36 which values are you referring to specifically?
22:27:49 esp: Esmute will provide that to you
22:27:54 but the extension just uses /limits
22:28:01 k, sounds like I need to talk with Esmute :)
22:28:01 i can provide quota limits and usages information
22:28:12 thx
22:28:18 cool
22:28:19 i can show you how and where to get them
22:28:24 hub_cap: we're not doing it as an ext. right?
22:28:27 just part of the core call
22:28:30 ya i think so
22:28:36 it makes more sense
22:28:42 yea, might as well
22:28:54 yeah it's right in there with the other core api calls at the moment
22:28:56 def. im assuming itll eventually meld together in nova
22:29:16 but they have more extensions than actual calls :P
22:29:35 esp: done?
22:29:38 esp: yea you should be good to go to implement that with Esmute's changes
22:29:40 yes sir
22:29:56 awesome. ill be pinging both esp and Esmute this wk for api feedback
22:30:00 #topic Percona Image Updates
22:30:05 kagan: tag yer it
22:30:05 hi
22:30:15 so, we have it working, pretty much
22:30:29 very nice
22:30:31 it still runs slower (coming up) compared to Oracle's mysql
22:30:51 and also, there are some loose ends i'd like to go over before submitting to review
22:31:03 i think i'll have something checked in for review today
22:31:12 sweet! got the resize fixed?
22:31:15 NICE
22:31:17 we also have some int tests failing, but i think the tests are bad
22:31:46 tests are bad?
22:32:02 well, for example, the resize test
22:32:22 it "waits" for status to change from "resize" to "done" (or running or something like that)
22:32:43 but it seems that as part of regular flow, between resizing and up it goes through status "down" for a while.
22:32:47 so nova does those things
22:32:54 "down"?
22:32:57 Do you mean SHUTDOWN?
22:32:59 shutdown
22:33:02 yep
22:33:03 It shouldn't do that
22:33:04 if for some reason the percona image does something different, i seriously doubt that its a bad test :)
22:33:09 If it does, the code has been changed.
22:33:17 i'm not sure mysql doesn't do that
22:33:25 it's just being done in 3 seconds or so
22:33:29 or it also could mean nova has changed
22:33:30 compared to 2 minutes with percona
22:33:36 i thought i remember someone saying they added a mysql stop
22:33:39 The Reddwarf task ID is set to "resize", so during the entire resize operation it reports the status as "RESIZE" despite what the resources are doing.
22:33:39 so i think the test just doesn't catch it
22:33:45 woah, 2 min for a restart... craaaazzyyyyy
22:33:57 you'd never guess why ....
22:34:06 yea, i think the startup differences may be exposing these states
22:34:06 it's 4 executions fof "see"
22:34:16 'sed'
22:34:17 of "see"
22:34:27 damn .. it "fixes" my typos ...
22:34:30 heh
22:34:31 kagan: so yer saying
22:34:34 * cp16net confused
22:34:35 i write sed every time ...
22:34:41 there is a race condition that we never saw w/ the oracle mysql
22:34:49 i think so. not sure.
22:34:50 since it boots up so quickly
22:34:54 it seems that this is regular flow
22:35:09 Is this for the happy path?
22:35:09 but since the guest sees percona mysql as down for 2 minutes it reports the status as such
22:35:10 so i think mysql goes through "shutdown" for several seconds
22:35:18 i guess ...
22:35:21 kagan: a resize does restart mysql
22:35:30 it has to for the new settings
22:35:35 i know
22:35:42 but yer seeing a condition where the guest says its not in resize, but in shutdown
22:35:44 but the code in the test is weird there
22:35:51 is this for create?
22:35:58 cp16net: resize
22:36:03 oh ok
22:36:12 yes, for a while. then it comes up again
22:36:15 so if you stop mysql it will report as shutdown
22:36:15 hub_cap: is it possible that the heartbeat thread sees it as down
22:36:19 correct
22:36:20 cp16net: the mysql stop was for the restart tests, not resize, IIRC.
22:36:23 thats what im getting at vipul
22:36:52 lets take this offline. it seems as if we have a potential race condition we need to account for in the guest
22:37:04 kagan: talk to grapex tomorrow about. it he can help
22:37:10 lol period misplacement
22:37:12 RECAP: resize restarts mysql.. since percona is slow to start up... heartbeat sets guest state as shutdown.. which invalidates the test
22:37:21 Probably the reference guest is behaving differently from Sneaky Pete.
22:37:22 yuup vipul
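[An illustrative sketch of the heartbeat behavior in the RECAP above; all names are made up for illustration, not the reference guest code:]

```python
# Illustrative sketch of the heartbeat behavior recapped above: the guest
# probes mysqld on an interval and reports whatever it sees, so percona's
# slow (~2 minute) restart surfaces as SHUTDOWN mid-resize. All names
# here are assumptions, not the actual reference guest code.
import time

def heartbeat_loop(mysql_is_up, report_status, interval=3):
    while True:
        # no awareness of an in-flight resize task: if mysqld is down for
        # several beats, the guest reports SHUTDOWN and the test fails
        report_status('RUNNING' if mysql_is_up() else 'SHUTDOWN')
        time.sleep(interval)
```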
22:37:40 lets offline it and kagan and grapex can interface later
22:37:46 ok
22:37:47 but good work kagan looking forward
22:38:00 thanks
22:38:01 That's what I'm thinking, in the neighborhood of dbaas.py:658
22:38:03 I think we should amend the test to look for either state
22:38:17 anything else wrt percona kagan?
22:38:29 vipul: or amend the guest to _not_ report shutdown during a resize
22:38:29 don't think so. any more questions?
22:38:41 nope im good kagan
22:38:46 hub_cap: yea let's discuss that in #reddwarf later
22:38:50 exactly vipul
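[A sketch of the guest-side option hub_cap floats above, masking the transient SHUTDOWN while a resize task is in flight; task/status names are illustrative, not the real model:]

```python
# Sketch of the guest-side fix floated above: keep reporting RESIZE while
# the instance's task is a resize, even if the heartbeat momentarily sees
# mysql as down. Task/status names are illustrative, not the real model.
def visible_status(task, heartbeat_status):
    if task == 'RESIZE' and heartbeat_status == 'SHUTDOWN':
        return 'RESIZE'  # mask the expected mysql restart inside a resize
    return heartbeat_status
```

[The alternative vipul suggests is purely test-side: poll until the status leaves the set {RESIZE, SHUTDOWN} instead of asserting a single expected state.]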
22:38:54 #topic Snapshots Blueprint Feedback
22:39:07 can someone link the bp?
22:39:13 #action kagan to socialize fixing resize test or heartbeat thread for the correct state
22:39:15 i know demorris and vipul have chatted on this
22:39:25 #link https://blueprints.launchpad.net/reddwarf/+spec/consistent-snapshots
22:39:27 vipul: thx for that action. good call
22:39:34 #link https://wiki.openstack.org/wiki/Snapshot-design
22:39:46 cool SlickNik beat me to the second link
22:40:00 hehe, i havent had a ton of time to read up on it yet
22:40:04 dkehn's done a good job outlining the fix..
22:40:13 #action hub_cap read https://blueprints.launchpad.net/reddwarf/+spec/consistent-snapshots
22:40:21 like the wiki write up for it.. very concise
22:40:30 demorris added the API this morning
22:40:32 very nice on the pluggable interface design tho
22:40:47 yep, love the pluggable design piece
22:40:47 vipul: im assuming u havent had a ton of time to look into it? or have ya
22:40:57 <3<3 pluggable
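[A minimal sketch of what the pluggable snapshot-storage interface praised here could look like; class and method names are guesses at the design, not the actual spec:]

```python
# Minimal sketch of a pluggable snapshot-storage interface like the one
# praised above; class/method names are guesses, not the actual design.
import abc

class SnapshotStore(abc.ABC):
    @abc.abstractmethod
    def save(self, snapshot_id, stream):
        """Persist a snapshot stream; return an opaque location token."""

    @abc.abstractmethod
    def delete(self, location):
        """Remove a stored snapshot."""

class SwiftSnapshotStore(SnapshotStore):
    """Swift backend; other drivers could store snapshots elsewhere."""

    def __init__(self, swift_conn, container='reddwarf_snapshots'):
        self.swift = swift_conn          # a python-swiftclient Connection
        self.container = container

    def save(self, snapshot_id, stream):
        self.swift.put_object(self.container, snapshot_id, stream)
        return '%s/%s' % (self.container, snapshot_id)

    def delete(self, location):
        container, name = location.split('/', 1)
        self.swift.delete_object(container, name)
```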
22:41:00 Yea, I'm still making my way through it, but so far so good.
22:41:10 i have some.. but not completely
22:41:28 vipul: you had some comments on the use of a locationRef for the location of the snaps in Swift
22:41:31 however it looks a lot like what we had in our fok
22:41:33 fork
22:41:40 super
22:41:53 demorris: yes, i don't know if location of the snapshot.. where it's stored should be exposed
22:42:09 so if you expose it.. it means you would likely have to expose the swift endpoint..
22:42:16 vipul: Agreed.
22:42:21 would you expect a snapshot to be portable?
22:42:32 and you may not be using swift as a location
22:42:42 BUT...
22:42:58 if we put the snapshot in the user's tenant's container.. then they'd see it anyway
22:43:14 and probably delete it on accident :P
22:43:16 demorris: not by the user.. that's up for discussion i guess
22:43:28 demorris mentioned that delete would have to be reconciled by reddwarf
22:43:31 hub_cap: Not if we name the container "PLZ_NO_DELETE"
22:43:41 lol@grapex
22:43:46 grapex: thats the plan!! hidden_dont_look_here
22:43:52 vipul: well i saw that in the DB design
22:44:02 too bad swift doesn't have hidden containers
22:44:08 and nova doesn't have hidden instances
22:44:09 there was a "deleted" column
22:44:25 demorris.. that would be for user api delete
22:44:35 that said 0 or 1, seemed to indicate it was there to store information on if the snap went missing in Swift
22:44:38 not behind the scenes delete.. so we could account for both if we need to
22:44:45 vipul: vishy has already said he would welcome "managed vms" in nova
22:44:53 vipul: i see
22:45:02 hub_cap: yea we need to file the BP on that one
22:45:22 So.. i'm open to either implementation.. location visible or not
22:45:23 yup ive mentioned it at like 3 summits :P but never worked on it
22:45:31 "managed vms"?
22:45:37 or 'hidden vms'
22:45:48 vms that a user cannot modify but are on their account
22:45:51 i dont like hidden
22:45:51 so we can create them on behalf of the customer and they don't see them on a 'nova list'
22:46:01 ah, I see.
22:46:04 they should be able to see them imho, but not touch them
22:46:11 if im payin for em, i wanna be able to see em
22:46:14 :D
22:46:15 hub_cap: yea i could live with that
22:46:17 I guess my thought was that a user "may" want to get at their snapshot directly in Swift, if they wanted to port it somewhere either within the DC or out somewhere else
22:46:40 demorris: but we may consider encrypting or something.. not sure if they'd be useful...
22:46:46 so providing a locationRef simplifies knowing where it is
22:46:49 demorris: but we do need to think about portability
22:47:06 ya cuz the system wont know about it w/o some sort of import_snapshot
22:47:26 to link up the moved snap from somewhere else in the DC as per your point demorris
22:47:30 yeah, I would think you want to have optional encryption, and potentially for a user to specify their keys (if you go that route)…then they have the keys so if its encrypted they can decrypt
22:47:42 can i download my snapshot and run it in my dc?
22:47:55 demorris: yea that would work well.. just another set of APIs though
22:47:57 cp16net: thats what demorris wants, and i think thats a good idea
22:48:08 id say lets go easy route for v1
22:48:11 hub_cap: so we need to scope this down
22:48:13 makes things very portable
22:48:14 I think that should be supported, we don't want to lock data in
22:48:31 i think maybe for v1 we take out portability?
22:48:37 ya i can live w/ that
22:48:48 snaps themselves will be a lot of work
22:48:49 well I would say we are not taking it out, but it would be making it harder
22:48:54 seems like an add-on
22:48:56 Make it optional but not required by the spec?
22:48:59 we can still leave location in... and they may be able to delete behind our back
22:49:10 hell vipul they can do that w/o a location :)
22:49:18 heh yea
22:49:26 yeah, if you use the customer's Swift account, the chance of accidental deletion is always there
22:49:27 so lets think about it this way
22:49:36 _adding_ to the api doesnt require version updates
22:49:40 _removing_ does
22:49:51 if we are unsure for now, lets leave it out and add it on as we flesh out the details
22:49:54 right.. hide location for now then
22:49:58 k
22:50:01 sounds good.
22:50:03 thats true for many things tho demorris
22:50:04 i do think its a good idea tho
22:50:14 but a bit too much implication-wise for now
22:50:32 another thought
22:50:37 would be really nice for off 'site' backup
22:50:42 what if the user that makes a call to Reddwarf doesn't have the Swift role
22:50:50 cp16net: def
22:51:00 vipul: snapshots fail w/ a useful error message :)
22:51:17 hub_cap: alright.. that would work
22:51:25 we should do a HEAD on the container we decide to use b4 we do a snap
22:51:31 from the api/taskmanager
22:51:36 how about http code 418?
22:51:39 itll validate the container exists
22:51:42 is that teapot cp16net?
22:51:45 "i'm a teapot"
22:51:48 cp16net: lol
22:51:50 :)
22:51:59 * hub_cap shakes head at cp16net
22:52:17 ok do we feel good wrt snapshots for now?
22:52:28 how do y'all expect restoring snapshots to work? My take is that any restore of a snapshot goes into a new instance…not an existing instance. To remove the chance of a customer blowing away portions of data on their instance…
22:52:33 #action vipul to update snapshots design to call out swift role required, and behind the back deletes
22:52:57 demorris: that's my thought as well, only supported on 'create'
22:52:58 demorris: that sounds reasonable, vipul and co, thoughts?
22:53:02 i.e. we support create instance from snapshot
22:53:09 but not import to existing instance
22:53:13 yep, no other form of restore
22:53:18 #agreed
22:53:38 demorris: i think i saw that in the api, correct?
22:53:40 I think that's reasonable for a v1.
22:53:46 hub_cap: correct
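[A hypothetical request body for the v1 restore path just agreed on (restore only via instance create, never into an existing instance); the "snapshotId" field name is a guess at the spec, not the final API:]

```python
# Hypothetical request body for the v1 restore path agreed above: restore
# only by creating a new instance from a snapshot. The "snapshotId" field
# name is an assumption, not the final API.
create_request = {
    "instance": {
        "name": "restored-db",
        "flavorRef": "https://service/v1.0/1234/flavors/1",
        "volume": {"size": 2},
        "snapshotId": "e7d5-snapshot-uuid",  # source snapshot to restore
    }
}
# POST /v1.0/{tenant_id}/instances with this body would build the new
# instance and load its data from the stored snapshot.
```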
22:53:54 any issues around the "statusDetails" in the API?
22:54:30 demorris: it makes some sense, since a DELETED could have happened in multiple ways
22:54:32 it was called notes in the DB design, I changed it…that can be used to display any details on the status…FAILED, DELETED, RUNNING, etc…
22:54:44 but.. i don't know if we _need_ to have it
22:54:58 and if we have it, does it belong in the API response
22:55:21 do we have precedent for it in other 'statuses'?
22:55:30 would be hard to have it in the response since its async
22:55:30 i guess service_statuses does
22:55:46 how will the customer be able to see that the instance they just created is tied to the snapshot or does that matter?
22:55:51 i think our precedent is that we have terrible responses if async things fail
22:55:57 maybe we shouldnt try to shoehorn that into snaps
22:56:09 cp16net: I don't know that it matters.. since it's only a point in time that they are linked
22:56:12 and make it independent to work for any async/status
22:56:14 i will ALWAYS be pro fields like this, because it allows us to actually tell the user what happened
22:56:25 ok just curious
22:56:26 instead of having them scratch their head and hit submit a few more times :)
22:56:36 demorris: sure but wouldnt u be pro "give me a way to do it for any async call"
22:56:47 I agree... I really hate how the resize status stuff works. There's no way to know if the job failed except if the flavor doesn't change.
22:56:47 it should be generic enough to use for any of the casts
22:57:04 id say lets put a note that we need to make that generic and file a separate BP
22:57:07 yes, but I don't want y'all to take on the overhead of building out an async model right now
22:57:17 demorris: id rather build that than hack it and have to change it
22:57:33 hub_cap: can you build it in a week?
22:57:36 hub_cap: so i'm hearing that we leave it out of the snapshots impl
22:57:41 :)
22:57:42 demorris: of course not
22:57:42 and come back around
22:57:44 demorris: Where did you want statusDetails to appear again?
22:58:02 vipul: i think so, its a great idea but i feel like itd be a hack on snapshots and wont go anywhere
22:58:08 grapex: check List snapshots in the spec
22:58:22 grapex: in the API responses, methinks.
22:58:26 hub_cap: Yes, i am good with that..
22:58:36 since it seems to encompass more than snapshots
22:58:57 correct vipul
22:59:18 so the main reason im against
22:59:18 I think having status details per snapshot is ok-
22:59:25 #action hub_cap, grapex to file blueprint on detailed status messages for async operations
22:59:27 is that if we have to alter it in any way we have to rev the api
22:59:42 because like I think has been mentioned, we could query swift and see if the underlying object was deleted as well as provide status on the backup process
22:59:50 respond with 202 accepted
22:59:53 #action vipul to remove statusDetails from snapshots API/impl
23:00:02 with a link to the status url
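[A sketch of the "202 Accepted plus a status link" idea demorris raises here; the route and field names are illustrative only:]

```python
# Sketch of the "202 Accepted plus a status link" idea above for async
# calls; the route and field names are illustrative only.
status_code = 202  # accepted: the snapshot job continues asynchronously
body = {
    "snapshot": {
        "id": "e7d5",
        "status": "BUILDING",
        "links": [
            # a client polls this to learn how the async job actually ended,
            # instead of inferring failure from an unchanged resource
            {"rel": "status", "href": "/v1.0/1234/snapshots/e7d5/status"},
        ],
    },
}
```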
23:00:18 So we already have task description in the Reddwarf API
23:00:59 the MGMT api returns it. But all we can really do is show things like if something went wrong in the build process
23:01:37 Like for resizes, we can't make use of it for various reasons - it can't represent a unique thread of execution in the scheme of things, just a resource
23:01:54 grapex: i agree its something we need, we should focus resources on it. its been a pain point for a VERY long time
23:02:10 so for the snapshots resource, when you provision it I can see we could add info there.
23:02:15 #AGREE
23:02:53 * hub_cap hears grapex feverishly typing
23:02:54 If we need to store status during operations like restore or something else I can't think of atm it won't work, because once the resource is prov'd status's duty is just to report the state of the resource and not the success of some job
23:03:33 I'm just saying, having a statusDetails is ok, because we do it when provisioning Reddwarf instances now.
23:03:41 It only works for the provisioning process.
23:03:55 * vipul wishes we had phone meetings
23:04:00 vipul: bah
23:04:10 demorris: agrees
23:04:12 hub_cap: Well they say that brevity is the location from which originates the quickness of the mind.
23:04:21 wishes everyone could type 120wpm
23:04:26 lol
23:04:32 i get lost in here too easily
23:04:38 ok i think this needs further discussion
23:04:41 i'm lost as well
23:04:43 #agreed
23:04:55 i'm going to try to set up a phone meeting
23:04:58 for this week
23:05:01 #action continue discussion on statusDetails
23:05:05 to go over snapshot details
23:05:08 okey
23:05:18 what I heard was we are going to remove statusDetails in favor of a more defined model for checking on details of async calls
23:05:29 I agree generally with hub_cap's point, I'm just not sure it applies to *this* feature. Anyway, lets table it for now.
23:06:07 demorris: correct, i think :)
23:06:10 yea this seems like a wider topic than snapshots.. so we could go either way... figure out a way to get this implemented for snapshots
23:06:14 yup
23:06:14 we can still have it in the database and then optionally add it to the API as we learn more…additive okay, contract changes bad
23:06:20 demorris: correct
23:06:33 id rather go w/ less for now and add stuff
23:07:00 ok... btw.. someone was going to post a state diagram for RD states
23:07:10 robermeyers ^?
23:07:18 robertmeyers
23:07:31 not I
23:07:33 #link http://s3-2.kiva.org/img/w800/196261.jpg
23:07:36 LOL!
23:07:38 lol
23:07:39 heh
23:07:40 That's what I was thinking
23:07:56 is that bubble gum?
23:08:03 vipul: I wasn't able to find one
23:08:10 cp16net: im sad for u
23:08:16 flying spaghetti monster
23:08:19 FSM
23:08:23 jcru: ah ok, there is a lot of shit going on in there...
23:08:46 hah
23:08:54 hub_cap: I had spaghetti for lunch!
23:08:59 esp: flying?
23:09:03 with eyes?
23:09:13 so we need to do that, i agree. ill start it if no one else is on it
23:09:15 no, but a lot of it did end up on my shirt.
23:09:20 haha
23:09:24 done with snaps?
23:09:29 hub_cap: I can work on it
23:09:31 demorris: aye
23:09:31 hub_cap: probably makes sense for someone from rax
23:09:35 jcru: cool
23:09:40 FSM = Flying Spaghetti Monster = Finite State Machine...
23:09:46 #action jcru to work on a FSM (finite state machine)
23:09:52 lol i thought the same thing SlickNik just now
23:09:54 lol
23:09:54 u beat me to it
23:10:11 moving on?
23:10:27 yea, i'll set up a follow up meeting (if we still think it's necessary)
23:10:32 ill keep the next section short since not much has happened
23:10:36 vipul: i say lets try to discuss in chat
23:10:43 and if it no workie lets talk about it
23:10:47 ill chat w/ the team here
23:10:57 were you talking about something like this? https://github.com/cp16net/reddwarf-integration/blob/6f672c3201e00f1b3241e25ce9c33b5881a24ec6/tests/graphics/reddwarf-overview.gv.jpeg
23:10:57 hub_cap: kk
23:11:14 cp16net: that's more of a component diagram
23:11:23 cp16net: more like resize->stopped
23:11:31 bad example of course :P
23:11:39 right.. what states should instance be in when we do resize.. or whatever
23:11:48 roger
23:12:02 but we could build w/ viz
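[A toy sketch of the instance state machine jcru signed up for; the states and transitions are illustrative guesses, not the agreed diagram:]

```python
# Toy sketch of the instance state machine jcru took on above; states and
# transitions are illustrative guesses, not the agreed diagram.
TRANSITIONS = {
    ('ACTIVE', 'resize'): 'RESIZE',
    ('RESIZE', 'completed'): 'ACTIVE',
    ('RESIZE', 'failed'): 'ACTIVE',    # flavor reverted on failure
    ('ACTIVE', 'stop'): 'SHUTDOWN',
    ('SHUTDOWN', 'start'): 'ACTIVE',
}

def next_state(current, event):
    """Return the next state, rejecting transitions not in the table."""
    try:
        return TRANSITIONS[(current, event)]
    except KeyError:
        raise ValueError('illegal transition: %s on %s' % (event, current))
```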
23:12:11 #topic API Spec General Update
23:12:39 so im working on this starting tomorrow. will set up all the docs we have + snaps, limits, and anything else that is already in flight or in a BP
23:12:52 ill push as a work in progress review to gerrit so that everyone can see
23:13:02 hub_cap: there is probably stuff that is not in BPs that needs to be in the spec
23:13:05 ok, i think i need some info on what goes there..
23:13:06 demorris: def
23:13:19 stuff that's part of the BP process goes into this repo?
23:13:28 or just the actual implemented v1.x api
23:13:55 https://github.com/openstack/identity-api/blob/master/openstack-identity-api/src/markdown/identity-api-v3.md
23:14:09 vipul: essentially we need to (as a collective group) define the api
23:14:17 and then each developer will implement pieces of it
23:14:43 one I think that is not there is maintenance windows
23:14:45 hub_cap: How do these .md files relate to the docbook files?
23:14:46 and of course we will find small things that need changing
23:14:54 grapex: im not sure exactly
23:15:20 hub_cap: i think similar question.. are we going to be visible in api.openstack.org
23:15:23 but thats the way the api guys are working on it _today_
23:15:24 if not there, then where
23:15:43 vipul: we can work that out i think
23:15:50 maybe stackforge can host an api.stackforge.org
23:15:53 hub_cap: Cool, I don't mind .md files.
23:16:10 to mirror what openstack is doing wrt the api docs
23:16:10 also do we need a wadl in addition to md?
23:16:11 404
23:16:17 or is there some auto generation happening
23:16:17 cp16net: lol likely
23:16:21 hub_cap: I will add a BP for this - As a Reddwarf User, I need the ability to manage MySQL version updates, so that I can minimize unplanned downtime and plan accordingly for my production application environments.
23:16:30 vipul: the auto gen will happen from the wadl
23:16:31 demorris: plz do
23:16:49 essentially we add an attribute for - "maintenanceWindow":"2012-03-28T21:30Z/2012-03-28T22:00Z"
23:16:52 so yes we will need to impl the wadl as a part of all this, but the markdown is the fastest way to get things working
23:17:03 and i think thats why the teams have done this
23:17:09 get it up, get it reviewed
23:17:20 wait im wrong
23:17:34 the wadls are autogen'd now via that fancy fandangled python api
23:17:39 it ties in to the versions-types BP i submitted
23:18:06 honestly its in flux, and these are good questions that need answering, and i dont have the answer
23:18:23 demorris: do you report oracle's vs percona in type?
23:18:33 but we need something that only a few people can commit to, in the repo, and markdown works very well w/ github
23:18:38 so im assuming thats why they went that route
23:18:54 vipul: i think thats part of the BP
23:18:57 hub_cap: ok, i think once you put in a framework things will fall into place
23:19:08 or at least shake out a bit more vipul ;)
23:19:19 yeah, check here - https://wiki.openstack.org/wiki/Reddwarf-versions-types
23:19:20 both good things
23:19:23 but its good to get something up to chat about / implement
23:19:36 def
23:19:44 there is a name / version attribute that lets you distinguish
23:20:14 hub_cap: Does the wadl live with the doc repo?
23:20:19 a name would be a DB type + Major Version (Percona / Maria / MySQL, etc.)
23:20:25 Or is it an artifact of some build process?
23:20:28 grapex: best of my knowledge the wadl is generated
23:20:36 but i cant answer definitively
23:20:40 demorris: Ok, making sense now
23:20:51 vipul: we can tweak if needed
23:21:01 i think Percona should be a separate type, agree?
23:21:26 Yeah, it would be like "Percona 5.5"
23:21:35 Ok, cool.
23:21:42 or whatever it is you are using, will be up to the provider to specify
23:21:54 then versions is used for minor versions
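[A sketch of the type/version shape just described, with demorris's maintenanceWindow attribute from earlier folded in; field names follow the Reddwarf-versions-types wiki discussion but are assumptions, not the final spec:]

```python
# Sketch of the type/version shape just described; field names follow the
# versions-types discussion (plus demorris's maintenanceWindow attribute
# from earlier) but are assumptions, not the final spec.
service_type = {
    "name": "Percona 5.5",   # DB type + major version, provider-defined
    "versions": ["5.5.29"],  # minor versions offered under this type
    # ISO 8601 interval for planned MySQL version updates
    "maintenanceWindow": "2012-03-28T21:30Z/2012-03-28T22:00Z",
}
```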
23:21:57 im assuming we have migrated topics
23:22:01 heh
23:22:02 yes
23:22:05 whoops
23:22:13 back on API spec
23:22:20 another BP coming will be my.cnf's
23:22:22 lots of interesting things to discuss.. can't contain ourselves
23:22:27 expect to see that in the next week
23:22:36 w00t demorris
23:22:53 and hub_cap will roll that into the API spec
23:22:58 will be lots to review and discuss
23:23:01 but all good things!
23:23:05 #topic open discussion
23:23:07 #forwardprogress
23:23:24 #agreed
23:23:31 demorris: http 501
23:23:37 still waiting on that vm-gate review to _get reviewed_
23:23:53 sweet tho, progress!!
23:24:17 hub_cap we want to gate both rd-int and rd repos with that correct?
23:24:19 alright, I need to run, thanks guys, i just might start showing up at these more often!
23:24:27 vipul: def
23:24:36 demorris: <3 for participating
23:24:48 hub_cap: :)
23:25:00 Yeah, I noticed that there are some whitespace issues like extra tabs/spaces that I'm going to clean up with that vm-gate review. I'll re-ping the OpenStack CI folks once I'm done with that.
23:25:09 Oh another thought.. us core reviewers have been slacking
23:25:25 #agreed
23:25:32 there's a ton of stuff.. let's try to get it pushed through
23:25:35 me especially
23:25:43 other teams have done review days
23:25:53 and other other teams have done 1 hr per day
23:26:08 we probably should try to get something like that in our schedule
23:26:09 Me too... I've been busy with some stuff here lately and haven't been paying close enough attention. Sorry if this impeded anyone.
23:26:44 other thing to note, if u have a work in progress, mark it as such via that fancy button
23:26:54 Same here, got busy with stuff and the emails kept piling. Will pay closer attn. to stuff in the pipeline.
23:26:56 https://review.openstack.org/#/c/21989/ <-- example
23:27:21 it will make it easier for us to not let WIPs slip in
23:27:34 esp ^
23:27:44 It'd be nice if watched changes kept the work in progress items at the top of the list.
23:28:00 it labels them tho at least
23:28:06 True.
23:28:08 like [work in progress]
23:28:15 there doesn't seem to be an ordering
23:28:15 esp: WIP your commit
23:28:20 I shouldn't complain, Gerrit's been a very enjoyable experience.
23:28:21 just what you looked at last goes to the top
23:28:22 there is vipul
23:28:26 its the order of what u looked at
23:28:27 exactly
23:28:37 yep
23:28:40 grapex: def its nice
23:28:59 https://review.openstack.org/#/q/is:watched+status:open,n,z
23:29:09 there is a good bit of changes to be pushed thru. lets get awn it
23:29:19 yep
23:29:21 only 30 mins over
23:29:23 :)
23:29:50 i think we can wrap it up?
23:30:32 Nothing else from my side.
23:30:36 I think that's a go.
23:30:59 okey lets wrap
23:31:02 #endmeeting