20:59:35 <hub_cap> #startmeeting reddwarf
20:59:36 <openstack> Meeting started Tue Apr  2 20:59:35 2013 UTC.  The chair is hub_cap. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:59:37 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:59:39 <openstack> The meeting name has been set to 'reddwarf'
20:59:46 <datsun180b> hello
20:59:52 <robertmyers> hello
20:59:55 <hub_cap> as usual, >>> time.sleep(120)
21:00:12 <djohnstone> hi
21:00:19 <vipul> hola
21:00:27 <SlickNik> hey there
21:00:49 <annashen> hi
21:01:03 <esp1> hello
21:01:04 <cp16net> present
21:01:22 <hub_cap> #link https://wiki.openstack.org/wiki/Meetings/RedDwarfMeeting
21:01:30 <hub_cap> #link http://eavesdrop.openstack.org/meetings/reddwarf/2013/reddwarf.2013-03-26-20.59.html
21:01:47 <juice> greetings
21:01:55 <imsplitbit> greets
21:02:07 <imsplitbit> we ready to get this party started?
21:02:14 <hub_cap> yup
21:02:25 <hub_cap> #topic Action items
21:02:32 * juice is doing a shot
21:02:36 <hub_cap> nice
21:02:41 <hub_cap> someone snag grapex
21:02:44 <vipul> where's the alcohol?
21:02:45 <SlickNik> Let's do it.
21:02:50 <datsun180b> he's trying to resolve his connection
21:02:51 <hub_cap> and smack him w/ a trout
21:03:01 <hub_cap> ok datsun180b, still smack him w a trout, hes on a mac
21:03:10 <hub_cap> so ill skip him for now
21:03:24 <hub_cap> my action item is next, and i actually added it to the agenda today
21:03:33 <hub_cap> so ill skip it till then (action / action items)
21:03:54 <hub_cap> vipul: yer next, patch for backUps to database-api
21:04:09 <vipul> yea, haven't gotten around to it
21:04:16 <vipul> promise to do it this week!
21:04:28 <vipul> #action Vipul to publish backup API to database-api
21:04:28 <hub_cap> cool, can u re-action it
21:04:42 <hub_cap> so backupRef vs backupUUID
21:04:58 <hub_cap> i believe we decided backupRef was fine right, but it could just be a uuid no biggie?
21:05:01 <hub_cap> oh nm
21:05:04 <hub_cap> i emailed jorge
21:05:06 <hub_cap> he never emailed me back
21:05:08 <hub_cap> he hates me
21:05:13 <SlickNik> #action SlickNik to finish publishing security groups API to database-api
21:05:13 <hub_cap> ill send him another email
21:05:19 <hub_cap> <3 SlickNik
21:05:35 <SlickNik> I remember I started on that one, but I still have a couple of changes to it.
21:05:40 <hub_cap> #action get ahold of jOrGeW to make sure about backupRef vs backupUUID
21:05:43 <vipul> SlickNik yea i think that was abandoned
21:05:57 <vipul> every openstack project seems to do this shit differently
21:06:02 <vipul> in terms of uuid vs ref
21:06:06 <SlickNik> I have the change, just haven't gotten around to making the couple of changes I wanted.
21:06:08 <hub_cap> i know vipul...
21:06:11 <hub_cap> there is no standard
21:06:14 <hub_cap> its terrible
21:06:23 <SlickNik> yeah, sucks.
21:06:32 <vipul> no wonder George Reese is always putting the smack down
21:06:34 <hub_cap> personally, i think of a ref being a remote system
21:06:39 <hub_cap> oh god nooooooo
21:06:45 <vipul> he's on my twitter feed
21:06:49 <vipul> big time complaints :)
21:06:54 <SlickNik> Who's George Reese?
21:07:04 <vipul> enstratus
21:07:04 <hub_cap> SlickNik: search your openstack mail
21:07:08 <SlickNik> And why are his peanut butter cups awful?
21:07:17 <hub_cap> grapex: !!!
21:07:24 <grapex> hub_cap: That was awesome
21:07:25 <SlickNik> hub_cap: will do…
21:07:33 <grapex> What's up?
21:07:35 <hub_cap> ok so back to ref vs uuid, in my brain
21:07:38 <hub_cap> a ref is remote
21:07:41 <hub_cap> and a uuid is local
21:08:03 <datsun180b> lacking context, it seems the 'uu' part of 'uuid'  disagrees with that
21:08:15 <robertmyers> nice
21:08:17 <vipul> heh... had to bring that up
21:08:32 <hub_cap> sorry my definition of local is not correct
21:08:39 <hub_cap> local being controlled by reddwarf
21:08:47 <hub_cap> as opposed to controlled by a 3rd party system
21:08:53 <hub_cap> yes uu still applies :)
21:09:02 <vipul> in that case we should go with UUID then
21:09:12 <hub_cap> well thats what im leaning toward
21:09:16 <hub_cap> but let me just touch back w/ jorge
21:09:25 <hub_cap> its fairly easy to change right?
21:09:30 <vipul> Yea pretty minor
21:09:31 <robertmyers> how abut backupID?
21:09:32 <vipul> it's ref now
21:09:38 <robertmyers> not UUID
21:09:40 <hub_cap> Id might be better than uuid
21:09:42 <vipul> yea that's probably better
21:09:47 <SlickNik> It's ref now, but easily changed...
21:09:59 <hub_cap> so BackUpId
21:10:00 <hub_cap> :P
21:10:00 <SlickNik> I like backupId
21:10:06 <grapex> Sorry gang, my IRC client had a visual glitch- are we talking about using ID or UUID for what's now known in the client as "backupRef?"
21:10:09 <vipul> k, we'll wait for Jorje?
21:10:11 <vipul> jorge
21:10:20 <hub_cap> ya but lets lean heavily toward backupId
21:10:21 <SlickNik> yes, grapex
21:10:26 <grapex> Ok
21:10:30 <vipul> steveleon: can we change that now then?
21:10:35 <vipul> esmute ^
21:10:38 <hub_cap> grapex: lets go back to yoru action item
21:10:50 <hub_cap> xml lint integration in reddwarf grapex
21:11:22 <grapex> Sorry, still nothing so far. Had a conference last week which threw me off track.
21:11:30 <hub_cap> fo sure, plz re-action it
21:12:04 <grapex> Ok
21:12:10 <hub_cap> ok so lets say we are done w action items
21:12:22 <esmute> yeah we can change it..
21:12:33 <esmute> i will have to change the rd-client that just got merged a few hours ago too
21:12:34 <hub_cap> #topic Status of CI/jenkins/int-tests
21:12:43 <hub_cap> esmute: okey
21:12:48 <vipul> esmute: thanks
21:13:08 <vipul> So not having a int-test gate has been killing us it seems
21:13:11 <hub_cap> vipul: its all yo mang
21:13:28 <vipul> SlickNik and I are working on a jenkins here at HP that will listen to gerrit triggers
21:13:33 <vipul> and give a +1 / -1 vote
21:13:41 <hub_cap> ok it can be nonvoting too
21:13:47 <vipul> got it sort of working, but the plugin we use to spin up a VM needs some love
21:13:49 <hub_cap> if thats hard or anything
21:13:53 <hub_cap> AHH
21:13:54 <datsun180b> should it be more like check/cross then?
21:13:58 <hub_cap> are u using the jclouds one?
21:14:06 <vipul> no, home grown
21:14:06 <hub_cap> datsun180b: j/y
21:14:09 <vipul> jruby thingie
21:14:11 <datsun180b> considering that builds that don't pass int-tests aren't worth shipping?
21:14:12 <hub_cap> oh fun vipul
21:14:16 <SlickNik> nope, it's something that one of the folks here came up with.
21:14:19 <hub_cap> datsun180b: correct
21:14:29 <vipul> Yea but i think we want voting
21:14:34 <hub_cap> SlickNik: do you guys adhere to the openstack api internally? if so the jclouds one is bomb
21:14:55 <vipul> hub_cap: We should try the jclouds one.. honestly haven't even tried it
21:15:00 <SlickNik> Yeah, it needs a couple of changes to be able to pass the gerrit id's from the trigger to the new instance it spins up.
21:15:01 <hub_cap> its great
21:15:10 <hub_cap> itll spawn a server, if it fails itll spawn a new one
21:15:13 <hub_cap> it sets up keys to ssh
21:15:21 <hub_cap> it does a lot of work for u
21:15:23 <SlickNik> hub_cap, do you have a link to the jclouds plugin you speak of?
21:15:24 <vipul> one other thing missing is checking to see if tests passed or not..
21:15:44 <hub_cap> https://wiki.jenkins-ci.org/display/JENKINS/JClouds+Plugin
21:15:46 <hub_cap> #link https://wiki.jenkins-ci.org/display/JENKINS/JClouds+Plugin
21:15:47 <vipul> currently can run them, but no check to see if it worked properly
21:15:53 <SlickNik> hub_cap: thanks!
21:16:02 <datsun180b> grep for OK (Skipped=)
21:16:05 <hub_cap> vipul: ahh, i think that the jclouds plugin will fix that
21:16:07 <datsun180b> at minimum
21:16:16 <vipul> yea, that's something we're trying to get added to our jruby plugin
21:16:23 <hub_cap> itll fail if the int-tests emit a error code
21:16:24 <vipul> hub_cap: Jclouds does that already?
21:16:34 <datsun180b> even better
21:16:37 <hub_cap> well u just tell jclouds to execute X on a remote system
21:16:40 <hub_cap> and if X fails, it fails the job
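A minimal sketch of the remote check being described: the spawned VM runs the int-tests and the job fails on a non-zero exit (the redstack entry point and the nose-style "OK (SKIP=n)" summary line are assumptions, not the actual gate script):

    # Illustrative gate step, run on the freshly spawned VM.
    import re
    import subprocess
    import sys

    proc = subprocess.run(["./redstack", "int-tests"], capture_output=True, text=True)
    summary_ok = re.search(r"^OK(\s*\(SKIP=\d+\))?$", proc.stdout, re.MULTILINE)
    # A non-zero exit here is what fails the Jenkins job.
    sys.exit(0 if proc.returncode == 0 and summary_ok else 1)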
21:17:18 <vipul> hub_cap: so jenkins plugin is building a jenkins slave? or arbitrary vm
21:17:28 <vipul> cuz i don't care for the jenkins slave.. just want a vm
21:17:56 <hub_cap> vipul: there is not much difference between them, but it can easily do arbitrary vm
21:18:14 <grapex> vipul: Does StackForge CI use the jclouds plugin and make it an official Jenkins slave or does it just create a VM without the jenkins agent?
21:18:16 <datsun180b> i like the idea of int-tests running on a machine that doesn't persist between builds and so doesn't rely on manual monkeying for tests to work
21:18:25 <vipul> they have a pool of servers grapex
21:18:30 <hub_cap> it _is_ a slave in terms of jenkins but thats convenient for making sure the node comes online etc
21:18:31 <vipul> not sure exactly how they allocate them
21:18:36 <esp1> datsun180b: mee too.
21:18:52 <SlickNik> They have home-grown scripts to allocate them…
21:18:55 <vipul> datsun180b: yep, fresh instance each time
21:19:14 <hub_cap> anyhoo, i say look into it
21:19:16 <grapex> SlickNik: home-grown instead of using the jenkins agent?
21:19:20 <hub_cap> it may or may not
21:19:23 <hub_cap> work for u
21:19:26 <grapex> I'm not pro or con Jenkins agent btw, just curious
21:19:35 <hub_cap> grapex: the ci team? ya its all homegrown :)
21:19:43 <vipul> Yea... so still a WIP.. i think we need to give this a bit more time..
21:19:50 <vipul> BUT we're getting close
21:19:56 <vipul> last week all tests passed
21:20:03 <hub_cap> hell yes
21:20:15 <hub_cap> even if its nonvoting and it triggers and we can just look @ it b4 approving
21:20:18 <hub_cap> thats a good step 1
21:20:24 <hub_cap> lets just get it runnin
21:20:33 <SlickNik> We get the voting part from the gerrit trigger.
21:20:36 <hub_cap> so we can stop committing code that fails
21:20:43 <vipul> yep, can't wait
21:20:44 <datsun180b> +40
21:20:57 <cp16net> +!
21:20:57 <hub_cap> im fine w/ it always voting +1 since it doesnt tell if it passes or fails yet
21:21:00 <SlickNik> And I've set up the accounts to be able to connect to gerrit.
21:21:05 <hub_cap> lets just get a link put up
21:21:26 <vipul> OH and need to do openID integration
21:21:27 <esp1> yeah, we probably need to run the int-tests locally before checking
21:21:32 <hub_cap> rather than taking it to the finish line fully working
21:21:41 <vipul> #action Vipul and SlickNik to update on status of VM Gate
21:21:43 <esp1> I meant checking in
21:21:45 <hub_cap> lets get a baton pass by getting it running for each iteration asap ;)
21:21:46 <SlickNik> agreed hub_cap
21:21:59 <SlickNik> We need this goodness! :)
21:22:02 <hub_cap> yup
21:22:17 <hub_cap> #action stop eating skittles jelly beans, they are making me sick
21:22:26 <hub_cap> ok we good on ci?
21:22:27 <SlickNik> thanks for actioning, Vipul
21:22:32 <vipul> i think so
21:22:35 <hub_cap> #Backups Discussion
21:22:38 <hub_cap> status first
21:23:00 <vipul> i think juice / robertmyers / SlickNik you're up
21:23:29 <robertmyers> backups are good, lets do it
21:23:34 <SlickNik> So we got a sweet merge from robertmyers with his streaming/mysqldump implementation...
21:23:56 <SlickNik> to our shared work in progress repo.
21:24:04 <hub_cap> robertmyers: lol
21:24:09 <robertmyers> we need a good way to run the restore
21:24:40 <hub_cap> are we trying to get the backup _and_ restore in one fell swoop?
21:24:46 <hub_cap> or are we going to break it up to 2 features?
21:24:49 <SlickNik> I am working on hooking up our innobackupex restore implementation to it. (Testing it out now, really)
21:24:55 <hub_cap> since we havent nailed the api too much for the restore etc
21:25:03 <vipul> i think we have hub_cap
21:25:06 <vipul> that's the backupRef piece
21:25:08 <robertmyers> well, we should at least have a plan
21:25:14 <SlickNik> I think esmute has the API/model pieces for both ready to go.
21:25:15 <hub_cap> i agree we need a plan
21:25:22 <hub_cap> oh well then SlickNik if its that easy
21:25:41 <SlickNik> I'll let esmute comment.
21:25:42 <SlickNik> :)
21:25:43 <vipul> hub_cap: so the backupRef vs backupId discussion is related to restore
21:25:53 <esmute> yup... just need to do some renaming from ref to id
21:25:54 <vipul> where we create a new instance, and provide the backup ID
21:25:58 <vipul> that's the API
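For context, the restore-on-create call being discussed would carry the backup reference in the request body; one possible shape (the restorePoint wrapper and field names are assumptions, pending the backupRef/backupId decision above):

    POST /v1.0/{tenant_id}/instances
    {
        "instance": {
            "name": "restored-db",
            "flavorRef": "2",
            "volume": {"size": 2},
            "restorePoint": {"backupRef": "<backup uuid>"}
        }
    }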
21:25:58 <robertmyers> there may be extra things we need like to reset the password
21:26:01 <SlickNik> But the plan was to check with him and push those pieces up for gerrit review.
21:26:02 <vipul> am i missing something?
21:26:27 <juice> robertmyers: could they do that after the restore?
21:26:30 <esp1> robertmyers: I was wondering about that.
21:26:56 <robertmyers> we could do it automatically after the restore
21:27:15 <vipul> which password? the root user? os_admin?
21:27:15 <esp1> should they get the original password by default?
21:27:17 <robertmyers> they may want to use all the same users/etc
21:27:29 <juice> robertmyers: how would the user get the new password
21:27:54 <robertmyers> I think the root mysql user password will need to be reset
21:27:56 <esp1> juice: it comes back in the POST response body for create
21:28:06 <vipul> so i thought we'd slide the restore piece into the current 'prepare' workflow which i believe does that after the data files are in place?
21:28:10 <hub_cap> robertmyers: yes it would
21:28:19 <esp1> or you can do a separate call as robertmyers said
21:28:34 <hub_cap> and it shouldnt be enabled by default since new instances dont come w/ root enabled by default
21:28:52 <esp1> got it
21:28:57 <hub_cap> <3
21:29:18 <hub_cap> and the osadmin user/pass might be goofy
21:29:46 <hub_cap> im assuming we are pullin in the user table
21:30:03 <hub_cap> so given that, we will have a user/pass defined for that, as well as a root pass
21:30:09 <robertmyers> that is the plan, a full db backup
21:30:12 <SlickNik> Yeah, we're pullin in the user table as part of restore.
21:30:25 <hub_cap> so we might have to start in safe mode, change the passes, and then restart in regular mode after writing osadmin to the config
21:31:03 <SlickNik> What's the behavior if a db with root enabled is backed up?
21:31:07 <SlickNik> (on restore)
21:31:23 <hub_cap> id say do _not_ enable root
21:31:26 <hub_cap> no matter what
21:31:34 <robertmyers> #agreed
21:31:37 <hub_cap> cuz that needs to be recorded
21:31:44 <hub_cap> and it becomes a grey area for support
21:31:58 <SlickNik> So the restored instance is the same except with root _not_ enabled…?
21:32:16 <hub_cap> correct
21:32:24 <hub_cap> since enable root says "im giving up support"
21:32:31 <vipul> and a different password for os_admin
21:33:02 <SlickNik> gotcha vipul.
21:33:34 <hub_cap> so great work on status team
21:33:38 <hub_cap> now division of labor
21:33:41 <hub_cap> whos doin what
21:33:45 <hub_cap> cuz i aint doin JACK
21:33:47 <SlickNik> Just one clarification.
21:33:50 <hub_cap> wrt backups
21:33:52 <hub_cap> yes SlickNik sup
21:34:06 <SlickNik> So it's fine for them to backup from an instance on which they have given up support and restore to an instance for which they will have support.
21:34:41 <hub_cap> hmm thats a good point of clarification
21:34:43 <vipul> that's tricky..
21:34:50 <vipul> cuz we don't know what they may have changed
21:34:56 <hub_cap> ok so given that
21:35:04 <hub_cap> lets re-record that root was enabled on it
21:35:12 <hub_cap> and keep the same root pass
21:35:20 <juice> sounds like the right thing to do
21:35:22 <hub_cap> really its the same database
21:35:30 <grapex> imsplitbit: for some backup strategies, doesn't the user info get skipped?
21:35:33 <hub_cap> so the only pass that should change is os_admin cuz thats ours
21:36:06 <grapex> Maybe I'm thinking back to an earlier conversation, but I remember this came up and the idea was the admin tables wouldn't be touched on restore.
21:36:21 <hub_cap> grapex: i think that was b4 mysqldump right?
21:36:46 <vipul> grapex: you can choose which tables to include in backup
21:37:00 <grapex> hub_cap: Maybe, it seems like I might be speaking crazy here.
21:37:09 <robertmyers> I think we want to do all and get the users
21:37:09 <hub_cap> i think grapex is alluding to a conversation that was had regarding a internal product here
21:37:11 <grapex> vipul: For iter 1 let's just do the full thing and remember what the root enabled setting was
21:37:41 <hub_cap> yup just make sure to re-record the root enabled setting w/ the new uuid, and leave the root pass the same
21:37:44 <grapex> hub_cap: No, it was earlier than that, talking about import / export... n/m. For iter 1 you guys are right and we should record the setting.
21:37:45 <SlickNik> grapex / vipul / hub_cap: I like grapex's iter 1 idea for now.
21:37:46 <hub_cap> update os_admin
21:37:46 <vipul> not sure about mysqldump.. but xtrabackup supports expressions to exclude /include tables
21:37:57 <hub_cap> vipul: same w/ mysqldump
21:38:05 <vipul> so then we could make a call now
21:38:15 <vipul> if we want to start with a fresh set of users each time
21:38:19 <vipul> then we just exclude it now
21:38:32 <hub_cap> naw i dont think so, it doesnt make sense to _not_ include users
21:38:39 <grapex> For imports it might-
21:38:49 <hub_cap> even if so, u still have the root enabled issue
21:38:51 <grapex> that's down the road though
21:38:51 <robertmyers> well, I think that can be set by the implementation
21:38:51 <vipul> and to get manageability on restore back
21:38:54 <SlickNik> Well, from a users perspective, if I've done a bunch of work setting up new users, I don't want to have to redo that on restore, though...
21:38:58 <vipul> yea nvm
21:39:02 <vipul> you still have other tables
21:39:09 <hub_cap> lets go w/ all or nothing as of now :)
21:39:16 <hub_cap> we are devs, we dont know any better :D
21:39:24 <hub_cap> let some mysql admins tell us we are doing it wrong later
21:39:25 <hub_cap> ;)
21:39:42 * grapex blows horn to summon imsplitbit
21:39:42 <vipul> we can add a flag later to support this use case
21:39:49 <robertmyers> well, right now the command is pluggable
21:39:51 <hub_cap> #agreed w/ grapex iter 1
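A rough sketch of the "iter 1" bookkeeping agreed on here: the full backup (users included) is restored as-is, the root-enabled flag is re-recorded against the new instance uuid with the original root password left alone, and only the os_admin credentials are rotated (the helper names passed in below are assumptions, not the real model code):

    def apply_restore_policy(backup, record_root_enabled, reset_os_admin_password):
        """iter 1 policy: full restore; carry root-enabled over; rotate only os_admin."""
        if backup.get("root_enabled"):
            # Re-record that root was enabled, against the new instance uuid;
            # the backed-up root password itself stays exactly as it was.
            record_root_enabled()
        # os_admin belongs to reddwarf, so it always gets a fresh password.
        reset_os_admin_password()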
21:39:57 <hub_cap> i say we move on
21:39:59 <robertmyers> so it can be easily changed
21:40:06 <hub_cap> tru robertmyers
21:40:08 <hub_cap> division of labor
21:40:11 <SlickNik> sounds good, thanks for clarification.
21:40:11 <hub_cap> whos doin what
21:40:35 * robertmyers will work on all the fun parts
21:40:39 <hub_cap> lol
21:40:46 <SlickNik> haha
21:40:58 <SlickNik> I'm working on the innobackupex pluggable part.
21:41:40 <robertmyers> right now I'm looking at the backup model to see if we can remove logic from the views
21:42:00 <robertmyers> like a test to see if a job is running
21:42:25 <vipul> anyone doing the streaming download?
21:42:30 <juice> robertmyers: what is your intention for work on the restore?
21:42:50 <juice> vipul: I think the download is a lot more straightforward than the upload
21:43:06 <robertmyers> well, I was thinking that we create a registry, and look up the restore process from the backup type
21:43:25 <juice> since swift handles the reassembly of the pieces … or at least that is what I read in the documentation
21:44:07 <juice> robertmyers: do we do that or just mirror the configuration that is done for the backup runner?
21:44:18 <robertmyers> juice:  yes if you download the manifest it pulls down all
21:44:36 <hub_cap> id say lets think we are doing 1x backup and restore type for now
21:44:36 <SlickNik> robertmyers, do we need a registry? It makes sense to have the restore type coupled with the backup_type, right? I don't see a case where I'd backup using one type, but restore using another...
21:44:45 <hub_cap> correct, for now
21:44:51 <hub_cap> in the future we will possibly have that
21:44:57 <robertmyers> well, since we are storing the type, one might change the setting over time
21:44:57 <SlickNik> hub_cap, that was my thinking  for now, at least.
21:44:59 <hub_cap> a user uploads a "backup" of their own db
21:45:29 <hub_cap> robertmyers: i dont think that we need that _now_ tho
21:45:42 <vipul> dont' we already have the use case of 2 types?
21:45:43 <hub_cap> that could happen in the future, and we will code that when we think about it happening
21:45:47 <vipul> xtrabackup and mysqldump
21:45:51 <robertmyers> No i'm talking about us changing the default
21:45:52 <grapex> I agree
21:46:11 <grapex> Let's just put in types now.
21:46:11 <hub_cap> vipul: but xtrabackup will have its own restore, and so will mysqldump right?
21:46:21 <hub_cap> grapex: types are in the db i think already
21:46:22 <hub_cap> right?
21:46:27 <vipul> right but you need to be able to look it up since it's stored in the DB entry
21:46:27 <robertmyers> so we store the backup_type in the db and use that to find the restore method
21:46:42 <hub_cap> so what will this _restore method_ be
21:46:46 <vipul> it's really a mapping.. 'xtrabackup' -> 'XtraBackupRestorer'
21:46:48 <hub_cap> grab a file from swift and stream it in?
21:46:59 <robertmyers> vipul: yes
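The mapping vipul describes can be as small as a module-level registry keyed by the backup_type stored with the backup record; something along these lines (class and error names are illustrative, not the real guestagent code):

    class XtraBackupRestore(object):
        """Download from swift, xbstream into the datadir, 'prepare', then start mysqld."""

    class MySQLDumpRestore(object):
        """Stream the logical dump from swift straight into a running mysqld."""

    # Keyed by the backup_type recorded in the backup's DB entry.
    RESTORE_RUNNERS = {
        'xtrabackup': XtraBackupRestore,
        'mysqldump': MySQLDumpRestore,
    }

    def get_restore_runner(backup_type):
        try:
            return RESTORE_RUNNERS[backup_type]
        except KeyError:
            raise ValueError("no restore runner registered for %r" % backup_type)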
21:47:02 <hub_cap> the xtrabackup file cant be streamed back in?
21:47:06 <hub_cap> like a mysqldump file
21:47:17 <robertmyers> well, that part will be the same
21:47:29 <SlickNik> it needs to be streamed to xbstream to decompress it.
21:47:30 <juice> this discussion is do we use configuration to more or less statically choose the restore type or do we use some component that chooses it based off of the backup type?
21:47:33 <robertmyers> but the command to run will be different
21:47:39 <SlickNik> But then it has an extra step of running the "prepare"
21:47:49 <hub_cap> im confused
21:48:03 <hub_cap> w/ xtra do u not do, 1) start reading from swift, 2) start streaming to mysql
21:48:06 <juice> download then backup
21:48:09 <hub_cap> like u do w/ mysqldump
21:48:13 <juice> download is the same for either case
21:48:22 <hub_cap> mysql < dumpfile
21:48:26 <vipul> hub_cap: no... you don't pipe it in
21:48:32 <juice> backup process may vary yes?
21:48:34 <hub_cap> thats just terrible
21:48:38 <hub_cap> :)
21:48:42 <vipul> you have to 'prepare', which turns the xtrabackup format -> data files
21:48:53 <vipul> then you start mysql
21:49:22 <SlickNik> hub_cap: db consistency isn't guaranteed unless you run prepare for xtrabackup.
21:49:26 <hub_cap> ok lets go w/ what robertmyers said then.... i thought they were the same
21:49:26 <juice> same download + different restore + same startup
21:49:36 <vipul> is it the same startup?
21:49:39 <vipul> one assumes mysql is up and running
21:49:47 <vipul> other assumes it's down.. and started after restore
21:50:23 <SlickNik> Seems like it's different. I think for mysqldump, mysql already needs to be running so it can process the logical dump.
21:50:27 <hub_cap> ok so lets not worry bout the details now that we _know_ its different
21:50:29 <SlickNik> vipul: yeah.
21:50:44 <hub_cap> lets just assume that we need to know how to restore 2 different types, and let robertmyers and co handle it accordingly
21:51:08 <vipul> robertmyers: so where do we store the dump?
21:51:18 <vipul> assume that there is enough space in /vda?
21:51:31 <vipul> or actually you stream directly to mysql
21:51:46 <vipul> whereas xtrabackup streams directly to /var/lib/mysql
21:51:47 <robertmyers> good question, we may need to check for enough space
21:52:11 <robertmyers> we can see if streaming is possible
21:52:12 <vipul> mysql < swift download 'xx'?
21:52:19 <hub_cap> mysqldump shouldnt store the dump i think
21:52:25 <hub_cap> stream FTW
21:52:31 <SlickNik> I think you may be able to stream it to mysql directly.
21:53:26 <hub_cap> lets assume yes for now, and if not, solve it
21:53:38 <hub_cap> i _know_ we can for mysqldump
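To summarize the difference just walked through, assuming both backups come back out of swift as a stream (command flags below are illustrative, not final):

    import subprocess

    def restore_mysqldump(swift_stream, root_password):
        # mysqld must already be running; the logical dump is piped straight in.
        mysql = subprocess.Popen(["mysql", "-u", "root", "-p" + root_password],
                                 stdin=subprocess.PIPE)
        for chunk in swift_stream:
            mysql.stdin.write(chunk)
        mysql.stdin.close()
        mysql.wait()

    def restore_xtrabackup(swift_stream, datadir="/var/lib/mysql"):
        # mysqld must be stopped; unpack the stream, then run the extra 'prepare' step.
        xbstream = subprocess.Popen(["xbstream", "-x", "-C", datadir],
                                    stdin=subprocess.PIPE)
        for chunk in swift_stream:
            xbstream.stdin.write(chunk)
        xbstream.stdin.close()
        xbstream.wait()
        subprocess.check_call(["innobackupex", "--apply-log", datadir])
        # Only now is it safe to start mysqld again.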
21:54:12 <hub_cap> moving on?
21:54:27 <SlickNik> oh, one other clarification.
21:54:31 <hub_cap> kk
21:55:06 <SlickNik> I'll probably be done looking at the xtrabackup backup piece by today/tom.
21:55:24 <SlickNik> and so juice and I can start looking at restore.
21:55:36 <hub_cap> cool
21:55:41 <hub_cap> so on to notifications
21:55:43 <hub_cap> #topic Notifications Plan
21:55:52 <hub_cap> vipul: batter up!
21:55:52 <SlickNik> cool, thanks.
21:56:14 <robertmyers> I updated the wiki with our notifications
21:56:17 <vipul> SO thanks to robertmyers for filling out the info in wiki
21:56:25 <vipul> i wanted to see where this is on your radar
21:56:26 <SlickNik> thanks #robertmyers.
21:56:31 <vipul> in terms of pushing up the code
21:56:36 <vipul> otherwise we can start adding it in
21:56:50 <vipul> now that we have a design for what we need to do..
21:56:55 <vipul> also wanted to see how you emit 'exists' events
21:56:57 <robertmyers> well, we have it all written... so we should make a patch
21:57:17 <grapex> vipul: Describe an "exists" event.
21:57:19 <vipul> do we have a periodic task or something?
21:57:30 <vipul> that goes through and periodically checks every resource in the DB
21:57:39 <grapex> vipul: We do something like that once a day.
21:57:41 <robertmyers> we have a periodic task that runs
21:57:49 <hub_cap> isn't that _outside_ of reddwarf?
21:58:03 <robertmyers> yes
21:58:09 <hub_cap> would reddwarf _need_ to emit these?
21:58:15 <grapex> hub_cap: I don't think it should.
21:58:31 <vipul> Well it seems that every 'metering' implementation has exists events
21:58:50 <hub_cap> vipul: sure but something like ceilometer should do that
21:58:51 <vipul> so it seems anyone using reddwarf would have to build one
21:59:00 <hub_cap> notifications are based on events that happen
21:59:00 <grapex> vipul cp16net: What if we put the exist daemon into contrib?
21:59:23 <hub_cap> i personally disagree w/ exists events too, so it may color my opinion :)
21:59:37 <grapex> hub_cap: Honestly, I don't like them at all either. :)
21:59:38 <vipul> contrib sounds fine to me grapex
22:00:05 <hub_cap> i dont think that nova sends exists events vipul
22:00:14 <vipul> it's easy enough to write one.. i just feel that it's kinda necessary..
22:00:15 <hub_cap> it sends events based on a status change
22:00:23 <vipul> it might not be part of core.. not sure actually
22:00:23 <grapex> vipul: Ok. We're talking to this goofy external system, but we can find a way to separate that code. If there's some synergy here I agree we should take advantage of it.
22:00:44 <hub_cap> its necessary for our billing system, but i dont think reddwarf needs to emit them. they are _very_ billing specific
22:01:03 <hub_cap> but im fine w/ it being in contrib
22:01:11 <grapex> So how does Nova emit these events?
22:01:22 <hub_cap> i dont think nova does grapex
22:01:23 <grapex> Or rather, where? In compute for each instance?
22:01:25 <vipul> nova does the same thing as reddwarf using oslo notifications
22:01:34 <grapex> events == notifications roughly
22:01:49 <grapex> to my mind at least
22:01:57 <imsplitbit> sorry guys I got pulled away but I'm back now
22:01:58 <vipul> yep agreed interchangeable
22:01:58 <hub_cap> https://wiki.openstack.org/wiki/SystemUsageData#compute.instance.exists:
22:02:16 <hub_cap> not sure if this is old or not
22:02:27 <grapex> Hmm
22:02:30 <vipul> there is a volume.exists.. so it's possible that there is something periodic
22:02:50 <grapex> vipul: We should probably talk more after the meeting
22:02:54 <hub_cap> if there are exists events _in_ the code, then im ok w/ adding them to our code
22:03:01 <hub_cap> but god i hate them
22:03:24 <vipul> grapex: sure
22:03:29 <grapex> My only concern with combining efforts is if we don't quite match we may end up complicating both public code and both our billing related efforts by adding something that doesn't quite fit.
22:03:54 <vipul> if we keep it similar to what robertmyers published i think it'll be fairly generic
22:04:01 <vipul> and we need them for our billing system :)
22:04:32 <hub_cap> https://wiki.openstack.org/wiki/NotificationEventExamples#Periodic_Notifications:
22:04:39 <grapex> vipul: OK. He or I will be doing a pull request for notifications stuff very soon
22:04:43 <hub_cap> if nova already does this grapex maybe we are duplicating effort!
22:04:43 <grapex> within the week, hopefully
22:04:51 <vipul> grapex: sweet!
22:05:13 <hub_cap> but just cuz there is a wiki article doesnt mean its up to date
22:05:23 <SlickNik> Nice.
22:05:26 <vipul> hub_cap: even if nova emitted events, I think reddwarf should be the 'source of truth' in terms of timestamps and whatnot
22:05:27 <imsplitbit> hub_cap: if it's on the internet it's true
22:05:33 <grapex> vipul: Agreed
22:05:46 <hub_cap> vipul: ya i didnt mean use novas notification
22:05:49 <grapex> vipul: Working backwards from Nova to figure out what a "Reddwarf instance" should be could lead to issues...
22:05:52 <SlickNik> intruenet...
22:05:54 <hub_cap> i meant that we might be able to use their code :)
22:05:55 <vipul> yep
22:05:57 <hub_cap> to emit ours
22:05:59 <vipul> right.. ok
22:05:59 <grapex> hub_cap: Sorry for the misinterpretation
22:06:19 <hub_cap> no worries
22:06:23 <vipul> cool.. i think we're good on this one..
22:06:26 <vipul> let's get the base in..
22:06:30 <hub_cap> def
22:06:31 <vipul> and can discuss periodic
22:06:34 <vipul> if need be
22:06:42 <hub_cap> #link https://wiki.openstack.org/wiki/NotificationEventExamples
22:06:56 <hub_cap> #link https://wiki.openstack.org/wiki/SystemUsageData#compute.instance.exists:
22:06:59 <hub_cap> just in case
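If exists events do land in contrib, a periodic emitter along the lines discussed could be a small loop over instances using the oslo-incubator notifier (the publisher id, event name and payload fields are assumptions, as is the synced-in module path):

    from reddwarf.openstack.common.notifier import api as notifier

    def emit_exists_events(context, instances):
        # Run from a periodic task (e.g. once a day), outside the API request path.
        for instance in instances:
            payload = {
                'instance_id': instance.id,
                'tenant_id': instance.tenant_id,
                'flavor_id': instance.flavor_id,
                'created_at': str(instance.created),
            }
            notifier.notify(context, 'reddwarf.instance', 'reddwarf.instance.exists',
                            notifier.INFO, payload)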
22:07:09 <vipul> awesome thanks
22:07:36 <hub_cap> #action grapex to lead the effort to get the code gerrit'ed
22:07:46 <hub_cap> #dammit that doesnt give enough context
22:07:58 <vipul> there should be an #undo
22:08:02 <hub_cap> #action grapex to lead the effort to get the code gerrit'ed for notifications
22:08:07 <hub_cap> lol vipul ya
22:08:11 <SlickNik> or a #reaction
22:08:21 <hub_cap> #RootWrap
22:08:24 <hub_cap> lol
22:08:26 <hub_cap> #topic RootWrap
22:08:31 <hub_cap> so lets discuss
22:08:36 <vipul> ok this one is around Guest Agent..
22:08:41 <vipul> where we do 'sudo this' and 'sudo that'
22:08:59 <vipul> turns out we can't really run guest agent without giving the user sudoers privileges
22:09:05 <hub_cap> fo shiz
22:09:11 <SlickNik> #link https://wiki.openstack.org/wiki/Nova/Rootwrap
22:09:16 <vipul> so we should look at doing the root wrap thing there
22:09:20 * datsun180b listening
22:09:45 <hub_cap> yes but we should try to get it moved to common if we do that ;)
22:09:48 <hub_cap> rather than copying code
22:09:51 <vipul> I believe it's based on config where you specify everything you can do as root.. and only those things
22:09:58 <datsun180b> sounds about right
22:09:59 <vipul> hub_cap: it's already in oslo
22:10:00 <SlickNik> It's already in oslo, I believe.
22:10:20 <SlickNik> We need to move to a newer version of oslo (which might be painful) though
22:10:42 <vipul> datsun180b: I think the challenge will be to define that xml.. with every possible thing we want to be able to do
22:10:54 <vipul> but otherwise probably not too bad
22:10:58 <vipul> so maybe we BP this one
22:11:02 <datsun180b> we've got a little experience doing something similar internally
22:11:12 <hub_cap> vipul: YESSSSS
22:11:13 <SlickNik> I think we should bp it.
22:11:14 <datsun180b> i don't think a BP would hurt one bit
22:11:28 <SlickNik> I hate the fact that our user can sudo indiscriminately...
22:11:44 <vipul> yup makes hardening a bit difficult
22:12:08 <datsun180b> well if we've got sudo installed on the instance what's to stop us from deploying a shaped charge of a sudoers ahead of time
22:12:13 <datsun180b> spitballing here
22:12:36 <datsun180b> aren't there provisos for exactly what commands can and can't be run by someone granted powers
22:13:00 <vipul> you mean configure the sudoers to do exactly that?
22:13:05 <robertmyers> sudoers is pretty flexible :)
22:13:19 <hub_cap> sure but so is rootwrap :)
22:13:19 <datsun180b> right, if we know exactly what user and exactly what commands will be run
22:13:26 <hub_cap> and its "known" and it makes deployments easier
22:13:32 <hub_cap> 1 line in sudoers
22:13:33 <datsun180b> what, rootwrap over sudoers?
22:13:34 <hub_cap> rest in code
22:13:43 <hub_cap> yes rootwrap over sudoers
22:13:45 <vipul> probably should go with the common thing..
22:13:48 <hub_cap> since thats the way of the openstack
22:14:01 <SlickNik> yeah, but I don't think you can restrict arguments and have other restrictions.
22:14:09 <robertmyers> you can do one line in sudoers too, just a folder location
22:14:44 <hub_cap> sure u can do all this in sudoers
22:14:48 <hub_cap> and in rootwrap
22:14:54 <hub_cap> and prolly in _insert something here_
22:14:59 <robertmyers> options!
22:15:04 <hub_cap> but since the openstack community is going w/ rootwrap, we can too
22:15:18 * datsun180b almost mentioned apparmor
22:15:22 <juice> hub_cap: I think rootwrap's justification is that it is easier to manage than sudoers
22:15:28 <SlickNik> lol@datsun180b
22:15:47 <hub_cap> yes juice and that its controlled in code vs by operations
22:16:01 <hub_cap> https://wiki.openstack.org/wiki/Nova/Rootwrap#Purpose
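For reference, the nova shape being pointed at: the service account keeps exactly one sudoers line for the rootwrap entry point, and every allowed command is declared in filter files (the reddwarf-specific paths and command names below are assumptions):

    # /etc/sudoers.d/reddwarf-rootwrap  -- the single sudoers entry
    reddwarf ALL = (root) NOPASSWD: /usr/local/bin/reddwarf-rootwrap /etc/reddwarf/rootwrap.conf *

    # /etc/reddwarf/rootwrap.d/guestagent.filters  -- one filter per allowed command
    [Filters]
    mysqld_safe: CommandFilter, /usr/bin/mysqld_safe, root
    mount: CommandFilter, /bin/mount, root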
22:16:43 <vipul> +1 for root wrap
22:17:05 <hub_cap> +1 for rootwrap
22:17:15 <hub_cap> +100 for common things shared between projects
22:17:21 <vipul> So I can start a BP for this one.. hub_cap just need to get us a dummy bp
22:17:31 <SlickNik> +1 for rootwrap
22:17:53 <datsun180b> -1, the first step of rootwrap in that doc is an entry in sudoers!
22:18:07 <datsun180b> i'm not going to win but i'm voting on the record
22:18:08 <SlickNik> Ah, did we run out of dummies already?
22:18:16 <hub_cap> vipul: https://blueprints.launchpad.net/reddwarf/+spec/parappa-the-rootwrappah
22:18:27 <SlickNik> nice name
22:18:36 <grapex> hub_cap: that is the greatest blue print name of all time.
22:18:36 <hub_cap> datsun180b: read the rest of the doc then vote ;)
22:18:40 <hub_cap> :P
22:18:59 <datsun180b> it looks to be about par for nova
22:19:08 <hub_cap> there was _much_ discussion on going to rootwrap 2 version agao
22:19:10 <hub_cap> *ago
22:19:57 <hub_cap> moving on?
22:20:00 <SlickNik> Are we good with rootwrap?
22:20:03 <SlickNik> sounds good.
22:20:09 <vipul> oh yes.. i think we're good
22:20:20 <hub_cap> #topic quota tests w/ xml
22:20:29 <datsun180b> yes let's
22:20:41 <hub_cap> grapex: lets chat about that
22:20:47 <grapex> hub_cap: Sure.
22:21:10 <grapex> It looks like the "skip_if_xml" is still called for the quotas test, so that needs to be turned off.
22:21:21 <grapex> https://github.com/stackforge/reddwarf/blob/master/reddwarf/tests/api/instances.py , line 243
22:21:45 <vipul> i thought we were fixing this a week ago
22:21:49 <vipul> maybe that was for limits
22:22:25 <vipul> esp1: weren't you the one that had a patch?
22:22:34 <grapex> vipul: Quotas was fixed, but this was still turned off. I was gone at the time when the test fixes were merged, so I didn't see this until recently... sorry.
22:23:06 <esp1> vipul: yeah
22:23:25 <esp1> I can retest it if you like.
22:23:45 <esp1> but I'm pretty sure they were working last week.
22:24:13 <hub_cap> is the flag still in the code?
22:24:33 <grapex> esp1: The issue is in the second test run - right now if "skip_with_xml" is still called, the tests get skipped in XML mode. That one function needs to be removed.
22:24:36 <vipul> esp1: https://github.com/stackforge/reddwarf/blob/master/reddwarf/tests/api/instances.py#L243
22:24:38 <hub_cap> ps grapex, if u do github/path/to/file.py#LXXX it will take u there
22:24:44 <hub_cap> grapex: like that :P
22:24:49 <grapex> hub_cap: Thanks for the tip... good ole github
22:24:51 <esp1> grapex: ah ok.
22:25:12 <datsun180b> easy enough to reenable the tests, the hard part is making sure we still get all-green afterward
22:25:13 <esp1> I think I saw a separate bug logged for xml support in quotas
22:25:17 <grapex> esp1: Sorry, I thought I'd made a bug or blueprint or something for this explaining it but I can't find it now... *sigh*
22:25:25 <grapex> esp1: Maybe that was it
22:25:46 <esp1> grapex: I think you did.  I can pull it up and put it on my todo list
22:25:50 <grapex> So in general, any new feature should work with JSON and XML out of the gate... the skip thing was a temporary thing to keep the tests from failing.
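For anyone following along, the guard being removed is roughly of this shape; it was only ever meant as a temporary escape hatch while XML support caught up (names and the argument here are illustrative, not the exact int-test code):

    from nose.plugins.skip import SkipTest

    def skip_if_xml(client_is_xml):
        """Temporary guard: skip the current test when the suite runs the XML client."""
        if client_is_xml:
            raise SkipTest("not yet verified against the XML client")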
22:25:57 <hub_cap> +1billion
22:26:09 <grapex> esp1: Cool.
22:26:15 <SlickNik> I agree. +1
22:26:18 <esp1> np
22:26:33 <esp1> #link https://bugs.launchpad.net/reddwarf/+bug/1150903
22:26:33 <grapex> esp1: One more tiny issue
22:26:39 <esp1> yep,
22:27:07 <grapex> esp1: that test needs to not be in the "GROUP_START", since other tests depend on that group but may not need quotas to work.
22:27:13 <grapex> Oh awesome, thanks for finding that.
22:27:43 <esp1> grapex: ah ok.  yeah I remember you or esmute talking about it.
22:27:57 <hub_cap> grapex: is there a doc'd bug for that?
22:28:00 <esp1> I'll take care of that bug too.  (maybe needs to be logged first)
22:28:12 <vipul> #action esp1 to re-enable quota tests w/XML support and remove them from GROUP_START
22:28:30 <hub_cap> perfect, we good on that issue?
22:28:43 <grapex> hub_cap: Looks like it.
22:28:53 <datsun180b> sounds good
22:28:54 <vipul> and delete the 'skip_if_xml' method :)
22:29:00 <vipul> all together
22:29:01 <esp1> right
22:29:04 <SlickNik> Sounds good
22:29:06 <esp1> sure why not.
22:29:07 <SlickNik> Thanks esp1
22:29:30 <hub_cap> baby in hands
22:29:43 <hub_cap> sry
22:29:44 <esmute> what is the xml support?
22:29:45 <hub_cap> #topic Actions / Action Events
22:29:52 <SlickNik> does it improve your spelling? :)
22:30:02 <hub_cap> lol no
22:30:05 <hub_cap> its terrible either way
22:30:26 <vipul> esmute: the python client can do both json and xml.. we run tests twice, once with xml turned on and once without
22:30:36 <hub_cap> so i thought of 3 possible ways to do actions and action_events
22:30:51 <esp1> esmute: we support both JSON and XML in the Web Service API
22:31:00 <hub_cap> 1) pass an async response uuid back for async events and poll based on that (our dnsaas does this for some events)
22:31:09 <hub_cap> lemme find the email and paste it
22:31:19 <esmute> thanks vipul, esp1... is the conversion happening in the client?
22:31:41 <hub_cap> 1) Async callbacks - a la DNS. Send back a callback uuid that a user can query against a common interface. This is more useful for things that do not return an id, such as creating a database or a user. See [1] for more info. For items that have a uuid, it would make more sense to just use that uuid.
22:31:46 <hub_cap> 2) HEAD /whatever/resource/id to get the status of that object. This is like the old cloud servers call that would tell u what state your instance was whilst building.
22:31:50 <hub_cap> 3) NO special calls. Just provide feedback on the GET calls for a given resource. This would work for both items with a uuid, and items without (cuz a instance has a uuid and u can append a username or dbname to it).
22:31:54 <hub_cap> [1] http://docs.rackspace.com/cdns/api/v1.0/cdns-devguide/content/sync_asynch_responses.html
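As a concrete picture of option 3, the existing GET simply grows a detail field fed by the recorded action; for example (the statusDetail name and its contents are assumptions for illustration):

    GET /v1.0/{tenant_id}/instances/{uuid}
    {
        "instance": {
            "id": "<instance uuid>",
            "name": "mydb",
            "status": "ACTIVE",
            "statusDetail": "last resize failed: request rejected by Nova",
            "flavor": {"id": "2"}
        }
    }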
22:31:56 <esp1> esmute: sorta I'll walk you through it tomorrow :)
22:32:01 <esmute> cool
22:32:04 <hub_cap> i think that #3 was the best option for uniformity
22:32:27 <hub_cap> does anyone feel same/different on that?
22:32:54 <vipul> wait so is this how the user determines the status of an action (like instance creation state)?
22:33:03 <vipul> 3 is what we do right
22:33:04 <vipul> today
22:33:08 <hub_cap> correct
22:33:13 <hub_cap> but it gives no status
22:33:16 <hub_cap> err
22:33:19 <hub_cap> it gives no description
22:33:21 <hub_cap> or failure info
22:33:22 <grapex> hub_cap: Do you mean uniformity between other OS apis?
22:33:36 <hub_cap> grapex: uniformity to what nova does
22:33:45 <hub_cap> and uniformity as in, itll work for actions that dont have uuids (users/dbs)
22:34:00 <grapex> hub_cap: I think we should go for #1. I know it isn't the same as nova but I think it would really help if we could query actions like that.
22:34:10 <grapex> Eventually some other project will come up with a similar idea.
22:34:39 <hub_cap> my thought is to start w/ #3
22:34:51 <hub_cap> since itll be the least work and itll provide value
22:34:53 <vipul> in #1, the user is providing a callback (url or something)?
22:34:53 <SlickNik> So, if I  understand correctly 3 is to extend the GET APIs that we have today to also provide the action description.
22:34:56 <grapex> hub_cap: That makes sense, as long as #1 is eventually possible
22:34:59 <hub_cap> essentially #3 is #1
22:35:03 <hub_cap> but w/ less data
22:35:09 <hub_cap> thats why i was leaning toward #3
22:35:21 <grapex> Yeah, sorry... if we have unique action IDs in the db we can eventually add that and figure out how the API should look
22:35:22 <vipul> #1 seems more like a PUSH model.. where reddwarf notifies
22:35:26 <hub_cap> the only reason u need a callback url in dns aas is cuz they dont control the ID
22:35:56 <hub_cap> well they are all polling honestly but i think i see yer point vipul
22:36:30 <hub_cap> i honestly dislike the "callback" support
22:36:39 <hub_cap> because whats the diff between these scenarios
22:36:45 <hub_cap> 1) create instance, get a uuid for the instance
22:36:52 <hub_cap> crap let me start over
22:37:03 <hub_cap> 1) create instance, get a uuid for the instance, poll GET /instance/uuid for status
22:37:25 <hub_cap> 2) create instance, get new callback uuid and uuid for instance, poll GET /actions/callback_uuid for status
22:37:32 <hub_cap> other than 2 is more work :P
22:37:40 <vipul> that's not very clean
22:37:54 <vipul> if you're going to do 2) then we should be pushing to them.. invoking the callback
22:38:05 <hub_cap> ya and we wont be doing that anytime soon :)
22:38:18 <hub_cap> all in favor for the "Easy" route, #3 above?
22:38:27 <vipul> I
22:38:29 <vipul> Aye
22:38:31 <grapex> I'm sorry, I'm confused.
22:38:33 <hub_cap> eye
22:38:36 <vipul> eye
22:38:40 <hub_cap> lol vipul
22:38:50 <hub_cap> grapex: gohead
22:38:56 <grapex> #2 - you just mean the user would need to poll to get the status?
22:39:04 <hub_cap> correct just like dns
22:39:21 <grapex> How would 1 and 2 map to things like instance resizes?
22:39:40 <hub_cap> http://docs.rackspace.com/cdns/api/v1.0/cdns-devguide/content/sync_asynch_responses.html
22:40:01 <hub_cap> GET /instance/uuid vs GET /instance/callback_uuid_u_got_from_the_resize
22:40:17 <hub_cap> GET instance/uuid already says its in resize
22:40:24 <hub_cap> this will just give u more info if something goes wrong
22:40:32 <vipul> really the difference is GET /resourceID vs GET /JobID
22:40:33 <hub_cap> which was the original point of actions in the first place
22:40:38 <SlickNik> Honestly the only reason I'd consider 2 would be if there were actions that are mapped to things other than resources (or across multiple resource).
22:40:51 <grapex> SlickNik: That's my concern too.
22:41:08 <hub_cap> we cross that bridge when we come to it
22:41:21 <hub_cap> ^ ^ my favorite phrase :)
22:41:24 <grapex> Ok- as long as we can start with things as they are today
22:41:27 <vipul> do we want to consider everything a Job?
22:41:32 <grapex> and each action has its own unique ID
22:41:39 <hub_cap> grapex: it does/will
22:41:42 <grapex> Or maybe a "task"?
22:41:51 <grapex> That our taskmanager can "manage"? :)
22:41:54 <vipul> heh
22:42:16 <hub_cap> lol grapex
22:42:19 <SlickNik> heh
22:42:25 <grapex> Actually live up to its name finally instead of being something we should've named "reddwarf-api-thread-2"
22:42:26 <hub_cap> ehe
22:42:40 <vipul> that might have been the intention.. but we don't do a whole lot of managing task states
22:42:44 <hub_cap> reddwarf-handle-async-actions-so-the-api-can-return-data
22:42:49 <vipul> we record a task id i believe.. but that's it
22:42:56 <grapex> vipul: Yeah its pretty silly.
22:43:14 <grapex> Well I'm up for calling it action or job or task or whatever.
22:43:21 <grapex> hub_cap: Nova calls it action already, right?
22:43:32 <hub_cap> nova calls it instance_action
22:43:37 <hub_cap> cuz it only applies to instances
22:43:38 <vipul> gross
22:43:43 <grapex> Ah, while this would be for anything.
22:43:52 <hub_cap> im calling it action cuz its _any_ action
22:44:00 <hub_cap> likely for things like create user
22:44:01 <grapex> task would kind of make sense. But I'm game for any name.
22:44:07 <hub_cap> ill do instance uuid - username
22:44:16 <hub_cap> as a unique id for it
22:44:19 <vipul> backup id
22:44:25 <hub_cap> lets call it poopoohead then
22:44:30 <hub_cap> and poopoohead_actions
22:44:51 <SlickNik> lol!
22:44:52 <juice> hub_cap: hope that's not inspired by holding a baby in your hands
22:44:54 <grapex> hub_cap: I think we can agree on that.
22:44:55 <juice> sounds like a mess
22:44:57 <hub_cap> HAHA
22:45:01 <hub_cap> nice yall
22:45:12 <vipul> hub_cap does it only apply to async things?
22:45:23 <hub_cap> likely
22:45:32 <hub_cap> since sync things will return an error if it happens
22:45:47 <hub_cap> but it can still record sync things if we even have any of those
22:45:50 <hub_cap> that arent GET calls
22:45:59 <hub_cap> basically anything that mods a resource
22:46:18 <grapex> So I'm down for going route #3, which if I understand it means we really won't change the API at all but just add this stuff underneath
22:46:37 <vipul> i'm assuming we add a 'statusDetail' to the response of every API?
22:46:55 <hub_cap> to the GET calls vipul likely
22:47:03 <grapex> because it seems like this gets us really close to tracking, which probably everyone is salivating for, and we may want to change the internal DB schema a bit over time before we offer an API for it.
22:47:10 <SlickNik> only the async GETs, I thought.
22:47:33 <hub_cap> def grapex
22:47:43 <grapex> So instance get has a "statusDetail" as well?
22:48:03 <hub_cap> likely all GET's will have a status/detail
22:48:05 <grapex> So there's status and then "statusDetail"? That implies a one to one mapping with resources and actions.
22:48:17 <hub_cap> maybe not if it is not a failure
22:48:18 <grapex> Assuming "statusDetail" comes from the action info in the db.
22:48:37 <vipul> that's my understanding as well grapex
22:48:59 <hub_cap> it implies a 1x1 mapping between a resource and its present state
22:49:14 <hub_cap> it wont tell u last month your resize failed
22:49:23 <hub_cap> itll tell you your last resize failed if its in failure state
22:49:27 <grapex> hub_cap: But that data will still be in the db, right?
22:49:35 <hub_cap> fo shiiiiiiiz
22:49:37 <SlickNik> Would a flavor GET need a status/detail?
22:49:53 <vipul> prolly not since that would be static data
22:49:59 <hub_cap> correct
22:50:00 <grapex> Ok. Honestly I'm ok, although I think statusDetail might look a little gross.
22:50:03 <grapex> For instance
22:50:06 <grapex> if a resize fails
22:50:14 <grapex> today it goes to ACTIVE status and gets the old flavor id again.
22:50:32 <grapex> So in that case, would "statusDetail" be something like "resize request rejected by Nova!" or something?
22:51:09 <grapex> Because that sounds more like a "lastActionStatus" or something similar. "statusDetail" implies its currently in that status rather than being historical.
22:51:18 <hub_cap> that is correct
22:51:22 <vipul> i guess that could get interesting if you have two requests against a single resource
22:51:24 <hub_cap> let me look @ how servers accomplishes this
22:51:49 <vipul> you may lose the action you care about
22:51:52 * hub_cap puts a cap on meeting
22:51:59 <hub_cap> lets discuss this on irc tomorrow
22:52:05 <hub_cap> its almost 6pm in tx
22:52:23 <hub_cap> and i have to go to the bakery to get some bread b4 it closes
22:52:33 <hub_cap> i will say that there is more thinking that needs to go into this bp
22:52:39 <SlickNik> Sounds good. Want to think about this a bit more as well.
22:52:41 <hub_cap> and ill add to the bp (which is lacking now)
22:52:47 <hub_cap> SlickNik: agreed
22:52:56 <grapex> Maybe we should discuss moving the meeting an hour earlier
22:53:01 <vipul> yup good talk
22:53:12 <datsun180b> not a bad idea
22:53:14 <grapex> A few people had to go home halfway through today since it's been raining hard here
22:53:26 <vipul> a little rain gets in the way?
22:53:35 <vipul> i'm game for 1pm PST start
22:53:38 <grapex> Which in Texas happens so rarely it can be an emergency
22:53:40 <SlickNik> I'd be up with that, too
22:53:40 <datsun180b> this is austin, rain is a rarer sight than UFOs
22:53:41 <hub_cap> vipul: lol its tx
22:53:47 <hub_cap> rain scares texans
22:53:51 <SlickNik> It's probably like snow in Seattle. :)
22:53:58 <juice> or sun
22:53:59 <vipul> or sun
22:54:01 <vipul> damn
22:54:03 <vipul> you beat me to it
22:54:03 <SlickNik> nice
22:54:05 <hub_cap> HAHA
22:54:06 <SlickNik> same time
22:54:33 <vipul> this room always seems empty before us
22:54:39 <vipul> so let's make it happen for next week
22:54:44 <hub_cap> LOL we are the only people who use it ;)
22:54:47 <vipul> grapex: we need to talk about the prezo
22:54:53 <grapex> vipul: One person on the team said they needed to go home to roll up the windows on their other car which had been down for the past five years.
22:55:01 <vipul> lol
22:55:04 <hub_cap> grapex: hahahaa
22:55:11 <SlickNik> lolol
22:55:38 <hub_cap> so end meeting?
22:55:42 <datsun180b> so then next week, meeting moved to 3pm CDT/1pm PDT?
22:55:47 <grapex> Real quick
22:55:54 <grapex> We're all cool if hub_cap goes forward on actions right?
22:55:58 <grapex> iter 1 is just db work
22:56:05 <grapex> and we can raise issues during the pull request if there are any
22:56:10 <hub_cap> yup grapex thats what im gonna push the first iter
22:56:12 <vipul> Yea, let's do it
22:56:14 <grapex> Cool.
22:56:17 <SlickNik> I'm fine with that.
22:56:19 <SlickNik> +1
22:56:21 <grapex> I'm looking forward to it. :)
22:56:42 <SlickNik> Sweetness.
22:56:54 <hub_cap> aight then
22:56:56 <hub_cap> #endmeeting