*** sacharya has joined #openstack-meeting-alt | 00:00 | |
*** bdpayne has quit IRC | 00:21 | |
*** yidclare has quit IRC | 01:08 | |
*** sarob has joined #openstack-meeting-alt | 01:44 | |
*** esp1 has joined #openstack-meeting-alt | 02:55 | |
*** esp1 has left #openstack-meeting-alt | 03:04 | |
*** bdpayne has joined #openstack-meeting-alt | 03:24 | |
*** bdpayne has quit IRC | 03:26 | |
*** esp1 has joined #openstack-meeting-alt | 03:28 | |
*** sarob has quit IRC | 04:13 | |
*** SergeyLukjanov has joined #openstack-meeting-alt | 04:15 | |
*** sacharya has quit IRC | 04:57 | |
*** amyt has joined #openstack-meeting-alt | 05:05 | |
*** esp1 has quit IRC | 05:12 | |
*** amyt has quit IRC | 05:12 | |
*** amyt has joined #openstack-meeting-alt | 05:13 | |
*** amyt has quit IRC | 05:22 | |
*** rmohan has quit IRC | 05:35 | |
*** rmohan has joined #openstack-meeting-alt | 05:36 | |
*** SergeyLukjanov has quit IRC | 05:40 | |
*** nimi has left #openstack-meeting-alt | 07:34 | |
*** nimi has quit IRC | 07:34 | |
*** openstack has joined #openstack-meeting-alt | 07:41 | |
*** ChanServ sets mode: +o openstack | 07:41 | |
*** SergeyLukjanov has joined #openstack-meeting-alt | 07:47 | |
*** SergeyLukjanov has quit IRC | 07:57 | |
*** SergeyLukjanov has joined #openstack-meeting-alt | 08:56 | |
*** SergeyLukjanov has quit IRC | 10:01 | |
*** SergeyLukjanov has joined #openstack-meeting-alt | 10:03 | |
*** dhellmann has quit IRC | 11:50 | |
*** rnirmal has joined #openstack-meeting-alt | 12:17 | |
*** SergeyLukjanov has quit IRC | 12:44 | |
*** SergeyLukjanov has joined #openstack-meeting-alt | 13:07 | |
*** dhellmann has joined #openstack-meeting-alt | 13:33 | |
*** SergeyLu_ has joined #openstack-meeting-alt | 13:40 | |
*** SergeyLukjanov has quit IRC | 13:41 | |
*** SergeyLu_ is now known as SergeyLukjanov | 13:41 | |
*** SergeyLukjanov has quit IRC | 13:42 | |
*** sacharya has joined #openstack-meeting-alt | 13:46 | |
*** jcru has joined #openstack-meeting-alt | 13:49 | |
*** SergeyLukjanov has joined #openstack-meeting-alt | 13:54 | |
*** SergeyLukjanov has quit IRC | 13:56 | |
*** djohnstone has joined #openstack-meeting-alt | 14:15 | |
*** cloudchimp has joined #openstack-meeting-alt | 14:21 | |
*** SergeyLukjanov has joined #openstack-meeting-alt | 14:31 | |
*** rnirmal_ has joined #openstack-meeting-alt | 14:43 | |
*** rnirmal_ has joined #openstack-meeting-alt | 14:44 | |
*** jcru is now known as jcru|away | 14:46 | |
*** cp16net is now known as cp16net|away | 14:46 | |
*** rnirmal has quit IRC | 14:47 | |
*** rnirmal_ is now known as rnirmal | 14:47 | |
*** sdake_ has quit IRC | 14:49 | |
*** sacharya has quit IRC | 14:55 | |
*** jcru|away is now known as jcru | 14:59 | |
*** cloudchimp has quit IRC | 15:04 | |
*** amyt has joined #openstack-meeting-alt | 15:10 | |
*** dhellmann has quit IRC | 15:16 | |
*** SergeyLukjanov has quit IRC | 15:56 | |
*** SergeyLukjanov has joined #openstack-meeting-alt | 15:58 | |
*** sacharya has joined #openstack-meeting-alt | 16:05 | |
*** amyt has quit IRC | 16:20 | |
*** amyt has joined #openstack-meeting-alt | 16:20 | |
*** vipul is now known as vipul|away | 16:24 | |
*** vipul|away is now known as vipul | 16:24 | |
*** rnirmal has quit IRC | 16:29 | |
*** bdpayne has joined #openstack-meeting-alt | 16:33 | |
*** dhellmann has joined #openstack-meeting-alt | 16:34 | |
*** SergeyLukjanov has quit IRC | 16:37 | |
*** rmohan has quit IRC | 16:42 | |
*** rmohan has joined #openstack-meeting-alt | 16:42 | |
*** esp1 has joined #openstack-meeting-alt | 16:42 | |
*** esp1 has left #openstack-meeting-alt | 16:47 | |
*** rmohan has quit IRC | 16:48 | |
*** rmohan has joined #openstack-meeting-alt | 16:50 | |
*** yidclare has joined #openstack-meeting-alt | 17:02 | |
*** rnirmal has joined #openstack-meeting-alt | 17:03 | |
*** rnirmal has quit IRC | 17:04 | |
*** rnirmal has joined #openstack-meeting-alt | 17:08 | |
*** sacharya has quit IRC | 17:08 | |
*** rmohan has quit IRC | 17:15 | |
*** rmohan has joined #openstack-meeting-alt | 17:18 | |
*** sdake_ has joined #openstack-meeting-alt | 17:56 | |
*** SergeyLukjanov has joined #openstack-meeting-alt | 18:01 | |
*** yidclare has quit IRC | 18:06 | |
*** cp16net|away is now known as cp16net | 18:07 | |
*** yidclare has joined #openstack-meeting-alt | 18:11 | |
*** SlickNik has joined #openstack-meeting-alt | 18:22 | |
*** SlickNik has left #openstack-meeting-alt | 18:23 | |
*** sacharya has joined #openstack-meeting-alt | 18:32 | |
*** sarob has joined #openstack-meeting-alt | 18:34 | |
*** vipul is now known as vipul|away | 18:49 | |
*** vipul|away is now known as vipul | 18:49 | |
*** rmohan has quit IRC | 18:51 | |
*** rmohan has joined #openstack-meeting-alt | 18:52 | |
*** vipul is now known as vipul|away | 18:53 | |
*** vipul|away is now known as vipul | 18:53 | |
*** yidclare has quit IRC | 18:59 | |
*** jcru has quit IRC | 19:00 | |
*** yidclare has joined #openstack-meeting-alt | 19:02 | |
*** vipul is now known as vipul|away | 19:08 | |
*** jcru has joined #openstack-meeting-alt | 19:09 | |
*** sarob has quit IRC | 19:17 | |
*** SergeyLukjanov has quit IRC | 19:22 | |
*** SergeyLukjanov has joined #openstack-meeting-alt | 19:25 | |
*** heckj has joined #openstack-meeting-alt | 19:26 | |
*** sacharya has quit IRC | 19:27 | |
*** cp16net is now known as cp16net|away | 19:30 | |
*** cp16net|away is now known as cp16net | 19:31 | |
*** vipul|away is now known as vipul | 19:56 | |
*** cp16net is now known as cp16net|away | 20:14 | |
*** cp16net|away is now known as cp16net | 20:15 | |
*** rnirmal has quit IRC | 20:19 | |
*** yidclare has quit IRC | 20:25 | |
*** yidclare has joined #openstack-meeting-alt | 20:27 | |
*** SergeyLukjanov has quit IRC | 20:34 | |
*** sdake_ has quit IRC | 20:47 | |
*** hub_cap has joined #openstack-meeting-alt | 20:52 | |
*** sdake_ has joined #openstack-meeting-alt | 20:52 | |
*** jcru has quit IRC | 20:54 | |
*** esp1 has joined #openstack-meeting-alt | 20:54 | |
*** robertmyers has joined #openstack-meeting-alt | 20:54 | |
*** jcru has joined #openstack-meeting-alt | 20:54 | |
*** esp1 has joined #openstack-meeting-alt | 20:54 | |
*** datsun180b has joined #openstack-meeting-alt | 20:58 | |
hub_cap | #startmeeting reddwarf | 20:59 |
openstack | Meeting started Tue Apr 2 20:59:35 2013 UTC. The chair is hub_cap. Information about MeetBot at http://wiki.debian.org/MeetBot. | 20:59 |
openstack | Useful Commands: #action #agreed #help #info #idea #link #topic #startvote. | 20:59 |
*** openstack changes topic to " (Meeting topic: reddwarf)" | 20:59 | |
openstack | The meeting name has been set to 'reddwarf' | 20:59 |
datsun180b | hello | 20:59 |
robertmyers | hello | 20:59 |
hub_cap | as usual, >>> time.sleep(120) | 20:59 |
djohnstone | hi | 21:00 |
*** SlickNik has joined #openstack-meeting-alt | 21:00 | |
vipul | hola | 21:00 |
*** saurabhs has joined #openstack-meeting-alt | 21:00 | |
SlickNik | hey there | 21:00 |
annashen | hi | 21:00 |
esp1 | hello | 21:01 |
cp16net | present | 21:01 |
hub_cap | #link https://wiki.openstack.org/wiki/Meetings/RedDwarfMeeting | 21:01 |
hub_cap | #link http://eavesdrop.openstack.org/meetings/reddwarf/2013/reddwarf.2013-03-26-20.59.html | 21:01 |
juice | greetings | 21:01 |
imsplitbit | greets | 21:01 |
imsplitbit | we ready to get this party started? | 21:02 |
hub_cap | yup | 21:02 |
hub_cap | #topic Action items | 21:02 |
*** openstack changes topic to "Action items (Meeting topic: reddwarf)" | 21:02 | |
* juice is doing a shot | 21:02 | |
hub_cap | nice | 21:02 |
hub_cap | someone snag grapex | 21:02 |
vipul | where's the alcohol? | 21:02 |
SlickNik | Let's do it. | 21:02 |
datsun180b | he's trying to resolve his connection | 21:02 |
hub_cap | and smack him w/ a trout | 21:02 |
hub_cap | ok datsun180b, still smack him w a trout, hes on a mac | 21:03 |
hub_cap | so ill skip him for now | 21:03 |
hub_cap | my action item is next, and i actually added it to the agenda today | 21:03 |
hub_cap | so ill skip it till then (action / action items) | 21:03 |
hub_cap | vipul: yer next, patch for backUps to database-api | 21:03 |
vipul | yea, haven't gotten around to it | 21:04 |
vipul | promise to do it this week! | 21:04 |
vipul | #action Vipul to publish backup API to database-api | 21:04 |
hub_cap | cool, can u re-action it | 21:04 |
hub_cap | so backupRef vs backupUUID | 21:04 |
hub_cap | i believe we decided backupRef was fine right, but it could just be a uuid no biggie? | 21:04 |
hub_cap | oh nm | 21:05 |
hub_cap | i emailed jorge | 21:05 |
hub_cap | he never emailed me back | 21:05 |
hub_cap | he hates me | 21:05 |
SlickNik | #action SlickNik to finish publishing security groups API to database-api | 21:05 |
hub_cap | ill send him another email | 21:05 |
hub_cap | <3 SlickNik | 21:05 |
SlickNik | I remember I started on that one, but I still have a couple of changes to it. | 21:05 |
hub_cap | #action get ahold of jOrGeW to make sure about backupRef vs backupUUID | 21:05 |
vipul | SlickNik yea i think that was abandoned | 21:05 |
*** sdake_ has quit IRC | 21:05 | |
vipul | every openstack project seems to do this shit differently | 21:05 |
vipul | in terms of uuid vs ref | 21:06 |
SlickNik | I have the change, just haven't gotten around to making the couple of changes I wanted. | 21:06 |
hub_cap | i know vipul... | 21:06 |
hub_cap | there is no standard | 21:06 |
hub_cap | its terrible | 21:06 |
SlickNik | yeah, sucks. | 21:06 |
vipul | no wonder George Reese is always putting the smack down | 21:06 |
hub_cap | personally, i think of a ref being a remote system | 21:06 |
hub_cap | oh god nooooooo | 21:06 |
vipul | he's on my twitter feed | 21:06 |
vipul | big time complaints :) | 21:06 |
SlickNik | Who's George Reese? | 21:06 |
vipul | enstratus | 21:07 |
hub_cap | SlickNik: search your openstack mail | 21:07 |
SlickNik | And why are his peanut butter cups awful? | 21:07 |
*** grapex has joined #openstack-meeting-alt | 21:07 | |
hub_cap | grapex: !!! | 21:07 |
grapex | hub_cap: That was awesome | 21:07 |
SlickNik | hub_cap: will do… | 21:07 |
grapex | What's up? | 21:07 |
hub_cap | ok so back to ref vs uuid, in my brain | 21:07 |
hub_cap | a ref is remote | 21:07 |
hub_cap | and a uuid is local | 21:07 |
datsun180b | lacking context, it seems the 'uu' part of 'uuid' disagrees with that | 21:08 |
robertmyers | nice | 21:08 |
vipul | heh... had to bring that up | 21:08 |
hub_cap | sorry my definition of local is not correct | 21:08 |
hub_cap | local being controlled by reddwarf | 21:08 |
hub_cap | as opposed to controlled by a 3rd party system | 21:08 |
hub_cap | yes uu still applies :) | 21:08 |
vipul | in that case we should go with UUID then | 21:09 |
hub_cap | well thats what im leaning toward | 21:09 |
hub_cap | but let me just touch back w/ jorge | 21:09 |
hub_cap | its fairly easy to change right? | 21:09 |
vipul | Yea pretty minor | 21:09 |
robertmyers | how abut backupID? | 21:09 |
vipul | it's ref now | 21:09 |
robertmyers | not UUID | 21:09 |
hub_cap | Id might be better than uuid | 21:09 |
vipul | yea that's probably better | 21:09 |
SlickNik | It's ref now, but easily changed... | 21:09 |
hub_cap | so BackUpId | 21:09 |
hub_cap | :P | 21:10 |
SlickNik | I like backupId | 21:10 |
grapex | Sorry gang, my IRC client had a visual glitch- are we talking about using ID or UUID for what's now known in the client as "backupRef?" | 21:10 |
vipul | k, we'll wait for Jorge? | 21:10 |
hub_cap | ya but lets lean heavily toward backupId | 21:10 |
SlickNik | yes, grapex | 21:10 |
grapex | Ok | 21:10 |
vipul | steveleon: can we change that now then? | 21:10 |
vipul | esmute ^ | 21:10 |
hub_cap | grapex: lets go back to your action item | 21:10 |
*** sdake_ has joined #openstack-meeting-alt | 21:10 | |
hub_cap | xml lint integration in reddwarf grapex | 21:10 |
grapex | Sorry, still nothing so far. Had a conference last week which threw me off track. | 21:11 |
hub_cap | fo sure, plz re-action it | 21:11 |
grapex | Ok | 21:12 |
hub_cap | ok so lets say we are done w action items | 21:12 |
esmute | yeah we can change it.. | 21:12 |
esmute | i will have to change the rd-client that just got merged a few hours ago too | 21:12 |
hub_cap | #topic Status of CI/jenkins/int-tests | 21:12 |
*** openstack changes topic to "Status of CI/jenkins/int-tests (Meeting topic: reddwarf)" | 21:12 | |
hub_cap | esmute: okey | 21:12 |
vipul | esmute: thanks | 21:12 |
vipul | So not having a int-test gate has been killing us it seems | 21:13 |
hub_cap | vipul: its all yo mang | 21:13 |
vipul | SlickNik and I are working on a jenkins here at HP that will listen to gerrit triggers | 21:13 |
vipul | and give a +1 / -1 vote | 21:13 |
hub_cap | ok it can be nonvoting too | 21:13 |
vipul | got it sort of working, but the plugin we use to spin up a VM needs some love | 21:13 |
hub_cap | if thats hard or anything | 21:13 |
hub_cap | AHH | 21:13 |
datsun180b | should it be more like check/cross then? | 21:13 |
hub_cap | are u using the jclouds one? | 21:13 |
vipul | no, home grown | 21:14 |
hub_cap | datsun180b: j/y | 21:14 |
vipul | jruby thingie | 21:14 |
datsun180b | considering that builds that don't pass int-tests aren't worth shipping? | 21:14 |
hub_cap | oh fun vipul | 21:14 |
SlickNik | nope, it's something that one of the folks here came up with. | 21:14 |
hub_cap | datsun180b: correct | 21:14 |
vipul | Yea but i think we want voting | 21:14 |
hub_cap | SlickNik: do you guys adhere to the openstack api internally? if so the jclouds one is bomb | 21:14 |
vipul | hub_cap: We should try the jclouds one.. honestly haven't even tried it | 21:14 |
SlickNik | Yeah, it needs a couple of changes to be able to pass the gerrit id's from the trigger to the new instance it spins up. | 21:15 |
hub_cap | its great | 21:15 |
hub_cap | itll spawn a server, if it fails itll spawn a new one | 21:15 |
hub_cap | it sets up keys to ssh | 21:15 |
hub_cap | it does a lot of work for u | 21:15 |
SlickNik | hub_cap, do you have a link to the jclouds plugin you speak of? | 21:15 |
vipul | one other thing missing is checking to see if tests passed or not.. | 21:15 |
hub_cap | https://wiki.jenkins-ci.org/display/JENKINS/JClouds+Plugin | 21:15 |
hub_cap | #link https://wiki.jenkins-ci.org/display/JENKINS/JClouds+Plugin | 21:15 |
vipul | currently can run them, but no check to see if it worked properly | 21:15 |
SlickNik | hub_cap: thanks! | 21:15 |
datsun180b | grep for OK (Skipped=) | 21:16 |
hub_cap | vipul: ahh, i think that the jclouds plugin will fix that | 21:16 |
datsun180b | at minimum | 21:16 |
vipul | yea, that's something we're trying to get added to our jruby plugin | 21:16 |
hub_cap | itll fail if the int-tests emit an error code | 21:16 |
vipul | hub_cap: Jclouds does that already? | 21:16 |
datsun180b | even better | 21:16 |
hub_cap | well u just tell jclouds to execute X on a remote system | 21:16 |
hub_cap | and if X fails, it fails the job | 21:16 |
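(A minimal sketch of the exit-code check being discussed, assuming the job shells into the freshly spawned VM; the host and test command are hypothetical placeholders, and the grep fallback is datsun180b's suggestion from above:)

```python
import subprocess
import sys

# Hypothetical values; a real gate would get these from the Jenkins job.
VM_HOST = "int-test-vm.example.org"
TEST_CMD = "/opt/reddwarf/run_int_tests.sh"

proc = subprocess.Popen(["ssh", VM_HOST, TEST_CMD],
                        stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
output = proc.communicate()[0].decode("utf-8", "replace")
print(output)

# Fail the build on a non-zero exit code, with a belt-and-braces grep
# for the nose summary line in case the test runner swallows the code.
if proc.returncode != 0 or "OK" not in output:
    sys.exit(1)
```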
vipul | hub_cap: so jenkins plugin is building a jenkins slave? or an arbitrary vm | 21:17 |
vipul | cuz i don't care for the jenkins slave.. just want a vm | 21:17 |
hub_cap | vipul: there is not much difference between them, but it can easily do arbitrary vm | 21:17 |
grapex | vipul: Does StackForge CI use the jclouds plugin and make it an official Jenkins slave or does it just create a VM without the jenkins agent? | 21:18 |
datsun180b | i like the idea of int-tests running on a machine that doesn't persist between builds and so doesn't rely on manual monkeying for tests to work | 21:18 |
vipul | they have a pool of servers grapex | 21:18 |
hub_cap | it _is_ a slave in terms of jenkins but thats convenient for making sure the node comes online etc | 21:18 |
vipul | not sure exactly how they allocate them | 21:18 |
esp1 | datsun180b: mee too. | 21:18 |
*** cloudchimp has joined #openstack-meeting-alt | 21:18 | |
SlickNik | They have home-grown scripts to allocate them… | 21:18 |
vipul | datsun180b: yep, fresh instance each time | 21:18 |
hub_cap | anyhoo, i say look into it | 21:19 |
grapex | SlickNik: home-grown instead of using the jenkins agent? | 21:19 |
hub_cap | it may or may not | 21:19 |
hub_cap | work for u | 21:19 |
grapex | I'm not pro or con Jenkins agent btw, just curious | 21:19 |
hub_cap | grapex: the ci team? ya its all homegrown :) | 21:19 |
vipul | Yea... so still a WIP.. i think we need to give this a bit more time.. | 21:19 |
vipul | BUT we're getting close | 21:19 |
vipul | last week all tests passed | 21:19 |
hub_cap | hell yes | 21:20 |
hub_cap | even if its nonvoting and it triggers and we can just look @ it b4 approving | 21:20 |
hub_cap | thats a good step 1 | 21:20 |
hub_cap | lets just get it runnin | 21:20 |
SlickNik | We get the voting part from the gerrit trigger. | 21:20 |
hub_cap | so we can stop committing code that fails | 21:20 |
vipul | yep, can't wait | 21:20 |
datsun180b | +40 | 21:20 |
cp16net | +! | 21:20 |
hub_cap | im fine w/ it always voting +1 since it doesnt tell if it passes or fails yet | 21:20 |
SlickNik | And I've set up the accounts to be able to connect to gerrit. | 21:21 |
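(For context, the voting step typically boils down to one call to gerrit's ssh interface after the run; the host, account, and change,patchset values below are placeholders, and exact flags vary by gerrit version:)

```
ssh -p 29418 reddwarf-ci@review.example.org \
    gerrit review --verified +1 --message "int-tests passed" 1234,5
```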
hub_cap | lets just get a link put up | 21:21 |
vipul | OH and need to do openID integration | 21:21 |
esp1 | yeah, we probably need to run the int-tests locally before checking in | 21:21 |
hub_cap | rather than taking it to the finish line fully working | 21:21 |
vipul | #action Vipul and SlickNik to update on status of VM Gate | 21:21 |
hub_cap | lets get a baton pass by getting it running for each iteration asap ;) | 21:21 |
SlickNik | agreed hub_cap | 21:21 |
SlickNik | We need this goodness! :) | 21:21 |
hub_cap | yup | 21:22 |
hub_cap | #action stop eating skittles jelly beans, they are making me sick | 21:22 |
hub_cap | ok we good on ci? | 21:22 |
SlickNik | thanks for actioning, Vipul | 21:22 |
vipul | i think so | 21:22 |
hub_cap | #Backups Discussion | 21:22 |
hub_cap | status first | 21:22 |
vipul | i think juice / robertmyers / SlickNik you're up | 21:23 |
*** cloudchimp has quit IRC | 21:23 | |
robertmyers | backups are good, lets do it | 21:23 |
SlickNik | So we got a sweet merge from robertmyers with his streaming/mysqldump implementation... | 21:23 |
SlickNik | to our shared work in progress repo. | 21:23 |
hub_cap | robertmyers: lol | 21:24 |
robertmyers | we need a good way to run the restore | 21:24 |
*** dhellmann has quit IRC | 21:24 | |
hub_cap | are we trying to get the backup _and_ restore in one fell swoop? | 21:24 |
hub_cap | or are we going to break it up to 2 features? | 21:24 |
SlickNik | I am working on hooking up our innobackupex restore implementation to it. (Testing it out now, really) | 21:24 |
hub_cap | since we havent nailed the api too much for the restore etc | 21:24 |
vipul | i think we have hub_cap | 21:25 |
vipul | that's the backupRef piece | 21:25 |
robertmyers | well, we should at least have a plan | 21:25 |
SlickNik | I think esmute has the API/model pieces for both ready to go. | 21:25 |
hub_cap | i agree we need a plan | 21:25 |
hub_cap | oh well then SlickNik if its that easy | 21:25 |
SlickNik | I'll let esmute comment. | 21:25 |
SlickNik | :) | 21:25 |
vipul | hub_cap: so the backupRef vs backupId discussion is related to restore | 21:25 |
esmute | yup... just need to do some renaming from ref to id | 21:25 |
vipul | where we create a new instance, and provide the backup ID | 21:25 |
vipul | that's the API | 21:25 |
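(In other words, the restore rides on the existing instance-create call. A hypothetical request body using the backupId naming agreed above; the field names are illustrative, not the final API:)

```
POST /v1.0/{tenant_id}/instances
{
    "instance": {
        "name": "restored-instance",
        "flavorRef": "7",
        "volume": {"size": 2},
        "backupId": "<backup-uuid>"
    }
}
```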
robertmyers | there may be extra things we need like to reset the password | 21:25 |
SlickNik | But the plan was to check with him and push those pieces up for gerrit review. | 21:26 |
vipul | am i missing something? | 21:26 |
juice | robertmyers: could they do that after the restore? | 21:26 |
esp1 | robertmyers: I was wondering about that. | 21:26 |
robertmyers | we could do it automatically after the restore | 21:26 |
vipul | which password? the root user? os_admin? | 21:27 |
esp1 | should they get the original password by default? | 21:27 |
robertmyers | they may want to use all the same users/etc | 21:27 |
juice | robertmyers: how would the user get the new password | 21:27 |
robertmyers | I think the root mysql user password will need to be reset | 21:27 |
esp1 | juice: it comes back in the POST response body for create | 21:27 |
vipul | so i thought we'd slide the restore piece into the current 'prepare' workflow which i believe does that after the data files are in place? | 21:28 |
hub_cap | robertmyers: yes it would | 21:28 |
esp1 | or you can do a separate call as robertmyers said | 21:28 |
hub_cap | and it shouldnt be enabled by default since new instances dont come w/ root enabled by default | 21:28 |
esp1 | got it | 21:28 |
hub_cap | <3 | 21:28 |
hub_cap | and the osadmin user/pass might be goofy | 21:29 |
hub_cap | im assuming we are pullin in the user table | 21:29 |
hub_cap | so given that, we will have a user/pass defined for that, as well as a root pass | 21:30 |
robertmyers | that is the plan, a full db backup | 21:30 |
SlickNik | Yeah, we're pullin in the user table as part of restore. | 21:30 |
hub_cap | so we might have to start in safe mode, change the passes, and then restart in regular mode after writing osadmin to the config | 21:30 |
SlickNik | What's the behavior if a db with root enabled is backed up? | 21:31 |
SlickNik | (on restore) | 21:31 |
hub_cap | id say do _not_ enable root | 21:31 |
hub_cap | no matter what | 21:31 |
robertmyers | #agreed | 21:31 |
hub_cap | cuz that needs to be recorded | 21:31 |
hub_cap | and it becomes a grey area for support | 21:31 |
SlickNik | So the restored instance is the same except with root _not_ enabled…? | 21:31 |
hub_cap | correct | 21:32 |
hub_cap | since enable root says "im giving up support" | 21:32 |
vipul | and a different password for os_admin | 21:32 |
SlickNik | gotcha vipul. | 21:33 |
hub_cap | so great work on status team | 21:33 |
hub_cap | now division of labor | 21:33 |
hub_cap | whos doin what | 21:33 |
hub_cap | cuz i aint doin JACK | 21:33 |
SlickNik | Just one clarification. | 21:33 |
hub_cap | wrt backups | 21:33 |
hub_cap | yes SlickNik sup | 21:33 |
SlickNik | So it's fine for them to backup from an instance on which they have given up support and restore to an instance for which they will have support. | 21:34 |
hub_cap | hmm thats a good point of clarification | 21:34 |
vipul | that's tricky.. | 21:34 |
vipul | cuz we don't know what they may have changed | 21:34 |
hub_cap | ok so given that | 21:34 |
hub_cap | lets re-record that root was enabled on it | 21:35 |
hub_cap | and keep the same root pass | 21:35 |
juice | sounds like the right thing to do | 21:35 |
hub_cap | really its the same database | 21:35 |
grapex | imsplitbit: for some backup strategies, doesn't the user info get skipped? | 21:35 |
hub_cap | so the only pass that should change is os_admin cuz thats ours | 21:35 |
grapex | Maybe I'm thinking back to an earlier conversation, but I remember this came up and the idea was the admin tables wouldn't be touched on restore. | 21:36 |
hub_cap | grapex: i think that was b4 mysqldump right? | 21:36 |
vipul | grapex: you can choose which tables to include in backup | 21:36 |
grapex | hub_cap: Maybe, it seems like I might be speaking crazy here. | 21:37 |
robertmyers | I think we want to do all and get the users | 21:37 |
hub_cap | i think grapex is alluding to a conversation that was had regarding an internal product here | 21:37 |
grapex | vipul: For iter 1 let's just do the full thing and remember what the root enabled setting was | 21:37 |
hub_cap | yup just make sure to re-record the root enabled setting w/ the new uuid, and leave the root pass the same | 21:37 |
grapex | hub_cap: No, it was earlier than that, talking about import / export... n/m. For iter 1 you guys are right and we should record the setting. | 21:37 |
SlickNik | grapex / vipul / hub_cap: I like grapex's iter 1 idea for now. | 21:37 |
hub_cap | update os_admin | 21:37 |
vipul | not sure about mysqldump.. but xtrabackup supports expressions to exclude/include tables | 21:37 |
hub_cap | vipul: same w/ mysqldump | 21:37 |
vipul | so then we could make a call now | 21:38 |
vipul | if we want to start with a fresh set of users each time | 21:38 |
vipul | then we just exclude it now | 21:38 |
hub_cap | naw i dont think so, it doesnt make sense to _not_ include users | 21:38 |
grapex | For imports it might- | 21:38 |
hub_cap | even if so, u still have the root enabled issue | 21:38 |
grapex | that's down the road though | 21:38 |
robertmyers | well, I think that can be set by the implementation | 21:38 |
vipul | and to get manageability on restore back | 21:38 |
SlickNik | Well, from a users perspective, if I've done a bunch of work setting up new users, I don't want to have to redo that on restore, though... | 21:38 |
vipul | yea nvm | 21:38 |
vipul | you still have other tables | 21:39 |
hub_cap | lets go w/ all or nothing as of now :) | 21:39 |
hub_cap | we are devs, we dont know any better :D | 21:39 |
hub_cap | let some mysql admins tell us we are doing it wrong later | 21:39 |
hub_cap | ;) | 21:39 |
* grapex blows horn to summon imsplitbit | 21:39 | |
vipul | we can add a flag later to support this use case | 21:39 |
robertmyers | well, right now the command is plugable | 21:39 |
hub_cap | #agreed w/ grapex iter 1 | 21:39 |
hub_cap | i say we move on | 21:39 |
robertmyers | so it can be easily changed | 21:39 |
hub_cap | tru robertmyers | 21:40 |
hub_cap | division of labor | 21:40 |
SlickNik | sounds good, thanks for clarification. | 21:40 |
hub_cap | whos doin what | 21:40 |
* robertmyers will work on all the fun parts | 21:40 | |
hub_cap | lol | 21:40 |
SlickNik | haha | 21:40 |
SlickNik | I'm working on the innobackupex pluggable part. | 21:40 |
robertmyers | right now I'm looking at the backup model to see if we can remove logic from the views | 21:41 |
robertmyers | like a test to see if a job is running | 21:42 |
vipul | anyone doing the streaming download? | 21:42 |
juice | robertmyers: what is your intention for work on the restore? | 21:42 |
*** sacharya has joined #openstack-meeting-alt | 21:42 | |
juice | vipul: I think the download is a lot more straightforward than the upload | 21:42 |
robertmyers | well, I was thinking that we create a registry, and look up the restore process from the backup type | 21:43 |
juice | since swift handles the reassembly of the pieces … or at least that is what I read in the documentation | 21:43 |
juice | robertmyers: do we do that or just mirror the configuration that is done for the backup runner? | 21:44 |
robertmyers | juice: yes if you download the manifest it pulls down all | 21:44 |
hub_cap | id say lets assume we are doing 1 backup and restore type for now | 21:44 |
SlickNik | robertmyers, do we need a registry? It makes sense to have the restore type coupled with the backup_type, right? I don't see a case where I'd backup using one type, but restore using another... | 21:44 |
hub_cap | correct, for now | 21:44 |
hub_cap | in the future we will possibly have that | 21:44 |
robertmyers | well, since we are storing the type, one might change the setting over time | 21:44 |
SlickNik | hub_cap, that was my thinking for now, at least. | 21:44 |
hub_cap | a user uploads a "backup" of their own db | 21:44 |
hub_cap | robertmyers: i dont think that we need that _now_ tho | 21:45 |
vipul | dont' we already have the use case of 2 types? | 21:45 |
hub_cap | that could happen in the future, and we will code that when we think about it happening | 21:45 |
vipul | xtrabackup and mysqldump | 21:45 |
robertmyers | No i'm talking about us changing the default | 21:45 |
grapex | I agree | 21:45 |
grapex | Let's just put in types now. | 21:46 |
hub_cap | vipul: but xtrabackup will have its own restore, and so will mysqldump right? | 21:46 |
hub_cap | grapex: types are in the db i think already | 21:46 |
hub_cap | right? | 21:46 |
vipul | right but you need to be able to look it up since it's stored in the DB entry | 21:46 |
robertmyers | so we store the backup_type in the db and use that to find the restore method | 21:46 |
hub_cap | so what will this _restore method_ be | 21:46 |
vipul | it's really a mapping.. 'xtrabackup' -> 'XtraBackupRestorer' | 21:46 |
hub_cap | grab a file from swift and stream it in? | 21:46 |
robertmyers | vipul: yes | 21:46 |
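(A short sketch of the mapping vipul describes, keyed on the backup_type stored with the backup's DB entry; the class names are hypothetical, not the actual reddwarf implementation:)

```python
class MySQLDumpRestorer(object):
    """Streams a logical dump into an already-running mysql."""
    def restore(self, location):
        raise NotImplementedError

class XtraBackupRestorer(object):
    """Unpacks the data files, runs 'prepare', then starts mysql."""
    def restore(self, location):
        raise NotImplementedError

# 'xtrabackup' -> 'XtraBackupRestorer', as discussed above.
RESTORE_REGISTRY = {
    'mysqldump': MySQLDumpRestorer,
    'xtrabackup': XtraBackupRestorer,
}

def get_restorer(backup_type):
    # backup_type comes from the backup's DB entry, so old backups still
    # restore correctly even if the configured default changes later.
    return RESTORE_REGISTRY[backup_type]()
```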
hub_cap | the xtrabackup file cant be streamed back in? | 21:47 |
hub_cap | like a mysqldump file | 21:47 |
robertmyers | well, that part will be the same | 21:47 |
SlickNik | it needs to be streamed to xbstream to decompress it. | 21:47 |
juice | this discussion is do we use configuration to more or less statically choose the restore type or do we use some component that chooses it based off of the backup type? | 21:47 |
robertmyers | but the command to run will be different | 21:47 |
SlickNik | But then it has an extra step of running the "prepare" | 21:47 |
hub_cap | im confused | 21:47 |
hub_cap | w/ xtra do u not do, 1) start reading from swift, 2) start streaming to mysql | 21:48 |
juice | download then backup | 21:48 |
hub_cap | like u do w/ mysqldump | 21:48 |
juice | download is the same for either case | 21:48 |
hub_cap | mysql < dumpfile | 21:48 |
vipul | hub_cap: no... you don't pipe it in | 21:48 |
juice | backup process may vary yes? | 21:48 |
hub_cap | thats just terrible | 21:48 |
hub_cap | :) | 21:48 |
vipul | you have to 'prepare' which is an xtrabackup format -> data files | 21:48 |
vipul | then you start mysql | 21:48 |
SlickNik | hub_cap: db consistency isn't guaranteed unless you run prepare for xtrabackup. | 21:49 |
hub_cap | ok lets go w/ what robertmyers said then.... i thought they were the same | 21:49 |
juice | same download + different restore + same startup | 21:49 |
vipul | is it the same startup? | 21:49 |
vipul | one assumes mysql is up and running | 21:49 |
vipul | other assumes it's down.. and started after restore | 21:49 |
SlickNik | Seems like it's different. I think for mysqldump, mysql already needs to be running so it can process the logical dump. | 21:50 |
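(Pulling the xtrabackup thread together, the restore flow being described looks roughly like this; the paths, flags, and swift container/object names are illustrative, and mysql stays down until the prepare step finishes:)

```python
import subprocess

DATADIR = "/var/lib/mysql"

def restore_xtrabackup(container, obj):
    # Stream the backup from swift straight into xbstream to unpack it;
    # downloading the manifest makes swift reassemble the segments.
    dl = subprocess.Popen(["swift", "download", container, obj, "-o", "-"],
                          stdout=subprocess.PIPE)
    subprocess.check_call(["xbstream", "-x", "-C", DATADIR], stdin=dl.stdout)
    dl.stdout.close()
    dl.wait()
    # The data files aren't consistent until 'prepare' (--apply-log) runs.
    subprocess.check_call(["innobackupex", "--apply-log", DATADIR])
    # Only now is it safe to bring mysql up.
    subprocess.check_call(["service", "mysql", "start"])
```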
hub_cap | ok so lets not worry bout the details now that we _know_ its different | 21:50 |
SlickNik | vipul: yeah. | 21:50 |
hub_cap | lets just assume that we need to know how to restore 2 different types, and let robertmyers and co handle it accordingly | 21:50 |
vipul | robertmyers: so where do we store the dump? | 21:51 |
vipul | assume that there is enough space in /vda? | 21:51 |
vipul | or actually you stream directly to mysql | 21:51 |
vipul | where xtrabackup streams directly to /var/lib/mysql | 21:51 |
robertmyers | good question, we may need to check for enough space | 21:51 |
robertmyers | we can see if streaming is possible | 21:52 |
vipul | mysql < swift download 'xx'? | 21:52 |
hub_cap | mysqldump shouldnt store the dump i think | 21:52 |
hub_cap | stream FTW | 21:52 |
SlickNik | I think you may be able to stream it to mysql directly. | 21:52 |
hub_cap | lets assume yes for now, and if not, solve it | 21:53 |
hub_cap | i _know_ we can for mysqldump | 21:53 |
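(And the mysqldump counterpart, streamed with nothing landing on disk, i.e. a streaming equivalent of hub_cap's 'mysql < dumpfile'; the credentials and swift names are placeholders, and mysql must already be running:)

```python
import subprocess

def restore_mysqldump(container, obj, password):
    # Downloading the manifest object yields one continuous stream...
    dl = subprocess.Popen(["swift", "download", container, obj, "-o", "-"],
                          stdout=subprocess.PIPE)
    # ...piped directly into the running mysql server.
    mysql = subprocess.Popen(["mysql", "-u", "os_admin",
                              "--password=%s" % password],
                             stdin=dl.stdout)
    dl.stdout.close()
    mysql.wait()
    dl.wait()
```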
hub_cap | moving on? | 21:54 |
SlickNik | oh, one other clarification. | 21:54 |
hub_cap | kk | 21:54 |
SlickNik | I'll probably be done looking at the xtrabackup backup piece by today/tom. | 21:55 |
SlickNik | and so juice and I can start looking at restore. | 21:55 |
hub_cap | cool | 21:55 |
hub_cap | so on to notifications | 21:55 |
hub_cap | #topic Notifications Plan | 21:55 |
*** openstack changes topic to "Notifications Plan (Meeting topic: reddwarf)" | 21:55 | |
hub_cap | vipul: batter up! | 21:55 |
SlickNik | cool, thanks. | 21:55 |
robertmyers | I updated the wiki with our notifications | 21:56 |
vipul | So thanks to robertmyers for filling out the info in the wiki | 21:56 |
vipul | i wanted to see where this is on your radar | 21:56 |
SlickNik | thanks #robertmyers. | 21:56 |
vipul | in terms of pushing up the code | 21:56 |
vipul | otherwise we can start adding it in | 21:56 |
vipul | now that we have a design for what we need to do.. | 21:56 |
vipul | also wanted to see how you emit 'exists' events | 21:56 |
robertmyers | well, we have it all written... so we should make a patch | 21:56 |
grapex | vipul: Describe an "exists" event. | 21:57 |
vipul | do we have a periodic task or something? | 21:57 |
vipul | that goes through and periodically checks every resource in the DB | 21:57 |
grapex | vipul: We do something like that once a day. | 21:57 |
robertmyers | we have a periodic task that runs | 21:57 |
hub_cap | isnt that _outside_ of reddwarf? | 21:57 |
robertmyers | yes | 21:58 |
hub_cap | would reddwarf _need_ to emit these? | 21:58 |
grapex | hub_cap: I don't think it should. | 21:58 |
vipul | Well it seems that every 'metering' implementation has exists events | 21:58 |
hub_cap | vipul: sure but something like ceilometer should do that | 21:58 |
vipul | so it seems anyone using reddwarf would have to build one | 21:58 |
hub_cap | notifications are based on events that happen | 21:59 |
grapex | vipul cp16net: What if we put the exist daemon into contrib? | 21:59 |
hub_cap | i personally disagree w/ exists events too, so it may color my opinion :) | 21:59 |
grapex | hub_cap: Honestly, I don't like them at all either. :) | 21:59 |
vipul | contrib sounds fine to me grapex | 21:59 |
hub_cap | i dont think that nova sends exists events vipul | 22:00 |
vipul | it's easy enough to write one.. i just feel that it's kinda necessary.. | 22:00 |
hub_cap | it sends events based on a status change | 22:00 |
vipul | it might not be part of core.. not sure actually | 22:00 |
grapex | vipul: Ok. We're talking to this goofy external system, but we can find a way to separate that code. If there's some synergy here I agree we should take advantage of it. | 22:00 |
hub_cap | its necessary for our billing system, but i dont think reddwarf needs to emit them. they are _very_ billing specific | 22:00 |
hub_cap | but im fine w/ it being in contrib | 22:01 |
grapex | So how does Nova emit these events? | 22:01 |
hub_cap | i dont think nova does grapex | 22:01 |
grapex | Or rather, where? In compute for each instance? | 22:01 |
vipul | nova does the same thing as reddwarf using oslo notifications | 22:01 |
grapex | events == notifications roughly | 22:01 |
grapex | to my mind at least | 22:01 |
imsplitbit | sorry guys I got pulled away but I'm back now | 22:01 |
vipul | yep agreed interchangeable | 22:01 |
hub_cap | https://wiki.openstack.org/wiki/SystemUsageData#compute.instance.exists: | 22:01 |
hub_cap | not sure if this is old or not | 22:02 |
grapex | Hmm | 22:02 |
vipul | there is a volume.exists.. so it's possible that there is something periodic | 22:02 |
grapex | vipul: We should probably talk more after the meeting | 22:02 |
hub_cap | if there are exists events _in_ the code, then im ok w/ adding them to our code | 22:02 |
hub_cap | but god i hate them | 22:03 |
vipul | grapex: sure | 22:03 |
grapex | My only concern with combining efforts is if we don't quite match we may end up complicating both public code and both our billing related efforts by adding something that doesn't quite fit. | 22:03 |
vipul | if we keep it similar to what robertmyers published i think it'll be fairly generic | 22:03 |
vipul | and we need them for our billing system :) | 22:04 |
hub_cap | https://wiki.openstack.org/wiki/NotificationEventExamples#Periodic_Notifications: | 22:04 |
grapex | vipul: OK. He or I will be doing a pull request for notifications stuff very soon | 22:04 |
hub_cap | if nova already does this grapex maybe we are duplicating effort! | 22:04 |
grapex | within the week, hopefully | 22:04 |
vipul | grapex: sweet! | 22:04 |
*** sdake_ has quit IRC | 22:04 | |
hub_cap | but just cuz there is a wiki article doesnt mean its up to date | 22:05 |
SlickNik | Nice. | 22:05 |
vipul | hub_cap: even if nova emitted events, I think reddwarf should be the 'source of truth' in terms of timestamps and whatnot | 22:05 |
imsplitbit | hub_cap: if it's on the internet it's true | 22:05 |
grapex | vipul: Agreed | 22:05 |
hub_cap | vipul: ya i didnt mean use novas notification | 22:05 |
grapex | vipul: Working backwards from Nova to figure out what a "Reddwarf instance" should be could lead to issues... | 22:05 |
SlickNik | intruenet... | 22:05 |
hub_cap | i meant that we might be able to use their code :) | 22:05 |
vipul | yep | 22:05 |
hub_cap | to emit ours | 22:05 |
vipul | right.. ok | 22:05 |
grapex | hub_cap: Sorry for the misinterpretation | 22:05 |
hub_cap | no worries | 22:06 |
vipul | cool.. i think we're good on this one.. | 22:06 |
vipul | let's get the base in.. | 22:06 |
hub_cap | def | 22:06 |
vipul | and can discuss periodic | 22:06 |
vipul | if need be | 22:06 |
hub_cap | #link https://wiki.openstack.org/wiki/NotificationEventExamples | 22:06 |
hub_cap | #link https://wiki.openstack.org/wiki/SystemUsageData#compute.instance.exists: | 22:06 |
hub_cap | just in case | 22:06 |
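(For reference, the oslo-incubator notifier pattern referred to above looks roughly like this; the module path and payload fields are approximate sketches, not reddwarf's actual code, and the 'exists' emitter would live in the contrib/periodic task just discussed:)

```python
from reddwarf.openstack.common.notifier import api as notifier_api

def emit_exists(context, instance):
    # Reddwarf stays the source of truth for timestamps and state.
    payload = {
        'instance_id': instance.id,
        'state': instance.status,
        'created_at': str(instance.created),
    }
    notifier_api.notify(context,
                        notifier_api.publisher_id('taskmanager'),
                        'reddwarf.instance.exists',
                        notifier_api.INFO,
                        payload)
```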
vipul | awesome thanks | 22:07 |
hub_cap | #action grapex to lead the effort to get the code gerrit'ed | 22:07 |
hub_cap | #dammit that doesnt give enough context | 22:07 |
vipul | there should be an #undo | 22:07 |
hub_cap | #action grapex to lead the effort to get the code gerrit'ed for notifications | 22:08 |
hub_cap | lol vipul ya | 22:08 |
SlickNik | or a #reaction | 22:08 |
hub_cap | #RootWrap | 22:08 |
hub_cap | lol | 22:08 |
hub_cap | #topic RootWrap | 22:08 |
*** openstack changes topic to "RootWrap (Meeting topic: reddwarf)" | 22:08 | |
hub_cap | so lets discuss | 22:08 |
vipul | ok this one is around Guest Agent.. | 22:08 |
vipul | where we do 'sudo this' and 'sudo that' | 22:08 |
vipul | turns out we can't really run guest agent without giving the user sudoers privileges | 22:08 |
hub_cap | fo shiz | 22:09 |
SlickNik | #link https://wiki.openstack.org/wiki/Nova/Rootwrap | 22:09 |
vipul | so we should look at doing the root wrap thing there | 22:09 |
* datsun180b listening | 22:09 | |
hub_cap | yes but we should try to get it moved to common if we do that ;) | 22:09 |
hub_cap | rather than copying code | 22:09 |
vipul | I believe it's based on config where you specify everything you can do as root.. and only those things | 22:09 |
datsun180b | sounds about right | 22:09 |
vipul | hub_cap: it's alrady in oslo | 22:09 |
SlickNik | It's already in oslo, I believe. | 22:10 |
SlickNik | We need to move to a newer version of oslo (which might be painful) though | 22:10 |
vipul | datsun180b: I think the challenge will be to define that xml.. with every possible thing we want to be able to do | 22:10 |
vipul | but otherwise probably not too bad | 22:10 |
vipul | so maybe we BP this one | 22:10 |
datsun180b | we've got a little experience doing something similar internally | 22:11 |
hub_cap | vipul: YESSSSS | 22:11 |
SlickNik | I think we should bp it. | 22:11 |
datsun180b | i don't think a BP would hurt one bit | 22:11 |
SlickNik | I hate the fact that our user can sudo indiscriminately... | 22:11 |
vipul | yup makes hardening a bit difficult | 22:11 |
datsun180b | well if we've got sudo installed on the instance what's to stop us from deploying a shaped charge of a sudoers ahead of time | 22:12 |
datsun180b | spitballing here | 22:12 |
datsun180b | aren't there provisos for exactly what commands can and can't be run by someone granted powers | 22:12 |
vipul | you mean configure the sudoers to do exactly that? | 22:13 |
robertmyers | sudoers is pretty flexible :) | 22:13 |
hub_cap | sure but so is rootwrap :) | 22:13 |
datsun180b | right, if we know exactly what user and exactly what commands will be run | 22:13 |
hub_cap | and its "known" and it makes deployments easier | 22:13 |
hub_cap | 1 line in sudoers | 22:13 |
datsun180b | what, rootwrap over sudoers? | 22:13 |
hub_cap | rest in code | 22:13 |
hub_cap | yes rootwrap over sudoers | 22:13 |
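(Concretely, the nova-style split being described looks about like this; the binary and file names are illustrative, and note the allow-list lives in INI-style .filters files rather than xml:)

```
# /etc/sudoers.d/reddwarf -- the single sudoers line:
reddwarf ALL = (root) NOPASSWD: /usr/bin/reddwarf-rootwrap /etc/reddwarf/rootwrap.conf *

# /etc/reddwarf/rootwrap.d/guestagent.filters -- everything else is config:
[Filters]
mount: CommandFilter, /bin/mount, root
mysqld_safe: CommandFilter, /usr/bin/mysqld_safe, root
```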
vipul | probably should go with the common thing.. | 22:13 |
hub_cap | since thats the way of the openstack | 22:13 |
SlickNik | yeah, but I don't think you can restrict arguments and have other restrictions. | 22:14 |
robertmyers | you can do one line in sudoers too, just a folder location | 22:14 |
*** djohnstone has quit IRC | 22:14 | |
*** sdake_ has joined #openstack-meeting-alt | 22:14 | |
hub_cap | sure u can do all this in sudoers | 22:14 |
hub_cap | and in rootwrap | 22:14 |
hub_cap | and prolly in _insert something here_ | 22:14 |
robertmyers | options! | 22:14 |
hub_cap | but since the openstack community is going w/ rootwrap, we can too | 22:15 |
* datsun180b almost mentioned apparmor | 22:15 | |
juice | hub_cap: I think rootwrap's justification is that it is easier to manage than sudoers | 22:15 |
SlickNik | lol@datsun180b | 22:15 |
hub_cap | yes juice and that its controlled in code vs by operations | 22:15 |
hub_cap | https://wiki.openstack.org/wiki/Nova/Rootwrap#Purpose | 22:16 |
vipul | +1 for root wrap | 22:16 |
hub_cap | +1 for rootwrap | 22:17 |
hub_cap | +100 for common things shared between projects | 22:17 |
vipul | So I can start a BP for this one.. hub_cap just need to get us a dummy bp | 22:17 |
SlickNik | +1 for rootwrap | 22:17 |
datsun180b | -1, the first step of rootwrap in that doc is an entry in sudoers! | 22:17 |
datsun180b | i'm not going to win but i'm voting on the record | 22:18 |
SlickNik | Ah, did we run out of dummies already? | 22:18 |
hub_cap | vipul: https://blueprints.launchpad.net/reddwarf/+spec/parappa-the-rootwrappah | 22:18 |
SlickNik | nice name | 22:18 |
grapex | hub_cap: that is the greatest blue print name of all time. | 22:18 |
hub_cap | datsun180b: read the rest of the doc then vote ;) | 22:18 |
hub_cap | :P | 22:18 |
datsun180b | it looks to be about par for nova | 22:18 |
hub_cap | there was _much_ discussion on going to rootwrap 2 versions ago | 22:19 |
hub_cap | moving on? | 22:19 |
SlickNik | Are we good with rootwrap? | 22:20 |
SlickNik | sounds good. | 22:20 |
vipul | oh yes.. i think we're good | 22:20 |
hub_cap | #topic quota tests w/ xml | 22:20 |
*** openstack changes topic to "quota tests w/ xml (Meeting topic: reddwarf)" | 22:20 | |
datsun180b | yes let's | 22:20 |
hub_cap | grapex: lets chat about that | 22:20 |
grapex | hub_cap: Sure. | 22:20 |
grapex | It looks like the "skip_if_xml" is still called for the quotas test, so that needs to be turned off. | 22:21 |
grapex | https://github.com/stackforge/reddwarf/blob/master/reddwarf/tests/api/instances.py , line 243 | 22:21 |
vipul | i thought we were fixing this a week ago | 22:21 |
vipul | maybe that was for limits | 22:21 |
vipul | esp1: weren't you the one that had a patch? | 22:22 |
grapex | vipul: Quotas was fixed, but this was still turned off. I was gone at the time when the test fixes were merged, so I didn't see this until recently... sorry. | 22:22 |
esp1 | vipul: yeah | 22:23 |
esp1 | I can retest it if you like. | 22:23 |
*** sdake_ has quit IRC | 22:23 | |
esp1 | but I'm pretty sure they were working last week. | 22:23 |
hub_cap | is the flag still in the code? | 22:24 |
grapex | esp1: The issue is in the second test run - right now if "skip_if_xml" is still called, the tests get skipped in XML mode. That one function needs to be removed. | 22:24 |
vipul | esp1: https://github.com/stackforge/reddwarf/blob/master/reddwarf/tests/api/instances.py#L243 | 22:24 |
hub_cap | ps grapex, if u do github/path/to/file.py#LXXX it will take u there | 22:24 |
hub_cap | grapex: like that :P | 22:24 |
grapex | hub_cap: Thanks for the tip... good ole github | 22:24 |
esp1 | grapex: ah ok. | 22:24 |
datsun180b | easy enough to reenable the tests, the hard part is making sure we still get all-green afterward | 22:25 |
esp1 | I think I saw a separate bug logged for xml support in quotas | 22:25 |
grapex | esp1: Sorry, I thought I'd made a bug or blueprint or something for this explaining it but I can't find it now... *sigh* | 22:25 |
grapex | esp1: Maybe that was it | 22:25 |
esp1 | grapex: I think you did. I can pull it up and put it on my todo list | 22:25 |
grapex | So in general, any new feature should work with JSON and XML out of the gate... the skip thing was a temporary thing to keep the tests from failing. | 22:25 |
hub_cap | +1billion | 22:25 |
grapex | esp1: Cool. | 22:26 |
SlickNik | I agree. +1 | 22:26 |
esp1 | np | 22:26 |
esp1 | #link https://bugs.launchpad.net/reddwarf/+bug/1150903 | 22:26 |
grapex | esp1: One more tiny issue | 22:26 |
esp1 | yep, | 22:26 |
grapex | esp1: that test needs to not be in the "GROUP_START", since other tests depend on that group but may not need quotas to work. | 22:27 |
grapex | Oh awesome, thanks for finding that. | 22:27 |
esp1 | grapex: ah ok. yeah I remember you or esmute talking about it. | 22:27 |
hub_cap | grapex: is there a doc'd bug for that? | 22:27 |
esp1 | I'll take care of that bug too. (maybe needs to be logged first) | 22:28 |
vipul | #action esp1 to re-enable quota tests w/XML support and remove them from GROUP_START | 22:28 |
hub_cap | perfect, we good on that issue? | 22:28 |
grapex | hub_cap: Looks like it. | 22:28 |
datsun180b | sounds good | 22:28 |
vipul | and delete the 'skip_if_xml' method :) | 22:28 |
vipul | all together | 22:29 |
esp1 | right | 22:29 |
SlickNik | Sounds good | 22:29 |
esp1 | sure why not. | 22:29 |
SlickNik | Thanks esp1 | 22:29 |
hub_cap | baby in hands | 22:29 |
hub_cap | sry | 22:29 |
esmute | what is the xml support? | 22:29 |
hub_cap | #topic Actions / Action Events | 22:29 |
*** openstack changes topic to "Actions / Action Events (Meeting topic: reddwarf)" | 22:29 | |
SlickNik | does it improve your spelling? :) | 22:29 |
hub_cap | lol no | 22:30 |
hub_cap | its terrible either way | 22:30 |
vipul | esmute: the python client can do both json and xml.. we run tests twice, once with xml turned on and once without | 22:30 |
hub_cap | so i thought of 3 possible ways to do actions and action_events | 22:30 |
esp1 | esmute: we support both JSON and XML in the Web Service API | 22:30 |
hub_cap | 1) pass an async response uuid back for async events and poll based on that (our dnsaas does this for some events) | 22:31 |
hub_cap | lemme find the email and paste it | 22:31 |
esmute | thanks vipul, esp1... is the conversion happening in the client? | 22:31 |
hub_cap | 1) Async callbacks - a la DNS. Send back a callback uuid that a user can query against a common interface. This is more useful for things that do not return an id, such as creating a database or a user. See [1] for more info. For items that have a uuid, it would make more sense to just use that uuid. | 22:31 |
*** amyt has quit IRC | 22:31 | |
hub_cap | 2) HEAD /whatever/resource/id to get the status of that object. This is like the old cloud servers call that would tell u what state your instance was whilst building. | 22:31 |
hub_cap | 3) NO special calls. Just provide feedback on the GET calls for a given resource. This would work for both items with a uuid, and items without (cuz a instance has a uuid and u can append a username or dbname to it). | 22:31 |
hub_cap | [1] http://docs.rackspace.com/cdns/api/v1.0/cdns-devguide/content/sync_asynch_responses.html | 22:31 |
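(A hypothetical response shape for option 3, folding failure detail into the existing GET; the statusDetail field name is only illustrative, echoing what comes up later in the discussion:)

```
GET /v1.0/{tenant_id}/instances/{uuid}

{
    "instance": {
        "id": "{uuid}",
        "status": "ACTIVE",
        "statusDetail": "last resize failed: rejected by nova"
    }
}
```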
*** robertmyers has quit IRC | 22:31 | |
esp1 | esmute: sorta I'll walk you through it tomorrow :) | 22:31 |
esmute | cool | 22:32 |
hub_cap | i think that #3 was the best option for uniformity | 22:32 |
hub_cap | does anyone feel same/different on that? | 22:32 |
vipul | wait so this is how the user determines the status of an action (like instance creation state)? | 22:32 |
vipul | 3 is what we do right | 22:33 |
vipul | today | 22:33 |
hub_cap | correct | 22:33 |
hub_cap | but it gives no status | 22:33 |
hub_cap | err | 22:33 |
hub_cap | it gives no description | 22:33 |
hub_cap | or failure info | 22:33 |
grapex | hub_cap: Do you mean uniformity between other OS apis? | 22:33 |
hub_cap | grapex: uniformity to what nova does | 22:33 |
hub_cap | and uniformity as in, itll work for actions that dont have uuids (users/dbs) | 22:33 |
grapex | hub_cap: I think we should go for #1. I know it isn't the same as nova but I think it would really help if we could query actions like that. | 22:34 |
grapex | Eventually some other project will come up with a similar idea. | 22:34 |
hub_cap | my thought is to start w/ #3 | 22:34 |
hub_cap | since itll be the least work and itll provide value | 22:34 |
vipul | in #1, the user is providing a callback (url or something)? | 22:34 |
SlickNik | So, if I understand correctly 3 is to extend the GET APIs that we have today to also provide the action description. | 22:34 |
grapex | hub_cap: That makes sense, as long as #1 is eventually possible | 22:34 |
hub_cap | essentially #3 is #1 | 22:34 |
hub_cap | but w/ less data | 22:35 |
hub_cap | thats why i was leaning toward #3 | 22:35 |
grapex | Yeah, sorry... if we have unique action IDs in the db we can eventually add that and figure out how the API should look | 22:35 |
vipul | #1 seems more like a PUSH model.. where reddwarf notifies | 22:35 |
hub_cap | the only reason u need a callback url in dnsaas is cuz they dont control the ID | 22:35 |
hub_cap | well they are all polling honestly but i think i see yer point vipul | 22:35 |
hub_cap | i honestly dislike the "callback" support | 22:36 |
hub_cap | because whats the diff between these scenarios | 22:36 |
hub_cap | 1) create instance, get a uuid for the instance | 22:36 |
hub_cap | crap let me start over | 22:36 |
hub_cap | 1) create instance, get a uuid for the instance, poll GET /instance/uuid for status | 22:37 |
hub_cap | 2) create instance, get new callback uuid and uuid for instance, poll GET /actions/callback_uuid for status | 22:37 |
hub_cap | other than 2 is more work :P | 22:37 |
vipul | that's not very clean | 22:37 |
vipul | if you're going to do 2) then we should be pushing to them.. invoking the callback | 22:37 |
*** heckj has quit IRC | 22:38 | |
hub_cap | ya and we wont be doing that anytime soon :) | 22:38 |
hub_cap | all in favor for the "Easy" route, #3 above? | 22:38 |
vipul | I | 22:38 |
vipul | Aye | 22:38 |
grapex | I'm sorry, I'm confused. | 22:38 |
hub_cap | eye | 22:38 |
vipul | eye | 22:38 |
hub_cap | lol vipul | 22:38 |
hub_cap | grapex: gohead | 22:38 |
grapex | #2 - you just mean the user would need to poll to get the status? | 22:38 |
hub_cap | correct just like dns | 22:39 |
grapex | How would 1 and 2 map to things like instance resizes? | 22:39 |
hub_cap | http://docs.rackspace.com/cdns/api/v1.0/cdns-devguide/content/sync_asynch_responses.html | 22:39 |
hub_cap | GET /instance/uuid vs GET /instance/callback_uuid_u_got_from_the_resize | 22:40 |
hub_cap | GET instance/uuid already says its in resize | 22:40 |
hub_cap | this will just give u more info if something goes wrong | 22:40 |
vipul | really the difference is GET /resourceID vs GET /JobID | 22:40 |
hub_cap | which was the original point of actions in the first place | 22:40 |
SlickNik | Honestly the only reason I'd consider 2 would be if there were actions that are mapped to things other than resources (or across multiple resources). | 22:40 |
grapex | SlickNik: That's my concern too. | 22:40 |
hub_cap | we cross that bridge when we come to it | 22:41 |
hub_cap | ^ ^ my favorite phrase :) | 22:41 |
grapex | Ok- as long as we can start with things as they are today | 22:41 |
vipul | do we want to consider everything a Job? | 22:41 |
grapex | and each action has its own unique ID | 22:41 |
hub_cap | grapex: it does/will | 22:41 |
grapex | Or maybe a "task"? | 22:41 |
grapex | That our taskmanager can "manage"? :) | 22:41 |
vipul | heh | 22:41 |
hub_cap | lol grapex | 22:42 |
SlickNik | heh | 22:42 |
grapex | Actually live up to its name finally instead of being something we should've named "reddwarf-api-thread-2" | 22:42 |
hub_cap | ehe | 22:42 |
vipul | that might have been the intention.. but we don't do a whole lot of managing task states | 22:42 |
hub_cap | reddwarf-handle-async-actions-so-the-api-can-return-data | 22:42 |
vipul | we record a task id i believe.. but that's it | 22:42 |
grapex | vipul: Yeah its pretty silly. | 22:42 |
grapex | Well I'm up for calling it action or job or task or whatever. | 22:43 |
grapex | hub_cap: Nova calls it action already, right? | 22:43 |
hub_cap | nova calls it instance_action | 22:43 |
hub_cap | cuz it only applies to instances | 22:43 |
vipul | gross | 22:43 |
grapex | Ah, while this would be for anything. | 22:43 |
hub_cap | im calling it action cuz its _any_ action | 22:43 |
hub_cap | likely for things like create user | 22:44 |
grapex | task would kind of make sense. But I'm game for any name. | 22:44 |
hub_cap | ill do instance uuid - username | 22:44 |
hub_cap | as a unique id for it | 22:44 |
vipul | backup id | 22:44 |
hub_cap | lets call it poopoohead then | 22:44 |
hub_cap | and poopoohead_actions | 22:44 |
SlickNik | lol! | 22:44 |
juice | hub_cap: hope that's not inspired by holding a baby in your hands | 22:44 |
grapex | hub_cap: I think we can agree on that. | 22:44 |
juice | sounds like a mess | 22:44 |
hub_cap | HAHA | 22:44 |
hub_cap | nice yall | 22:45 |
vipul | hub_cap does it only apply to async things? | 22:45 |
hub_cap | likely | 22:45 |
hub_cap | since sync things will return an error if it happens | 22:45 |
hub_cap | but it can still record sync things if we even have any of those | 22:45 |
hub_cap | that arent GET calls | 22:45 |
hub_cap | basically anything that mods a resource | 22:45 |
grapex | So I'm down for going route #3, which if I understand it means we really won't change the API at all but just add this stuff underneath | 22:46 |
vipul | i'm assuming we add a 'statusDetail' to the response of every API? | 22:46 |
hub_cap | to the GET calls vipul likely | 22:46 |
grapex | because it seems like this gets us really close to tracking, which probably everyone is salivating for, and we may want to change the internal DB schema a bit over time before we offer an API for it. | 22:47 |
SlickNik | only the async GETs, I thought. | 22:47 |
hub_cap | def grapex | 22:47 |
grapex | So instance get has a "statusDetail" as well? | 22:47 |
hub_cap | likely all GET's will have a status/detail | 22:48 |
grapex | So there's status and then "statusDetail"? That implies a one to one mapping with resources and actions. | 22:48 |
hub_cap | maybe not if it is not a failure | 22:48 |
grapex | Assuming "statusDetail" comes from the action info in the db. | 22:48 |
vipul | that's my understanding as well grapex | 22:48 |
hub_cap | it implies a 1x1 mapping between a resource and its present state | 22:48 |
hub_cap | it wont tel u last month your resize failed | 22:49 |
hub_cap | itll tell you your last resize failed if its in failure state | 22:49 |
grapex | hub_cap: But that data will still be in the db, right? | 22:49 |
hub_cap | fo shiiiiiiiz | 22:49 |
SlickNik | Would a flavor GET need a status/detail? | 22:49 |
vipul | prolly not since that would be static data | 22:49 |
hub_cap | correct | 22:49 |
grapex | Ok. Honestly I'm ok, although I think statusDetail might look a little gross. | 22:50 |
grapex | For instance | 22:50 |
grapex | if a resize fails | 22:50 |
grapex | today it goes to ACTIVE status and gets the old flavor id again. | 22:50 |
grapex | So in that case, would "statusDetail" be something like "resize request rejected by Nova!" or something? | 22:50 |
grapex | Because that sounds more like a "lastActionStatus" or something similar. "statusDetail" implies its currently in that status rather than being historical. | 22:51 |
hub_cap | that is correct | 22:51 |
vipul | i guess that could get interesting if you have two requests against a single resource | 22:51 |
hub_cap | let me look @ how servers accomplishes this | 22:51 |
vipul | you may lose the action you care about | 22:51 |
* hub_cap puts a cap on meeting | 22:51 | |
hub_cap | lets discuss this on irc tomorrow | 22:51 |
hub_cap | its almost 6pm in tx | 22:52 |
hub_cap | and i have to go to the bakery to get some bread b4 it closes | 22:52 |
hub_cap | i will say that there is more thinking that needs to go into this bp | 22:52 |
SlickNik | Sounds good. Want to think about this a bit more as well. | 22:52 |
hub_cap | and ill add to the bp (which is lacking now) | 22:52 |
hub_cap | SlickNik: agreed | 22:52 |
grapex | Maybe we should discuss moving the meeting an hour earlier | 22:52 |
vipul | yup good talk | 22:53 |
datsun180b | not a bad idea | 22:53 |
grapex | A few people had to go home halfway through today since it's been raining hard here | 22:53 |
vipul | a little rain gets in the way? | 22:53 |
vipul | i'm game for 1pm PST start | 22:53 |
grapex | Which in Texas happens so rarely it can be an emergency | 22:53 |
SlickNik | I'd be up with that, too | 22:53 |
datsun180b | this is austin, rain is a rarer sight than UFOs | 22:53 |
hub_cap | vipul: lol its tx | 22:53 |
hub_cap | rain scares texans | 22:53 |
SlickNik | It's probably like snow in Seattle. :) | 22:53 |
juice | or sun | 22:53 |
vipul | or sun | 22:53 |
vipul | damn | 22:54 |
vipul | you beat me to it | 22:54 |
SlickNik | nice | 22:54 |
hub_cap | HAHA | 22:54 |
SlickNik | same time | 22:54 |
vipul | this room always seems empty before us | 22:54 |
vipul | so let's make it happen for next week | 22:54 |
hub_cap | LOL we are the only people who use it ;) | 22:54 |
vipul | grapex: we need to talk about the prezo | 22:54 |
grapex | vipul: One person on the team said they needed to go home to roll up the windows on their other car which had been down for the past five years. | 22:54 |
vipul | lol | 22:55 |
hub_cap | grapex: hahahaa | 22:55 |
SlickNik | lolol | 22:55 |
hub_cap | so end meeting? | 22:55 |
datsun180b | so then next week, meeting moved to 3pm CDT/1pm PDT? | 22:55 |
grapex | Real quick | 22:55 |
grapex | We're all cool if hub_cap goes forward on actions right? | 22:55 |
grapex | iter 1 is just db work | 22:55 |
grapex | and we can raise issues during the pull request if there are any | 22:56 |
hub_cap | yup grapex thats what im gonna push for the first iter | 22:56 |
vipul | Yea, let's do it | 22:56 |
grapex | Cool. | 22:56 |
SlickNik | I'm fine with that. | 22:56 |
SlickNik | +1 | 22:56 |
grapex | I'm looking forward to it. :) | 22:56 |
SlickNik | Sweetness. | 22:56 |
hub_cap | aight then | 22:56 |
hub_cap | #endmeeting | 22:56 |
*** openstack changes topic to "OpenStack meetings (alternate) || Development in #openstack-dev || Help in #openstack" | 22:56 | |
openstack | Meeting ended Tue Apr 2 22:56:56 2013 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) | 22:56 |
openstack | Minutes: http://eavesdrop.openstack.org/meetings/reddwarf/2013/reddwarf.2013-04-02-20.59.html | 22:56 |
openstack | Minutes (text): http://eavesdrop.openstack.org/meetings/reddwarf/2013/reddwarf.2013-04-02-20.59.txt | 22:57 |
openstack | Log: http://eavesdrop.openstack.org/meetings/reddwarf/2013/reddwarf.2013-04-02-20.59.log.html | 22:57 |
SlickNik | Thanks all... | 22:57 |
esp1 | phew! | 22:57 |
hub_cap | lol | 22:57 |
hub_cap | l8r | 22:57 |
SlickNik | go get yer bread hub_cap… | 22:57 |
SlickNik | laters :) | 22:57 |
grapex | See you guys! | 22:57 |
hub_cap | i know! i gotta get it!!!! | 22:57 |
*** hub_cap has left #openstack-meeting-alt | 22:57 | |
*** esp1 has left #openstack-meeting-alt | 22:57 | |
*** vipul is now known as vipul|away | 22:59 | |
*** saurabhs has left #openstack-meeting-alt | 23:03 | |
*** jcru has quit IRC | 23:04 | |
*** vipul|away is now known as vipul | 23:04 | |
*** vipul is now known as vipul|away | 23:05 | |
*** sdake_ has joined #openstack-meeting-alt | 23:21 | |
*** dhellmann has joined #openstack-meeting-alt | 23:29 | |
*** vipul|away is now known as vipul | 23:52 |