21:59:43 <hub_cap> #startmeeting reddwarf
21:59:44 <openstack> Meeting started Tue Feb 26 21:59:43 2013 UTC.  The chair is hub_cap. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:59:45 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:59:47 <openstack> The meeting name has been set to 'reddwarf'
22:00:19 <vipul> yo
22:00:23 <vipul> stragglers
22:00:23 <kmansel> howdie
22:00:29 <djohnstone> hey
22:00:52 <hub_cap> #link http://eavesdrop.openstack.org/meetings/reddwarf/2013/reddwarf.2013-02-19-22.00.html
22:01:03 <hub_cap> yup lets give a sec for stragglahz
22:01:09 <SlickNik> w00t.
22:01:28 <hub_cap> i gotta say, this project is much better in part due to these meetings
22:01:41 <robertmyers> hello
22:01:56 <vipul> now if we could only decrease the silence at times in #reddwarf :)
22:02:01 <SlickNik> Something to look forward to on Tuesdays...
22:02:13 <hub_cap> SlickNik: tuesday is such a fun day now!
22:02:20 <hub_cap> #topic Update to Action items
22:02:32 <hub_cap> Lets go w/ SlickNik and stevedore
22:02:33 <SlickNik> I'm up first.
22:02:34 <juice> present
22:02:38 <hub_cap> lol juice
22:02:48 <kagan> past
22:02:53 <hub_cap> we did roll call already yer late juice
22:03:00 <juice> shiat
22:03:13 <juice> down to attendance office I go
22:03:18 <SlickNik> I looked into it, looks good.
22:03:48 <hub_cap> cool. we might want to submit a bp to move toward using that for extensions so we can add/remove at will
22:03:53 <SlickNik> Started working on using the current extension framework for Security Groups for now so I can get that done, will look into porting to stevedore after I'm done with coding it up.
22:03:59 <hub_cap> perfect
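For context on the stevedore idea: extensions become setuptools entry points that are discovered and loaded at runtime, so adding or removing one is a packaging change rather than a code change. A minimal sketch, assuming a hypothetical 'reddwarf.api.extensions' entry-point namespace (not the merged design):

```python
# Minimal stevedore sketch; the namespace and the way extensions
# contribute resources are assumptions for illustration only.
from stevedore import extension

def load_api_extensions():
    # Discover every plugin registered under the (hypothetical)
    # 'reddwarf.api.extensions' entry-point namespace and
    # instantiate each one.
    mgr = extension.ExtensionManager(
        namespace='reddwarf.api.extensions',
        invoke_on_load=True)
    # Each loaded extension object can then contribute its routes.
    return [ext.obj for ext in mgr]
```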
22:04:15 <hub_cap> hows the vmgate w/ the client coming SlickNik?
22:04:30 <SlickNik> Can we schedule some time to talk about it after action items?
22:04:39 <SlickNik> There's a bit to discuss on that.
22:04:40 <vipul> Yea, that one hit a snag
22:04:49 <hub_cap> sure! edit the wiki SlickNik
22:04:59 <SlickNik> Okay, I'll put that in there.
22:05:03 <SlickNik> Let's keep going.
22:05:05 <hub_cap> lets move to kagan and the heartbeat
22:05:12 <hub_cap> i saw the 'pass' in the code for that
22:05:21 <hub_cap> is that the _only_ way to fix it?
22:05:34 <kagan> what do you mean?
22:05:42 <hub_cap> well i saw 2 things in the review
22:05:46 <kagan> i think the pass was somewhere else ...
22:05:54 <hub_cap> i think the resize || shutdown is a-ok
22:06:02 <kagan> i've added the "option" to go through shutdown as well
22:06:07 <vipul> yea the fix was to change the test
22:06:23 <hub_cap> ok cool
22:06:31 <kagan> so, if the flow passes through shutdown it's good, and it's good if it doesn't as long as it reaches 'up' at the end
22:06:36 <kagan> or running or whatever ...
22:06:54 <kagan> i've got all unit tests running well now
22:07:01 <hub_cap> nice!
22:07:02 <kagan> still haven't checked it in. need to tidy a bit
22:07:10 <hub_cap> kagan: have u made sure the old image still works w/ that new flow?
22:07:18 <kagan> i have before
22:07:23 <kagan> will do so again today
22:07:25 <kagan> to be sure !
22:07:26 <hub_cap> sweet
22:07:30 <hub_cap> sure is good :)
22:07:42 <kagan> we still have some issues
22:07:44 <hub_cap> also i left some comments on your review to look @
22:07:48 <kagan> i don't want to hold the checkin for it
22:07:51 <hub_cap> sure
22:08:04 <hub_cap> so my turn
22:08:13 <hub_cap> ive looked @ constant snaps. it looks good
22:08:17 <hub_cap> the only thing i didnt like was the status
22:08:25 <vipul> status detail?
22:08:28 <hub_cap> and ive put an item to talk about it on the agenda today
22:08:29 <hub_cap> ya vipul
22:08:37 <vipul> got rid of that
22:08:40 <hub_cap> ya
22:08:49 <hub_cap> i meant in general that was my only gripe from last wk
22:08:55 <hub_cap> rest looks good
22:09:01 <vipul> k, let's discuss it during the topic
22:09:10 <hub_cap> ive got a nice way to get all our statii in reddwarf
22:09:14 <hub_cap> u like how i did that? :P
22:09:25 <vipul> heh
22:09:41 <hub_cap> on to u vipul
22:09:54 <vipul> i added a section to the wiki design for snapshots
22:10:03 <vipul> basically called out how we're going to store to swift, and require the role
22:10:14 <hub_cap> nice
22:10:16 <vipul> shoot forgot about handling deletes out of band
22:10:33 <vipul> #action vipul to update snapshots wiki to call out deletes of snapshots from swift
22:11:01 <vipul> we can go crazy later i think with the ACL stuff
22:11:02 <hub_cap> ive filed a BP for the next action item (https://blueprints.launchpad.net/reddwarf/+spec/instance-actions) it sux right now but ill fill it out
22:11:04 <vipul> i might consider removing that
22:11:07 <hub_cap> okey
22:11:26 <vipul> what is this about
22:11:38 <hub_cap> and we can chat more about that BP when its up
22:11:46 <vipul> ok
22:11:47 <hub_cap> detailed status messages for async operations vipul
22:12:14 <vipul> nice
22:12:17 <hub_cap> vipul: u said u removed the statusDetail
22:12:22 <hub_cap> so we can breeze over that
22:12:29 <vipul> From the API, yes.. we thought about not doing it now
22:12:32 <hub_cap> and ive got a good way to handle them in a generic fashion
22:12:37 <vipul> cool
22:12:39 <hub_cap> correct vipul
22:12:49 <hub_cap> jcru is out today
22:13:00 <SlickNik> #action SlickNik working on Secgroups as extension, will look at porting to stevedore after done with that.
22:13:01 <hub_cap> his sis had a baby!!
22:13:04 <SlickNik> missed that earlier.
22:13:23 <hub_cap> im not sure hes had time to work on the monster
22:13:23 <vipul> what, he can't work on FSM at the same time?
22:13:32 <hub_cap> LOL ill send him the message for u vipul :P
22:13:56 <cp16net> hah
22:14:08 <hub_cap> okey. done w/ the AI's
22:14:23 <hub_cap> lets talk quotas/limits
22:14:25 <hub_cap> #topic Quotas / Limits Updates
22:14:48 <vipul> esp1?
22:14:53 <Esmute> Well.. i just submitted a patch about 5 mins ago
22:14:53 <esp1> yep
22:14:54 <SlickNik> esmute / esp1?
22:15:05 <Esmute> this new patch has robertmyers' review feedback incorporated
22:15:06 <vipul> Esmute, is this the last one?
22:15:10 <Esmute> hopefully
22:15:33 <robertmyers> Esmute: looking at it now, looks good so far
22:15:33 <vipul> awesome, let's try to get it merged?...
22:15:45 <Esmute> i just need to sweet-talk the rax guys
22:15:51 <esp1> so with any luck after quotas gets merged I will rebase
22:16:03 <esp1> and tie the quotas stuff into the api call
22:16:25 <vipul> Yea, it could go in as two patches if it slows things down
22:16:41 <vipul> if we don't get it merged today, we can add the 'usage' stuff later no?
22:17:18 <esp1> sure
22:17:18 <hub_cap> i think im gonna still be a stickler on the f.__name__ thing, but we can discuss later
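A guess at the f.__name__ concern, for readers following along: a decorator that wraps API methods without functools.wraps makes every wrapped function report the wrapper's name, which breaks introspection and logging. A minimal sketch; check_quota is an illustrative name, not the actual patch:

```python
import functools

def check_quota(f):
    @functools.wraps(f)  # preserves f.__name__, f.__doc__, etc.
    def wrapper(*args, **kwargs):
        # ... quota enforcement would happen here ...
        return f(*args, **kwargs)
    return wrapper

@check_quota
def create_instance():
    pass

# Without @functools.wraps this would report 'wrapper':
assert create_instance.__name__ == 'create_instance'
```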
22:17:56 <hub_cap> i do want to say, awesome work by both of you, im looking forward to having these in reddwarf
22:18:02 <Esmute> ok lets discuss after the meeting
22:18:03 <hub_cap> #kudos
22:18:07 <hub_cap> def Esmute
22:18:07 <SlickNik> hear hear...
22:18:23 <hub_cap> this might not be on our agenda next week :D :D
22:18:33 <esp1> thx, hard work is done by Esmute
22:18:42 <hub_cap> as well as the next one!! (are we all done w/ this topic?)
22:18:55 <Esmute> thanks guys.. got a lot of help from the guys here too... and thanks for the reviews too
22:19:07 <hub_cap> np! we <3 reviewing code!!
22:19:16 <SlickNik> Sounds like it. What's up next?
22:19:36 <hub_cap> #topic Percona Image Updates
22:19:42 <hub_cap> kagan: go!
22:19:56 <kagan> i thought we did that before ...
22:20:07 <kagan> so what's missing for the update?
22:20:21 <vipul> https://review.openstack.org/#/c/21557/
22:20:27 <vipul> https://review.openstack.org/#/c/21261/
22:20:31 <kagan> the first "official" commit will be happening today
22:20:42 <kagan> with all unit test passing and int tests passing on mysql and percona
22:20:46 <hub_cap> kagan: not sure if there is anything left to discuss honestly
22:20:46 <kagan> however ...
22:21:00 <SlickNik> #link https://review.openstack.org/#/c/21557/
22:21:02 <hub_cap> the plot thickens
22:21:04 <SlickNik> #link https://review.openstack.org/#/c/21261/
22:21:05 <kagan> i'll make a list of things that i think we'd like to address in a second pass
22:21:12 <vipul> hub_cap, cp16net: can you guys pull those in, and run int-tests with --percona?
22:21:19 <hub_cap> vipul: ya def
22:21:28 <vipul> just get a 2nd pair of eyes
22:21:29 <hub_cap> id prefer to _not_ have a flag like that tho
22:21:33 <kagan> wouldn't you all prefer to wait for toddy's drop?
22:21:37 <hub_cap> we can discuss that now we are doin good on time
22:21:39 <cp16net> vipul: sure
22:21:41 <kagan> about the flag
22:21:43 <hub_cap> we already have a kick-start mysql
22:21:52 <hub_cap> cant we just do a kick-start mysql-percona
22:21:52 <kagan> we started with having mysql-percona
22:21:59 <kagan> and then moved to mysql --percona
22:22:07 <hub_cap> were there issues w/ that first approach?
22:22:20 <kagan> i'm not so familiar with the kick-start. what is it?
22:22:24 <vipul> the issue is we want the 'type' to be 'mysql' since that's what the API / int tests all invoke
22:22:26 <kagan> well, a bit
22:22:31 <vipul> when uploaded to glance / service_images
22:22:32 <hub_cap> kick-start is just a wrapper for a few things kagan
22:22:33 <cp16net> it just runs through the setup
22:22:36 <kagan> especially since "what vipul just said"
22:22:42 <hub_cap> vipul: ya it was looking for "mysql" right?
22:22:54 <SlickNik> kagan: it is a wrapper that just runs a few actions in sequence...
22:22:58 <hub_cap> thats the 'type' in the code
22:23:03 <cp16net> instead of running build/build-image/initialize
22:23:12 <kagan> i see
22:23:13 <cp16net> its just a shortcut if you will
22:23:13 <hub_cap> when we move to different mysql versions, the type will be changing from mysql to mysql-percona as per the blueprint
22:23:13 <vipul> right, so either we have to extract the 'mysql' portion of 'percona-mysql' prior to upload
22:23:28 <hub_cap> sure thats just for the default vipul
22:23:30 <kagan> then it should also work still, just pass the extra option
22:23:33 <vipul> Yea, that's what i was telling kagan, that we could leave it as-is now..
22:23:45 <hub_cap> technically u can already pass type: percona-mysql in the create
22:23:49 <vipul> and when the types are implemented that's when we add percona as a separate thing
22:24:01 <kagan> i didn't check cause i didn't know how, but i think i've modified the place in the code where the kickstart would pass the extra option
22:24:11 <hub_cap> https://github.com/stackforge/reddwarf/blob/master/reddwarf/instance/service.py#L176
22:24:29 <hub_cap> if u pass service_type: percona-mysql (or whatever u call it) itll find that glance image
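For illustration, what hub_cap is pointing at looks roughly like this: the create body can already carry a service_type, which service.py resolves to a registered glance image. Whether the final string is 'mysql-percona' or 'percona-mysql' is exactly the open naming question in this exchange:

```python
# Hedged example of a create request body; key names follow the
# current API, the 'mysql-percona' value is the still-undecided type.
body = {
    "instance": {
        "name": "percona-test",
        "flavorRef": "1",
        "volume": {"size": 1},
        # resolved against service_images to pick the glance image
        "service_type": "mysql-percona",
    }
}
```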
22:24:29 <vipul> right, and i think even the reddwarfclient hardcodes it
22:24:35 <hub_cap> ok thats a fail
22:24:38 <hub_cap> but we can fix it
22:24:45 <kagan> for now, if we add a new type - mysql-percona - we'd still need to strip it before uploading image to glance
22:24:49 <hub_cap> id prefer to see us use this as a different "type"
22:24:55 <vipul> right, so for now, we stick it calling it 'mysql' and when we support multiple types we break it out
22:24:58 <hub_cap> why would we need to strip it?
22:25:09 <vipul> becuase of all the touch points
22:25:12 <kagan> so things don't break later
22:25:43 <kagan> since this area is about to be modified anyway, why not leave it as is now and modify once we actually support multiple types?
22:25:45 <hub_cap> the glance image is _only_ used during the create
22:25:58 <kagan> but it also registered the type of service there
22:26:04 <hub_cap> well it doesnt need to be modified currently
22:26:07 <hub_cap> its just a failure of the client
22:26:24 <vipul> and modifications to the tests?
22:26:43 <kagan> why not leave it as is for now?
22:26:44 <vipul> no way to test the percona version
22:26:50 <hub_cap> we can set the default to a flaggable external
22:27:04 <kagan> again, what's wrong with how it is now?
22:27:13 <hub_cap> kagan: its not using the system as it should be
22:27:18 <kagan> why?
22:27:24 <hub_cap> reddwarf is built to use different service_types
22:27:31 <hub_cap> mysql-percona is a different service type
22:27:49 <hub_cap> basically u can (wiht a bit of modification to reddwarf and the client) make it so it already works for multiple types
22:27:59 <kagan> i thought it was supposed to be the same type of service - dbaas
22:28:00 <hub_cap> instead of hacking it to suit the reddwarf codebase
22:28:13 <hub_cap> id rather see us fix reddwarf than hack around it, make sense?
22:28:15 <vipul> yea, it might be a simple enough change..
22:28:19 <hub_cap> vipul: it def would be
22:28:25 <hub_cap> itll be a flag in the test conf
22:28:33 <hub_cap> and we can pass in mysql or mysql-percona depending on what we want to test
22:28:39 <hub_cap> and fix the client of course!
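A sketch of the test-conf flag being proposed, assuming the int-tests read a JSON conf with a 'service_type' key (both the path and key name are assumptions, not the final schema):

```python
import json

def load_service_type(conf_path='/etc/reddwarf/test.conf'):
    # Read the int-test config; fall back to today's hardcoded
    # behavior when the flag is absent.
    with open(conf_path) as f:
        conf = json.load(f)
    return conf.get('service_type', 'mysql')

# later, in the create test (hypothetical call site):
#   dbaas.instances.create(name, flavor, volume,
#                          service_type=load_service_type())
```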
22:28:49 <vipul> yea
22:28:56 <hub_cap> so then we wont have to mod redstack at all and we will have it already working w/ multiple types
22:28:58 <vipul> kagan, i can help  you work through those
22:29:12 <hub_cap> so itll be getting us closer to accomplishing https://wiki.openstack.org/wiki/Reddwarf-versions-types
22:29:16 <kagan> ok. so maybe it won't all be checked in today ...
22:29:19 <hub_cap> rather than going around it
22:29:29 <vipul> yea, i wanted to push this off until that BP
22:29:48 <vipul> Since currently we only really support one service type
22:29:52 <kagan> i just need a decision
22:29:52 <hub_cap> ya but is it really necessary? if its already implemented 99%?
22:30:05 <hub_cap> well the tests only support 1 type
22:30:09 <hub_cap> cuz we hardcoded it
22:30:13 <SlickNik> Question, is it possible to run integration tests with different service types? (I think Vipul may have hinted at this earlier)
22:30:18 <SlickNik> If not, it's probably something we should bug and fix...
22:30:19 <vipul> not yet
22:30:20 <hub_cap> but hell we only supported one apt version of mysql before kagan fixed it
22:30:29 <SlickNik> that's true.
22:30:43 <vipul> ok let's do it.. it's only a couple of things we'd have to change
22:30:53 <hub_cap> vipul: lets take like ~30 min to look @ it
22:30:57 <hub_cap> if its like 3 days more work we will can it for now
22:31:06 <hub_cap> i dont want to give more work for the sake of giving more work
22:31:07 <vipul> works for me
22:31:10 <hub_cap> but i do want it to work cleanly
22:31:18 <hub_cap> <3
22:31:22 <kagan> so bottom line is?
22:31:33 <kagan> that we'll have the bottom line later?
22:31:42 <esp1> lol
22:31:43 <hub_cap> vipul and i will look @ it for a bit and we will get back to u kagan
22:31:45 <vipul> kagan: hub_cap and I will look at what's required and make a call later
22:31:51 <hub_cap> my inkling is that its not a hard change
22:32:02 <kagan> ok
22:32:17 <kagan> let me know if you want me in on that discussion.
22:32:25 <hub_cap> kagan: of course we do!!!!
22:32:30 <kagan> might save time for vipul to bring me up to speed later …  ;)
22:32:44 <vipul> yep, let's start looking at the code right after this meeting
22:32:50 <kagan> ok
22:32:53 <hub_cap> #action vipul hub_cap and kagan to look into making the service_type code work properly w/ percona image
22:32:55 <vipul> we can talk on #reddwarf afterwards
22:32:57 <hub_cap> def vipul
22:32:58 <kagan> anything else on percona stuff ?
22:33:12 <hub_cap> nope were good thx guys
22:33:14 <hub_cap> #topic Instance Actions
22:33:26 <hub_cap> ok so this sucker is in nova today
22:33:38 <hub_cap> #link https://github.com/openstack/nova/commit/250230b32364b1e36ae6d62ec4bb8c3285c59401
22:33:41 <SlickNik> What is this?
22:33:55 <hub_cap> its a tracker for calls essentially
22:34:08 <hub_cap> it records state of what happens to async events
22:34:15 <hub_cap> and if they failed it records what went wrong
22:34:28 <hub_cap> its currently tied to /servers in nova
22:34:32 <cp16net> so this will be on api calls? or taskmanager calls?
22:34:42 <vipul> sounds like taskmanager
22:34:44 <hub_cap> api / taskmanager / etc...
22:34:54 <hub_cap> but most of the work is TM so itll be there more
22:35:02 <hub_cap> it will be in the api to report back
22:35:06 <hub_cap> when u ask for a status
22:35:19 <hub_cap> but id like to genericise it to be used for all of our async actions
22:35:26 <vipul> this doesn't seem to be geared towards an end user, more of an admin
22:35:28 <hub_cap> instances, dbs, users, snaps, etc etc...
22:35:30 <SlickNik> Is this  mostly for troubleshooting, or metrics, or both?
22:35:32 <hub_cap> vipul: its both
22:35:36 <cp16net> is that a different api path or will it tie in to the status given?
22:35:50 <hub_cap> if there is a failure, the failure (short msg) can be returned in GET calls
22:36:09 <hub_cap> but its also good for admins to see whats going on, what failed and where, etc...
22:36:10 <cp16net> so sounds like a v2 api feature?
22:36:25 <SlickNik> Ah, I see...
22:37:04 <hub_cap> https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/models.py#L963
22:37:08 <vipul> yea looks cool... looks like an event collector
22:37:16 <hub_cap> its exactly that vipul
22:37:18 <jrodom> i really like that concept - right now users have no capability to understand what happened and why when something fails
22:37:30 <hub_cap> w/ the option to pass back a shorthand message
22:37:38 <hub_cap> we have a lot of issues w/ what jrodom is talkin about
22:37:48 <esp1> I think I saw some of this in the rate limit stuff in nova.  it might follow different rate limit rules than the non-/server routes.
22:37:51 <hub_cap> if a resize fails it goes back to active, ram is the same as it was before, and .......
22:37:56 <vipul> yea it's sort of a blackbox (at least the task manager)
22:38:08 <hub_cap> vipul: more like a hazy drunken grey box
22:38:17 <vipul> funny you mention that test
22:38:24 <hub_cap> that occasionally screams at its significant other (engineering/ops)
22:38:25 <vipul> me and kagan were struggling with figuring that one out
22:38:32 <SlickNik> lol, poor resize.
22:38:33 <hub_cap> LOL exactly vipul ;)
22:38:43 <Esmute> so once you invoke an api, you are able to query on the status?
22:38:53 <hub_cap> Esmute: ya
22:38:53 <Esmute> is this what it's about?
22:39:02 <hub_cap> im not sure if we will do something  like HEAD /snapshots ...
22:39:07 <vipul> i guess another thing, would this work for other resources.. i see instance-events, but what about snapshot events, etc
22:39:09 <hub_cap> or if we will just put it in the GET calls
22:39:15 <hub_cap> vipul: thats the thing
22:39:24 <hub_cap> its currently tied specifically to that in nova
22:39:27 <hub_cap> but im going to make it not so
22:39:52 <vipul> cool, maybe just another decorator or something
22:39:56 <Esmute> so will it do roll-over? if the task fails somewhere or close to the end..
22:39:57 <esp1> can't a lot of this info be found in web server logs?
22:39:59 <hub_cap> ya it shouldnt be more than that vipul
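A rough sketch of the "just another decorator" idea: record an event when an async action starts, finishes, or fails, keeping a short failure message that a GET could surface. The in-memory _events list stands in for a DB table like nova's instance_actions; all names here are illustrative, not reddwarf code:

```python
import functools
import traceback

_events = []  # stand-in for an instance_actions-style table

def record_action(action_name):
    def decorator(f):
        @functools.wraps(f)
        def wrapper(*args, **kwargs):
            _events.append((action_name, 'start', None))
            try:
                result = f(*args, **kwargs)
            except Exception:
                # keep a short message for admins / status detail
                _events.append((action_name, 'error',
                                traceback.format_exc(limit=1)))
                raise
            _events.append((action_name, 'finish', None))
            return result
        return wrapper
    return decorator

@record_action('resize')
def resize_instance(instance_id, new_flavor_id):
    pass  # the real taskmanager work would happen here
```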
22:40:23 <esp1> perhaps not as easy to query?
22:40:23 <hub_cap> esp1: lol ya... let me introduce u to our engineers at the summit
22:40:34 <SlickNik> esp1, yeah but there's no way of getting to that via the API.
22:40:48 <esp1> no, that's okay.  I will take your word for it.
22:40:49 <vipul> esp1: yea you probably could aggregate that, although i don't know if any tool will help you aggregate just events for an instance with the way it's logged now
22:41:00 <hub_cap> yup and its not always easy to pore thru 8G of logs for 1 instance...
22:41:30 <hub_cap> there is also a request-id that nova uses
22:42:01 <hub_cap> and id like to revisit that since its not working in reddwarf, even tho its in the common code...
22:42:05 <hub_cap> but thats a diff topic
22:42:08 <esp1> I see.  we had a separate system that did what vipul is describing at my last gig. (log collection and a separate api)
22:42:22 <juice> is request-id a correlation id
22:42:30 <hub_cap> juice: ya
22:42:33 <juice> so you can tie together all the requests?
22:42:38 <juice> that would be nice and handy
22:42:39 <hub_cap> no not exactly
22:42:46 <hub_cap> each separate api call can be tracked thru all the nova systems
22:42:48 <hub_cap> via its request
22:42:52 <vipul> gives you something to grep on
22:42:53 <hub_cap> when u call create, a request-id is generated
22:42:59 <hub_cap> what vipul said, ya
22:43:07 <cp16net> but get get also created a new request-id
22:43:17 <juice> that's what I was trying to convey
22:43:19 <cp16net> so its nice and unique
22:43:26 <SlickNik> makes it slightly easier to track the job life cycle through the components...
22:43:28 <hub_cap> reddwarf has that ability, and i hope we can pass that down to nova to use as well, but again, thats kinda OT, sorry for taking us down this rabbit hole
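In miniature, the request-id pattern being described: each API call mints one id that rides along in the request context and in every log line, so one instance's story can be grepped out of the logs. The 'req-' prefix mirrors nova's convention; the context class is a simplified stand-in:

```python
import uuid

class RequestContext(object):
    def __init__(self, request_id=None):
        # one id per API call, threaded through logs and rpc
        self.request_id = request_id or 'req-' + str(uuid.uuid4())

ctx = RequestContext()
print(ctx.request_id)  # e.g. 'req-6f0c...', grep-able across services
```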
22:43:28 <vipul> get get?
22:43:41 <hub_cap> u know vipul
22:43:42 <hub_cap> the get get
22:43:54 <vipul> oh yes!
22:43:59 <vipul> THAT
22:44:16 <cp16net> lol
22:44:21 <hub_cap> i just told cp16net that he wrote that
22:44:22 <SlickNik> THAT THAT?
22:44:25 <hub_cap> yup yup
22:44:31 <cp16net> i didnt didnt understand
22:44:35 <hub_cap> LOL
22:44:36 <hub_cap> OT!!!
22:44:44 <vipul> i thought he meant a get request generating another internal get request
22:44:49 <vipul> but maybe i read too much into it
22:44:51 <vipul> lol
22:44:51 <hub_cap> hahah lol
22:45:01 <hub_cap> get inception
22:45:05 <hub_cap> sry GET /inception
22:45:07 <vipul> nice
22:45:19 <hub_cap> ok so does everyone see whats going on w/ the instance actons?
22:45:24 <hub_cap> soon to be renamed to something more generic
22:45:27 <vipul> yep, +1
22:45:31 <robertmyers> +1
22:45:32 <SlickNik> yeah, me likey.
22:45:34 <hub_cap> woot
22:45:41 <hub_cap> ill begin work on it shortly
22:45:42 <juice> can you start over?
22:45:43 <hub_cap> it might take a bit tho
22:45:48 <hub_cap> LOL juice
22:45:49 <juice> jk
22:46:03 <hub_cap> im going to try to push this to oslo
22:46:07 <hub_cap> what generic bits i can
22:46:16 <hub_cap> ill first get it working w/ reddwarf to some extent
22:46:17 <vipul> your'e brave
22:46:25 <hub_cap> hey ive mod'd common like 4 or 5 times :D
22:46:33 <cp16net> gl...
22:46:33 <hub_cap> err oslo
22:46:51 <vipul> while you're there
22:47:00 <vipul> how about implementing 'queue delete' :)
22:47:12 <SlickNik> lol
22:47:31 <cp16net> lol
22:47:32 <vipul> i know cp16net added that as a bug.. i don't think they ever fixed it
22:47:37 <cp16net> nope
22:47:42 <hub_cap> sure why not!!
22:47:44 <vipul> massive leak once we deploy this thing
22:48:03 <hub_cap> trust me vipul we are already hemorrhaging
22:48:14 <cp16net> #link https://bugs.launchpad.net/oslo/+bug/1097482
22:48:19 <hub_cap> yes i googled the spelling for that
22:48:20 <cp16net> there ya go
22:48:36 <hub_cap> we could easily make the calls ourselves honestly
22:48:36 <vipul> TMI
22:48:40 <hub_cap> lol vipul
22:48:51 <hub_cap> but ill work w/ markmc to see how we can get that in oslo
22:48:53 <vipul> you mean put it in reddwarf?
22:48:57 <vipul> yea that's an option
22:48:58 <hub_cap> ya vipul if we have to
22:49:13 <hub_cap> the code's all there in the common stuff, we just need to string a few things together and call .delete()
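A hedged sketch of the stopgap hub_cap describes: reach under the rpc layer with kombu (which the openstack-common rpc code used) and delete the orphaned topic queue directly. The queue/exchange naming convention here is an assumption, not the oslo fix:

```python
from kombu import Connection, Exchange, Queue

def delete_topic_queue(amqp_url, topic):
    with Connection(amqp_url) as conn:
        channel = conn.channel()
        queue = Queue(topic,
                      exchange=Exchange(topic, type='topic'),
                      routing_key=topic,
                      channel=channel)
        queue.delete()  # issues queue.delete on the broker

# e.g. delete_topic_queue('amqp://guest:guest@localhost//',
#                         'guestagent.<instance-id>')  # name assumed
```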
22:49:39 <hub_cap> so moving on?
22:49:42 <vipul> cool
22:49:43 <vipul> yep
22:49:46 <hub_cap> #topic Snapshots Blueprint Feedback
22:50:06 <vipul> Ok, so status detail is out of the API
22:50:18 <vipul> we will return simple status SUCCESS/FAIL etc
22:50:27 <hub_cap> i, for one, welcome our new snapshot overlords
22:50:53 <SlickNik> And we have instance actions if we want to get some sort of status detail, right?
22:51:03 <vipul> the ACL piece we can punt on that.. that's really if we want to set up the Swift Containers with ACL so certain users could PUT and others could only read
22:51:15 <SlickNik> (or at least that's the plan...)
22:51:27 <vipul> Yea, we'll add the details piece when we have a solution for everything else
22:51:27 <hub_cap> SlickNik: correct
22:51:46 <vipul> but if you guys think it's good to go as-is now... we're going to start implementing next week
22:52:02 <vipul> Oh.. one other thing
22:52:04 <SlickNik> sounds good Vipul.
22:52:05 <vipul> snapshots could take a while
22:52:12 <vipul> with the guest being single threaded
22:52:17 <hub_cap> eww
22:52:21 <hub_cap> that could be painful
22:52:24 <vipul> do we want to spawn a thread to perform it?
22:52:40 <hub_cap> if it makes sense (and im sure it does) we should
22:52:51 <SlickNik> I think we should.
22:52:52 <robertmyers> or a subprocess
22:52:54 <hub_cap> the reporting should be fun w/ >1 thread thats for sure
22:53:03 <vipul> subprocess doesn't block right?
22:53:06 * hub_cap defers to robertmyers for correct pythonical-ness
22:53:26 <vipul> i just want to make sure we can serve other requests..
22:53:31 <robertmyers> it can block if you need it to
22:54:03 <vipul> ok, but you can also have it just make a callback?
22:54:27 <robertmyers> yes
22:54:43 <vipul> alright, let's do that
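A minimal sketch of what was just agreed: run the dump in a subprocess so the single-threaded guest keeps serving requests, and fire a callback when it completes. The command and callback here are placeholders, not the final design:

```python
import subprocess
import threading

def snapshot_async(cmd, on_done):
    proc = subprocess.Popen(cmd, stdout=subprocess.PIPE,
                            stderr=subprocess.PIPE)

    def waiter():
        out, err = proc.communicate()  # blocks this helper thread only
        on_done(proc.returncode, out, err)

    threading.Thread(target=waiter, daemon=True).start()
    return proc  # callers can poll proc.poll() for status reporting

# e.g. snapshot_async(['mysqldump', '--all-databases'],
#                     lambda rc, out, err: print('snapshot rc:', rc))
```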
22:54:51 <vipul> do we deploy Swift in Redstack?
22:54:59 <hub_cap> not yet :D
22:55:03 <vipul> may need to toggle a few things to get all that up
22:55:18 <hub_cap> def
22:55:28 <hub_cap> at first when u said its gonna take a while
22:55:33 <hub_cap> i thought u meant to finish the feature
22:55:37 <vipul> heh
22:55:39 <hub_cap> and im beginning to think u meant both
22:55:47 <vipul> yes :)
22:55:50 <hub_cap> hehe
22:55:53 <SlickNik> heh
22:55:56 <vipul> appreciate help on this :)
22:56:08 <hub_cap> robertmyers is your man vipul, hes a pymaniac
22:56:15 <hub_cap> err maniac.py
22:56:20 <robertmyers> nice
22:56:29 <vipul> sweet, maybe robertmyers can help us with the agent work
22:56:45 <SlickNik> py-ro(bert)-maniac even :)
22:56:51 <hub_cap> i can def talk to the powers that be
22:56:54 <hub_cap> #agreed SlickNik
22:57:14 <robertmyers> I'm working on backups on our side too
22:57:27 <robertmyers> err snapshots
22:57:30 <vipul> LVM?
22:57:37 <vipul> or are you guys doing xtrabackup
22:57:47 <robertmyers> that is the plan
22:57:57 <robertmyers> xtrabackup
22:58:03 <hub_cap> xxxtra
22:58:07 <cp16net> read all about it?
22:58:09 <hub_cap> its dirty
22:58:12 <hub_cap> LOL cp16net
22:58:17 <vipul> ok.. well you guys could just do it in stackforge :D
22:58:46 <robertmyers> the problem is our guest is in c++ ;)
22:58:55 <vipul> oh yea crap
22:59:11 <hub_cap> phone... brb
22:59:28 <SlickNik> oh, yeah. There's that...
22:59:46 <vipul> ok well we'll lean on you for the guestagent work
22:59:53 <robertmyers> but all of the api stuff we can help on
23:00:11 <robertmyers> i'll be here for that too
23:00:15 <vipul> ok sounds good
23:00:48 * robertmyers wants to replace our guest agent with python
23:01:15 <vipul> you should.. the current implementation can't be that bad..
23:01:17 <yidclare> do it!
23:01:24 <yidclare> :)
23:01:31 <robertmyers> well, it is the memory overhead
23:01:44 <robertmyers> on openvz that is holding us back
23:01:44 <vipul> do you guys really have customers running tinys?
23:01:54 <robertmyers> yes, lots of them
23:02:08 <vipul> interesting.. maybe it makes sense then
23:02:44 <vipul> is hub_cap coming back?
23:02:49 <SlickNik> not sure.
23:02:54 <cp16net> hes walking about
23:03:03 <vipul> Ok we can move on to the next topic..
23:03:08 <SlickNik> We can move on to the next item, but it's what he was working on. :)
23:03:09 <vipul> anything else on Snapshots?
23:03:31 <cp16net> nope
23:03:33 <vipul> #topic API Spec Update
23:03:42 <vipul> that didn't work
23:03:44 <SlickNik> nope, just excited to get it going.
23:03:47 <cp16net> lol only he can do it
23:03:51 <hub_cap> #topic API Spec update
23:03:53 <hub_cap> I HAVE THE POWER
23:03:55 <SlickNik> he's back!
23:04:01 <SlickNik> lol
23:04:15 <hub_cap> sorry getting furniture delivered
23:04:19 <hub_cap> had to take a call
23:04:22 <hub_cap> so spec
23:04:36 * lifeless wants to start quoting Labyrinth now
23:04:46 <SlickNik> No worries. Nice work on the API docs, btw!
23:04:56 <hub_cap> lifeless: only if you can do the glass crystal ball trick
23:05:11 <hub_cap> behold the power of markdown
23:05:14 <hub_cap> #link https://github.com/stackforge/database-api/blob/master/openstack-database-api/src/markdown/database-api-v1.md
23:05:49 <vipul> nice..
23:06:05 <juice> ooh la la
23:06:08 <juice> very pretty
23:06:22 <hub_cap> yup to be cheesy and to satisfy lifeless' need, the markdown has no power over me
23:06:26 <esp1> cool
23:06:48 * lifeless snorts
23:06:59 <SlickNik> also, who/what is demouser? :)
23:07:13 <vipul> demo-user?
23:07:14 <vipul> heh
23:07:21 <hub_cap> demouser?
23:07:39 <hub_cap> LOL our example generator used that
23:07:41 <SlickNik> oh! parse fail…I read that as de-mouser…!?!
23:07:47 <hub_cap> its a spanish mouse
23:07:52 <cp16net> lol
23:07:55 <vipul> lol
23:08:01 <cp16net> isnt that the kid from matrix?
23:08:05 <cp16net> mouse
23:08:05 <juice> or french
23:08:11 <hub_cap> lol
23:08:48 <hub_cap> so since thats done id like to see us push to this for api changes/adds
23:09:01 <hub_cap> as in, if you are going to work on snapshots (hint hint) lets get that api sorted out up front
23:09:24 <vipul> yep that'll work
23:09:49 <hub_cap> cool. we can discuss changes to it via the review process
23:09:49 <SlickNik> I can push changes to this for the SecGroups extension.
23:09:54 <hub_cap> perfect SlickNik
23:10:03 <hub_cap> itll be a bit more permanent than the wiki
23:10:11 <SlickNik> sounds good.
23:10:28 <hub_cap> okey anything else to chat about wrt api?
23:10:40 <SlickNik> nope; good by me.
23:10:49 <hub_cap> #topic Open Discussion
23:10:53 <SlickNik> wait wait
23:11:00 <SlickNik> refresh the page :P
23:11:00 <vipul> you skipped
23:11:12 <SlickNik> CI discussion.
23:11:24 <hub_cap> wat?!?!?
23:11:27 <hub_cap> oyaaaaa
23:11:33 <hub_cap> #topic CI discussion
23:11:41 <hub_cap> lets chat ci
23:12:26 <SlickNik> So it turns out that the OpenStack CI Jenkins is already overloaded with jobs.
23:12:51 <hub_cap> sounds like an excellent reason to create a stackforge jenkins :P
23:12:53 <SlickNik> And they don't want to have Stackforge projects' Jenkins jobs adding to this load.
23:13:20 <vipul> precisely what the plan is i think
23:13:23 <hub_cap> sweet
23:13:31 <cp16net> sounds like a great idea
23:13:33 <vipul> if mordred is around.. maybe he can chime in..
23:13:40 <hub_cap> ya im sure itll take some time tho
23:13:42 <mordred> hey
23:13:47 <SlickNik> Hey mordred.
23:13:52 <hub_cap> and they will need to split the codebase for config or key them to specific machines?
23:13:53 <vipul> stackforge ci mordred
23:13:57 <mordred> it's not quite creating a stackforge jenkins
23:14:01 <hub_cap> 3 peas in a pod vipul?
23:14:05 <mordred> because we had one of those, and it was a bit of a nightmare
23:14:07 <mordred> BUT
23:14:12 <hub_cap> drumroll
23:14:15 <SlickNik> We were just talking about the CI plan for stackforge projects.
23:14:28 <mordred> it is about using existing jenkins resources that HP and Rackspace have (hp at first, because I'm there and it's easy)
23:14:37 <mordred> to respond to and drive things in OpenStack Gerrit
23:14:46 <mordred> it's gonna be sexy - you're all going to love it
23:14:56 <mordred> and if you don't, I'll just have you killed
23:15:08 <vipul> just more VMs?
23:15:13 <hub_cap> werent u going to eventually do that anyway mordred?
23:15:14 <mordred> nope.
23:15:29 <mordred> oops. two different things ...
23:15:31 * hub_cap assumes mordred answered my question
23:15:35 <hub_cap> :P
23:15:50 * hub_cap chants "i will live another day!!"
23:15:50 <mordred> vipul: more VMs don't help, because it's the jenkins master that's overloaded, because jenkins can't handle openstack's load
23:16:09 <mordred> hub_cap: kinda - this is going to be a great example of partial third-party testing support
23:16:16 <mordred> we've had docs on it for a while
23:16:19 <mordred> but nobody has stepped up
23:16:33 <hub_cap> mordred: i was referring to the kill us statement ;)
23:16:34 <mordred> I'm hoping if _I_ step up with an example, someone else will too :)
23:16:38 <mordred> hub_cap: oh!
23:16:40 <mordred> that
23:16:42 <mordred> yes
23:16:44 <mordred> I mean, really
23:16:56 * hub_cap knew it was too good to be true
23:16:57 <mordred> don't open the door if you don't know the person outside
23:17:05 <hub_cap> spoken like a true texan
23:17:19 <cp16net> i'll open the door with my shotgun :-P
23:17:44 <vipul> now that's real texan
23:17:49 <hub_cap> #agreed
23:18:07 <hub_cap> okey so that sounds like a good plan mordred, maybe we can help once u pony up a bit and we see an example
23:18:17 <mordred> hub_cap: ++
23:18:54 <vipul> we'll have to live with trusting that we ran int-tests for a little while longer :)
23:19:00 <hub_cap> yup vipul
23:19:06 <hub_cap> sux but thats ok
23:19:09 <cp16net> :-/
23:19:11 <cp16net> yeah
23:19:15 <hub_cap> in int-tests we trust
23:19:22 <hub_cap> rimshot!
23:19:25 <hub_cap> sounds like weve covered ci, open discussion time (this time for real)
23:19:55 <hub_cap> #topic open discussion
23:20:24 <vipul> i can't think of anything else i missed
23:20:24 <cp16net> *crickets*
23:20:25 <SlickNik> That was pretty much all I had to cover...
23:20:28 <SlickNik> good from me.
23:20:49 <cp16net> yeah i am good.
23:21:01 <vipul> feels like we didn't do too well on accumulating action items
23:21:05 <vipul> we'll see
23:21:10 <cp16net> time to go home
23:21:34 <SlickNik> #action SlickNik to add Security groups extension API to API docs.
23:21:38 <SlickNik> here's one I missed.
23:21:49 <cp16net> #action cp16net go home
23:21:57 <SlickNik> heh
23:21:58 <cp16net> i'm going to work on that one right now :-P
23:22:06 <esp1> see ya cp16net :)
23:22:07 <cp16net> see yall
23:22:09 <SlickNik> later cp16net :)
23:22:11 <vipul> later
23:22:20 <vipul> ok that's a wrap
23:22:50 <hub_cap> #endmeeting