21:59:43 #startmeeting reddwarf
21:59:44 Meeting started Tue Feb 26 21:59:43 2013 UTC. The chair is hub_cap. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:59:45 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:59:47 The meeting name has been set to 'reddwarf'
22:00:19 yo
22:00:23 stragglers
22:00:23 howdie
22:00:29 hey
22:00:52 #link http://eavesdrop.openstack.org/meetings/reddwarf/2013/reddwarf.2013-02-19-22.00.html
22:01:03 yup lets give a sec for stragglahz
22:01:09 w00t.
22:01:28 i gotta say, this project is much better in part due to these meetings
22:01:41 hello
22:01:56 now if we could only decrease the silence at times in #reddwarf :)
22:02:01 Something to look forward to on Tuesdays...
22:02:13 SlickNik: tuesday is such a fun day now!
22:02:20 #topic Update to Action items
22:02:32 Lets go w/ SlickNik and stevedore
22:02:33 I'm up first.
22:02:34 present
22:02:38 lol juice
22:02:48 past
22:02:53 we did roll call already yer late juice
22:03:00 shiat
22:03:13 down to attendance office I go
22:03:18 I looked into it, looks good.
22:03:48 cool. we might want to submit a bp to move toward using that for extensions so we can add/remove at will
22:03:53 Started working on using the current extension framework for Security Groups for now so I can get that done, will look into porting to stevedore after I'm done with coding it up.
22:03:59 perfect
22:04:15 hows teh vmgate w/ the client coming SlickNik?
22:04:30 Can we schedule some time to talk about it after action items.
22:04:39 There's a bit to discuss on that.
22:04:40 Yea, that one hit a snag
22:04:49 sure! edit the wiki SlickNik
22:04:59 Okay, I'll put that in there.
22:05:03 Let's keep going.
22:05:05 lets move to kagan and the heartbeat
22:05:12 i saw the 'pass' in the code for that
22:05:21 is that the _only_ way to fix it?
22:05:34 what do you mean?
22:05:42 well i saw 2 things in the review
22:05:46 i think the pass was somewhere else ...
22:05:54 i think the resize || shutdown is a-ok
22:06:02 i've added the "option" to go through shutdown as well
22:06:07 yea the fix was to change the test
22:06:23 ok cool
22:06:31 so, if the flow passes through shutdown it's good, and it's good if it doesn't as long as it reaches up at the end
22:06:36 or running or whatever ...
22:06:54 i've got all unit test running well now
22:07:01 nice!
22:07:02 still not checked it. need to tidy a bit
22:07:10 kagan: have u made sure the old image still works w/ that new flow?
22:07:18 i have before
22:07:23 will do so again today
22:07:25 to be sure !
22:07:26 sweet
22:07:30 sure is good :)
22:07:42 we still have some issues
22:07:44 also i left some comments on your review to look @
22:07:48 i don't want to hold the checkin for it
22:07:51 sure
22:08:04 so my turn
22:08:13 ive looked @ constant snaps. it llooks good
22:08:17 the only thing i didnt like was the status
22:08:25 status detail?
22:08:28 and ive put a item to talk about it on the agenda today
22:08:29 ya vipul
22:08:37 got rid of that
22:08:40 ya
22:08:49 i meant in genral that was my only gripe from last wk
22:08:55 rest looks good
22:09:01 k, let's discuss it during the topic
22:09:10 ive got a nice way to get all our statii in reddwarf
22:09:14 u like how i did that? :P
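[Editor's note: the stevedore action item above is about loading API extensions (like the Security Groups one) as plugins instead of a hardcoded list. A minimal sketch of what that could look like with stevedore's ExtensionManager; the entry-point namespace is invented for illustration and nothing registers it yet.]

```python
from stevedore import extension


def load_api_extensions():
    # Discover extension plugins published as setuptools entry points under a
    # namespace (hypothetical name below), instantiating each one on load, so
    # extensions can be added/removed by installing or removing a package.
    mgr = extension.ExtensionManager(
        namespace='reddwarf.api.extensions',  # illustrative namespace only
        invoke_on_load=True,
    )
    # Map extension name -> loaded plugin object; empty until something
    # actually registers entry points under the namespace.
    return dict((ext.name, ext.obj) for ext in mgr)
```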
22:09:25 heh
22:09:41 on to u vipul
22:09:54 i added a section ot the wiki design for snapshots
22:10:03 bascially called out how we're going ot store to swift, and require the role
22:10:14 nice
22:10:16 shoot forgot about handling deletes out of band
22:10:33 #action vipul to update snapshots wiki to call our deletes of snapshots from swift
22:11:01 we can go crazy later i htink with the ACL stuff
22:11:02 ive filed a BP for the next action item (https://blueprints.launchpad.net/reddwarf/+spec/instance-actions) it sux right now but ill fill it out
22:11:04 i might consider removing that
22:11:07 okey
22:11:26 what is this about
22:11:38 and we can chat more about that BP when its up
22:11:46 ok
22:11:47 detailed status messages for async operations vipul
22:12:14 nice
22:12:17 vipul: u said u removed the statusDetail
22:12:22 so we can breeze over that
22:12:29 From the API, yes.. we thought about not doing it now
22:12:32 and ive got a good way to handle them in generically
22:12:36 in a generic fashipn
22:12:37 cool
22:12:39 correct vipul
22:12:49 jcru is out today
22:13:00 #action SlickNik working on Secgroups as extension, will look at porting to stevedore after done with that.
22:13:01 his sis had a baby!!
22:13:04 missed that earlier.
22:13:23 im not sure hes had time to work on the monster
22:13:23 what, he can't work on FSM at the same time?
22:13:32 LOL ill send him the message for u vipul :P
22:13:56 hah
22:14:08 okey. done w/ the AI's
22:14:23 lets talk quotas/limits
22:14:25 #topic Quotas / Limits Updates
22:14:48 esp1?
22:14:53 Well.. i just submitted a patch about 5 mins ago
22:14:53 yep
22:14:54 esmute / esp1?
22:15:05 this new patch has the review incorporated by robertmyers
22:15:06 Esmute, is this the last one?
22:15:10 hopefully
22:15:33 Esmute: looking at it now, looks good so far
22:15:33 awesome, let's try to get it merged?...
22:15:45 i just need to sweet-talk the rax guys
22:15:51 so with any luck after quota's gets merged I will rebase
22:16:03 and tie the quota's stuff into the api call
22:16:25 Yea, it could go in as two patches if it slows things down
22:16:41 if we don't it merged today, we can add the 'usage' stuff later no?
22:17:18 sure
22:17:18 i think im gonna still be a stickler on the f.__name__ thing, but we can discuss later
22:17:56 i do want to say, awesome work by both of you, im looking forward to having these in reddwarf
22:18:02 ok lets discuss after the meeting
22:18:03 #kudos
22:18:07 def Esmute
22:18:07 hear hear...
22:18:23 this might not be on our agenda next week :D :D
22:18:33 thx, hard work is done by Esmute
22:18:42 as well as the next one!! (are we all done w/ this topic?)
22:18:55 thanks guys.. got a lot of helps from the guys here too... and thanks for the reviews too
22:19:07 np! we <3 reviewing code!!
22:19:16 Sounds like it. What's up next?
22:19:36 #topic Percona Image Updates
22:19:42 kagan: go!
22:19:56 i thought we did that before ...
22:20:07 so what's missing for the update?
22:20:21 https://review.openstack.org/#/c/21557/
22:20:27 https://review.openstack.org/#/c/21261/
22:20:31 the first "official" commit will be happening today
22:20:42 with all unit test passing and int tests passing on mysql and percona
22:20:46 kagan: not sure if there is anythign left to discuss honestly
22:20:46 however ...
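[Editor's note: the quotas discussion above mentions tying quota checks into the API call and a lingering "f.__name__" review nit. A hedged, self-contained sketch of a quota-checking decorator; the limits, counters, and resource names are illustrative, and the f.__name__ reading (that functools.wraps should preserve the wrapped function's name) is the editor's interpretation, not the reviewer's words.]

```python
import functools

QUOTAS = {'instances': 5}   # illustrative per-tenant limit
USAGE = {'instances': 0}    # illustrative usage counter


class QuotaExceeded(Exception):
    pass


def check_quota(resource, delta=1):
    # Reserve quota for `resource` before running the decorated API action.
    # functools.wraps keeps f.__name__ (and docstring) on the wrapper.
    def decorator(f):
        @functools.wraps(f)
        def wrapper(*args, **kwargs):
            if USAGE[resource] + delta > QUOTAS[resource]:
                raise QuotaExceeded(resource)
            USAGE[resource] += delta
            return f(*args, **kwargs)
        return wrapper
    return decorator


@check_quota('instances')
def create_instance(name):
    # stand-in for the real API action
    return {'name': name, 'status': 'BUILD'}
```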
22:21:00 #link https://review.openstack.org/#/c/21557/
22:21:02 the plot thickens
22:21:04 #link https://review.openstack.org/#/c/21261/
22:21:05 there will be a list of things i'll make that i think we'd like to address in a second pass
22:21:12 hub_cap, cp16net: can you guys pull those in, and run int-tests with --percona?
22:21:19 vipul: ya def
22:21:28 just get a 2nd pair of eyes
22:21:29 id prefer to _not_ have a flag like that tho
22:21:33 wouldn't you all prefer to wait for toddy's drop?
22:21:37 we can discuss that now we are doin good on time
22:21:39 vipul: sure
22:21:41 about the flag
22:21:43 we already have a kick-start mysql
22:21:52 cant we just do a kick-start mysql-percona
22:21:52 we started with having mysql-percona
22:21:59 and then moved to mysql --percona
22:22:07 was there issues w/ that first approach?
22:22:20 i'm not so familiar with the kick-start. what is it?
22:22:24 the issue is we want the 'type' to be 'mysql' since that's what the API / int tests all invoke
22:22:26 well, a bit
22:22:31 when uploaded to glance / service_imags
22:22:32 kick-start is just a wrapper for a few things kagan
22:22:33 it just runs through the setup
22:22:36 especially since "what vipul just said"
22:22:42 vipul: ya it wa slooking for "mysql" right?
22:22:45 *was
22:22:54 kagan: it is a wrapper that just runs a few actions in sequence...
22:22:58 thats the 'type' in the code
22:23:03 instead of running build/build-image/intialize
22:23:12 i see
22:23:13 its just a shortcut if you will
22:23:13 when we move to different mysql versions, the type will be changing from mysql to mysql-percona as per the blueprint
22:23:13 right, so either we have to extract the 'mysql' portion of 'percona-mysql' prior to upload
22:23:28 sure thats just for the default vipul
22:23:30 then it should also work still, just pass the extra option
22:23:33 Yea, that's what i was telling kagan, that we could leave it as-is now..
22:23:45 technically u can already pass type: percona-mysql in the create
22:23:49 and when the types are implemented that's when we add percona as a separet thing
22:24:01 i didn't check cause i didn't know how but i think i've modified the place in code where the kickstart would pass the extra option
22:24:11 https://github.com/stackforge/reddwarf/blob/master/reddwarf/instance/service.py#L176
22:24:29 if u pass service_type: percona-mysql (or whatever u call it) itll find that glance image
22:24:29 right, and i think event eh reddwarfclient hardcodes it
22:24:35 ok thats a fail
22:24:38 but we can fix it
22:24:45 for now, if we add a new type - mysql-percona - we'd still need to strip it before uploading image to glance
22:24:49 id prefer to see us use this as a different "type"
22:24:55 right, so for now, we stick it calling it 'mysql' and when we support multiple types we break it out
22:24:58 why would we need to strip it?
22:25:09 becuase of all the touch points
22:25:12 so things doesn't break later
22:25:43 since this area is about to be modified anyway, why not leave it as is now and modify once we actually support multiple types?
22:25:45 the glance image is _only_ used during the create
22:25:58 but it also registered the type of service there
22:26:04 well it doesnt need to be modified currently
22:26:07 its just a fialure of the client
22:26:09 *Failure
22:26:24 and modifications to the tests?
22:26:43 why not to leave it as is for now?
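[Editor's note: the point being argued above is that the create call already resolves the glance image from the requested service_type (see the service.py link), so a Percona image should just be a second registered type rather than a --percona flag. A small illustrative stand-in for that per-type lookup; the dict and image IDs replace the real service_images table and are not the actual code.]

```python
# Stand-in for the service_images registry that redstack/kick-start populates;
# the real code looks the row up in the database keyed by service_name.
SERVICE_IMAGES = {
    'mysql': 'glance-image-uuid-for-mysql',            # illustrative IDs
    'mysql-percona': 'glance-image-uuid-for-percona',
}


def image_for(service_type='mysql'):
    # The create call boots whatever image is registered for the requested
    # service_type, so supporting a second type mostly means registering a
    # second entry and letting callers pass the type through.
    try:
        return SERVICE_IMAGES[service_type]
    except KeyError:
        raise ValueError('no image registered for %r' % service_type)
```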
22:26:44 no way to test the percona version
22:26:50 we can set the default to a flaggable external
22:27:04 again, what's wrong with how it's now?
22:27:13 kagan: its not using the system as it should be
22:27:18 why?
22:27:24 reddwarf is built to use different service_types
22:27:31 mysql-percona is a different service type
22:27:49 basically u can (wiht a bit of modification to reddwarf and the client) make it so it already works for multiple types
22:27:59 i thought it was supposed to be the same type of service - dbaas
22:28:00 instead of hacking it to suit the reddwarf codebase
22:28:13 id rather see us fix reddwarf than hack around it, make sense?
22:28:15 yea, it might be a simple enough change..
22:28:19 vipul: it def would be
22:28:25 itll be a flag in teh test conf
22:28:33 and we can pass in mysql or mysql-percona depending on what we want to test
22:28:39 and fix the client of course!
22:28:49 yea
22:28:56 so then we wont have to mod redstack at all and we will have it already working w/ multiple types
22:28:58 kagan, i can help you work through those
22:29:12 so itll be getting us closer to accomplishing https://wiki.openstack.org/wiki/Reddwarf-versions-types
22:29:16 ok. so maybe it won't all be checked in today ...
22:29:19 rather than going around it
22:29:29 yea, i wanted to push this off until that BP
22:29:48 Since currently we only really support one service type
22:29:52 i just need a decision
22:29:52 ya but is it really necesary? if its already implemented 99%?
22:30:05 well the tests only support 1 type
22:30:09 cuz we hardcoded it
22:30:13 Question, is it possible to run integration tests with different service types? (I think Vipul may have hinted at this earlier)
22:30:18 If not, it's probably something we should bug and fix...
22:30:19 not yet
22:30:20 but hell we only supported one apt version of mysql before kagan fixed it
22:30:29 that's true.
22:30:43 ok let's do it.. it's only a couple of things we'd have to change
22:30:53 vipul: lets take like ~30 min to look @ it
22:30:57 if its like 3 days mreo work we will can it for now
22:31:06 i dont want to give more work for the sake of giving more work
22:31:07 works for me
22:31:10 but i do want it to work cleanly
22:31:18 <3
22:31:22 so bottom line is?
22:31:33 that we'll have the bottom line later?
22:31:42 lol
22:31:43 vipul and i will look @ it for a bit and we will get back to u kagan
22:31:45 kagan: hub_cap and I will look at what's required and make a call later
22:31:51 my inkkling is that its not a hard change
22:32:02 ok
22:32:17 let me know if you want me in in that discussion.
22:32:25 kagan: of course we do!!!!
22:32:30 might save time for vipul to bring me up to speed later … ;)
22:32:44 yep, let's start looking at the code right after this meeting
22:32:50 ok
22:32:53 #action vipul hub_cap and kagan to look into making the service_type code work properly w/ percona image
22:32:55 we cna talk on #reddwarf afterwards
22:32:57 def vipul
22:32:58 anything else on percona stuff ?
22:33:12 nope were good thx guys
22:33:14 #topic Instance Actions
22:33:26 ok so this sucker is in nova today
22:33:38 #link https://github.com/openstack/nova/commit/250230b32364b1e36ae6d62ec4bb8c3285c59401
22:33:41 What is this?
22:33:55 its a tracker for calls essentially
22:34:08 it records state of what happens to async events
22:34:15 and if they failed it records what went wrong
22:34:28 its currently tied to /servers in nova
22:34:32 so this will be on api calls? ot taskmanager call?
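[Editor's note: the instance-actions feature just introduced above is described as a tracker that records what happened to async calls and why they failed. A rough, self-contained sketch of how such a tracker could wrap taskmanager-style methods as a decorator; the in-memory ACTIONS list stands in for nova's instance_actions table, and all names are illustrative rather than the actual reddwarf/nova implementation.]

```python
import functools

ACTIONS = []  # stand-in for an instance_actions-style table


def record_action(name):
    # Record start/finish/error of an async action so a failure leaves behind
    # a queryable trail (short message included) instead of vanishing into
    # the taskmanager logs.
    def decorator(f):
        @functools.wraps(f)
        def wrapper(context, instance_id, *args, **kwargs):
            entry = {'instance_id': instance_id, 'action': name,
                     'state': 'started', 'message': None}
            ACTIONS.append(entry)
            try:
                result = f(context, instance_id, *args, **kwargs)
            except Exception as exc:
                entry['state'] = 'error'
                entry['message'] = str(exc)
                raise
            entry['state'] = 'finished'
            return result
        return wrapper
    return decorator


@record_action('resize')
def resize_instance(context, instance_id, new_flavor):
    # stand-in for the real taskmanager work
    return {'instance_id': instance_id, 'flavor': new_flavor}
```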
22:34:42 sounds like taskmanager
22:34:44 api / taskmanager / etc...
22:34:54 but most of the work is TM so itll be there more
22:35:02 it will be in the api to report back
22:35:06 when u ask for a status
22:35:19 but id like to genericise it be used for all of our async actions
22:35:26 this doesn't seem to be geared towards an end user, more of an admin
22:35:28 instances, dbs, users, snaps, etc etc...
22:35:30 Is this mostly for troubleshooting, or metrics, or both?
22:35:32 vipul: its both
22:35:36 is that a different api path or will it tie in to the status given?
22:35:50 if there is a failure, the failure (short msg) can be returned in GET calls
22:36:09 but its also good for admins to see whats going on, what failed and where, etc...
22:36:10 so sounds like a v2 api feature?
22:36:25 Ah, I see...
22:37:04 https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/models.py#L963
22:37:08 yea looks cool... looks like an event collector
22:37:16 its exactl that vipul
22:37:18 i really like that concept - right now users have no capability to understand what happened and why when something fails
22:37:30 w/ the option to pass back a shorthand message
22:37:38 we have a lot of issues w/ what jrodom is talkin about
22:37:48 I think I saw some of this in the rate limit stuff in nova. it might follow different rate limit rules that the non- /server routes.
22:37:51 if a resize fails it goes back to active, ram is the same as it was before, and .......
22:37:56 yea it's sort of a blackbox (at least the task maanger)
22:38:08 vipul: more like a hazy drunken grey box
22:38:17 funny you mention that test
22:38:24 that ocassionally screams at its significant other (engineering/ops)
22:38:25 me and kagan were struggling with figuring that one out
22:38:32 lol, poor resize.
22:38:33 LOL exactly vipul ;)
22:38:43 so once you invoke an api, you are able to query on the status?
22:38:53 Esmute: ya
22:38:53 is this what's about?
22:39:02 im not sure if we will do something like HEAD /snapshots ...
22:39:07 i guess another thing, would this work for other resources.. i see instance-events, but what about snapshot events, etc
22:39:09 or if we will jsut put it in the GET calls
22:39:15 vipul: thats the thing
22:39:24 its currently tied specifically to that in nova
22:39:27 but im going to make it not so
22:39:52 cool, may be just another decorator or something
22:39:56 so will it do roll-over? if the task fails somewhere or close to the end..
22:39:57 can't a lot of this info be found in web server logs?
22:39:59 ya it shouldnt be more than that vipul
22:40:23 perhaps not as easy to query?
22:40:23 esp1: lol ya... let me introduce u to our engineers at the summit
22:40:34 esp1, yeah but there's no way of getting to that via the API.
22:40:48 no, that's okay. I will take your word for it.
22:40:49 esp1: yea you probalby could aggregate that, although i don't know if any tool will help you aggregate just events for an instance with the way it's logged now
22:41:00 yup and its not always easy to pour thru 8G of logs for 1 instance...
22:41:30 there is also a request-id that nova uses
22:42:01 and id like to revisit that since its not working in reddwarf, even tho its in the common code...
22:42:05 but thats a diff topic
22:42:08 I see. we had a separate system that did what vipul is describing at my last gig. (log collection and a separate api)
22:42:22 is request-id a correlation id
22:42:30 juice: ya
22:42:33 so you can tie together all the requests?
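[Editor's note: the request-id tangent above is about giving every API call its own correlation id so ops can grep one token across services instead of 8G of logs. A stripped-down illustration of the idea; this is not the actual openstack-common implementation, just the shape of it.]

```python
import logging
import uuid

LOG = logging.getLogger(__name__)


def new_request_id():
    # One id per incoming API call, in the same spirit as nova's req-... ids.
    return 'req-' + str(uuid.uuid4())


class RequestContext(object):
    # Minimal context: the request_id rides along on every log line and every
    # downstream call made on behalf of this request, so the whole job
    # lifecycle can be tied together with a single grep.
    def __init__(self, tenant, request_id=None):
        self.tenant = tenant
        self.request_id = request_id or new_request_id()


ctx = RequestContext(tenant='demo-tenant')
LOG.info('%s create instance requested by %s', ctx.request_id, ctx.tenant)
```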
22:42:38 that would be nice and handy
22:42:39 no not exactly
22:42:46 each separate api call can be tracked thru all the nova systems
22:42:48 via its request
22:42:52 gives you something to grep on
22:42:53 when u call create, a request-id is generated
22:42:59 what vipul said, ya
22:43:07 but get get also created a new request-id
22:43:17 that's what I was trying to convey
22:43:19 so its nice and unique
22:43:26 makes it slightly easier to track the job life cycle through the components...
22:43:28 reddwarf has that ability, and i hope we can pass that down to nova to use as well, but again, thats kinda OT,sorry for taking us down this rabbit hole
22:43:28 get get?
22:43:41 u know vipul
22:43:42 the get get
22:43:54 oh yes!
22:43:59 THAT
22:44:16 lol
22:44:21 i just told cp16net that he wrote that
22:44:22 THAT THAT?
22:44:25 yup yup
22:44:31 i didnt didnt understand
22:44:35 LOL
22:44:36 OT!!!
22:44:44 i thoguth he meant a get request generating another internal get request
22:44:49 but maybe read too much into it
22:44:51 lol
22:44:51 hahah lol
22:45:01 get inception
22:45:05 sry GET /inception
22:45:07 nice
22:45:19 ok so does everyone see whats going on w/ the instance actons?
22:45:24 soon to be renamed to something more generic
22:45:27 yep, +1
22:45:31 +1
22:45:32 yeah, me likey.
22:45:34 woot
22:45:41 ill begin work on it shortly
22:45:42 can you start over?
22:45:43 it might take a bit tho
22:45:48 LOL juice
22:45:49 jk
22:46:03 im going to try to push tihs to oslo
22:46:07 what generic bits i can
22:46:16 ill first get it working w/ reddwarf to some extent
22:46:17 your'e brave
22:46:25 hey ive mod'd common like 4 or 5 times :D
22:46:33 gl...
22:46:33 err oslo
22:46:51 while you're there
22:47:00 how about implementing 'queue delete' :)
22:47:12 lol
22:47:31 lol
22:47:32 i know cp16net added that as a bug.. i don't think they ever fixed it
22:47:37 nope
22:47:42 sure why not!!
22:47:44 massive leak once we deploy this thing
22:48:03 trust me vipul we are already hemorrhaging
22:48:14 #link https://bugs.launchpad.net/oslo/+bug/1097482
22:48:19 yes i googled the spelling for that
22:48:20 there ya go
22:48:36 we could easily make the calls ourself honestly
22:48:36 TMI
22:48:40 lol vipul
22:48:51 but ill work w/ markmc to see how we can get that in oslo
22:48:53 you mean put it in reddwarf?
22:48:57 yea that's an option
22:48:58 ya vipul if we have to
22:49:13 the codes all there in the common stuff, we just need to string a few things together and call .delete()
22:49:39 so moving on?
22:49:42 cool
22:49:43 yep
22:49:46 #topic Snapshots Blueprint Feedback
22:50:06 Ok, so status detail is out of the API
22:50:18 we will return simple status SUCCESS/FAIL etc
22:50:27 i, for one, welcome our new snapshot overlords
22:50:53 And we have instance actions if we want to get some sort of status detail, right?
22:51:03 the ACL piece we can punt on that.. that's really if we want to set up the Swift Containers with ACL so certain users could PUt and other could only read
22:51:15 (or at least that's the plan...)
22:51:27 Yea, we'll add the deatils piece when we have a solution for everything else
22:51:27 SlickNik: correct
22:51:46 but if you guys think it's good to go as-is now... we're going to start implementing next week
22:52:02 Oh.. one other thing
22:52:04 sounds good Vipul.
22:52:05 snapshots could take a while
22:52:12 with the guest bieng single threaded
22:52:17 eww
22:52:21 that could be painful
22:52:24 do we want to spawn a thread to perform?
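[Editor's note: the question just above is whether the single-threaded guest should spawn a thread or subprocess for long-running snapshots and report back via a callback. A minimal sketch of that idea, assuming a plain worker thread plus subprocess; the command, callback name, and innobackupex example are placeholders, not the guest agent's actual API, and the real guest would likely use its own spawn/periodic-task machinery.]

```python
import subprocess
import threading


def run_snapshot_async(cmd, on_done):
    # Run the dump out-of-process so the guest keeps serving other requests,
    # then hand success/failure to a callback that can update snapshot status.
    def _wait():
        proc = subprocess.Popen(cmd)
        on_done(proc.wait() == 0)

    worker = threading.Thread(target=_wait)
    worker.daemon = True
    worker.start()
    return worker


# e.g. (xtrabackup-style invocation shown only as an illustration):
# run_snapshot_async(['innobackupex', '/var/backup'], mark_snapshot_complete)
```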
22:52:40 if it makes sense (and im sure it does) we should
22:52:51 I think we should.
22:52:52 or a subprocess
22:52:54 the reporting should be fun w/ >1 thread thats for sure
22:53:03 subprocess doesn't block right?
22:53:06 * hub_cap defers to robertmyers for correct pythonical-ness
22:53:26 i just want to make sure we can serve other requests..
22:53:31 it can block if you need it to
22:54:03 ok, but you can also have it just make a callback?
22:54:27 yes
22:54:43 alright, let's do that
22:54:51 do we deploy Swift in Redstack?
22:54:59 not yet :D
22:55:03 may need to toggle a few things to get all that up
22:55:18 def
22:55:28 at first when u said its gonna take a while
22:55:33 i thought u meant to finsih the feature
22:55:37 heh
22:55:39 and im beginning to think u meant both
22:55:47 yes :)
22:55:50 hehe
22:55:53 heh
22:55:56 appreciate help on this :)
22:56:08 robertmyers: is your man vipul hes a pymaniac
22:56:15 err maniac.py
22:56:20 nice
22:56:29 sweet, maybe robermeyers can help us with the agent work
22:56:45 py-ro(bert)-maniac even :)
22:56:51 i can def talk to the powers that be
22:56:54 #agreed SlickNik
22:57:14 I'm working on backups on our side too
22:57:27 err snapshots
22:57:30 LVM?
22:57:37 or are you guys doing xtrabackup
22:57:47 that is the plan
22:57:57 xtrabackup
22:58:03 xxxtra
22:58:07 read all about it?
22:58:09 its dirty
22:58:12 LOL cp16net
22:58:17 ok.. well you guys could just do it in stackforge :D
22:58:46 the problem is our guest is in c++ ;)
22:58:55 oh yea cap
22:58:58 crap
22:59:11 phone... brb
22:59:28 oh, yeah. There's that...
22:59:46 ok well we'll lean on you for the guestagent work
22:59:53 but all of the api stuff we can help on
23:00:11 i'll be here for that too
23:00:15 ok sound good
23:00:48 * robertmyers wants to replace our guest agent with python
23:01:15 you should.. the current implemetnation can't be that bad..
23:01:17 do it!
23:01:24 :)
23:01:31 well, it is the memory overhead
23:01:44 on openvz that is holding us back
23:01:44 do you guys really have customers running tinys?
23:01:54 yes, lots of them
23:02:08 interesting.. maybe it makes sense then
23:02:44 is hub_cap coming back?
23:02:49 not sure.
23:02:54 hes walking about
23:03:03 Ok we can move on to the next topic..
23:03:08 We can move on to the next item, but it's what he was working on. :)
23:03:09 anything else on Snapshots?
23:03:31 nope
23:03:33 #topic API Spec Update
23:03:42 that didn't work
23:03:44 nope, just excited to get it going.
23:03:47 lol only he can do it
23:03:51 #topic API Spec update
23:03:53 I HAVE THE POWER
23:03:55 he's back!
23:04:01 lol
23:04:15 sorry getting furniture delivered
23:04:19 had to take a call
23:04:22 so spe
23:04:23 c
23:04:36 * lifeless wants to start quoting Labyrinth now
23:04:46 No worries. Nice work on the API docs, btw!
23:04:56 lifeless: only if you can do the glass crystal ball trick
23:05:11 behold the power of markdown
23:05:14 #link https://github.com/stackforge/database-api/blob/master/openstack-database-api/src/markdown/database-api-v1.md
23:05:49 nice..
23:06:05 ooh la la
23:06:08 very pretty
23:06:22 yup to be cheesy and to satisfy lifeless' need, the markdown has no power over me
23:06:26 cool
23:06:48 * lifeless snorts
23:06:59 also, who/what is demouser? :)
23:07:13 demo-user?
23:07:14 heh
23:07:21 demouser?
23:07:39 LOL our example generator used that
23:07:41 oh! parse fail…I read that as de-mouser…!?!
23:07:47 its a spanish mouse
23:07:52 lol
23:07:55 lol
23:08:01 isnt that the kid from matrix?
23:08:05 mouse
23:08:05 or french
23:08:11 lol
23:08:48 so since thats done id lke to see us push to this for api changes/adds
23:09:01 as in, if you are going ot work on snapshots (hint hint) lets get that api sorted out up front
23:09:24 yep that'll work
23:09:49 cool. we can discuss changes to it via the review process
23:09:49 I can push changes to this for the SecGroups extension.
23:09:54 perfect SlickNik
23:10:03 itll be a bit more permanent than the wiki
23:10:11 sounds good.
23:10:28 okey anything else to chat about wrt api?
23:10:40 nope; good by me.
23:10:49 #topic Open Discussion
23:10:53 wait wait
23:11:00 refresh the page :P
23:11:00 you skipped
23:11:12 CI discussion.
23:11:24 wat?!?!?
23:11:27 oyaaaaa
23:11:33 #topic CI discussion
23:11:41 lets chat ci
23:12:26 So turns out that Openstack CI Jenkins is already overloaded with Jenkins jobs.
23:12:51 sounds liek a excellent reason to create a stackforge jenkins :P
23:12:53 And they don't want to have Stackforge projects' Jenkins jobs adding to this load.
23:13:20 precisely what the plan is i think
23:13:23 sweet
23:13:31 sounds like a great idea
23:13:33 if mordred is around.. maybe he can chime in..
23:13:40 ya im sure itll take some time tho
23:13:42 hey
23:13:47 Hey mordred.
23:13:52 and they will need to split the codebase for config or key them to specific machines?
23:13:53 stackforge ci mordred
23:13:57 it's not quite creating a stackforge jenkins
23:14:01 3 peas in a pod vipul?
23:14:05 because we had one of those, and it was a bit of a nightmare
23:14:07 BUT
23:14:12 drumroll
23:14:15 We were just talking about the CI plan for stackforge projects.
23:14:28 it is about using existing jenkins resources that HP and Rackspace have (hp at first, because I'm there and it's easy)
23:14:37 to respond to and drive things in OpenStack Gerrit
23:14:46 it's gonna be sexy - you're all going to love it
23:14:56 and if you don't, I'll just have you killed
23:15:08 just more VMs?
23:15:13 werent u going to eventually do that anyway mordred?
23:15:14 nope.
23:15:29 oops. two different htings ...
23:15:31 * hub_cap assumes mordred answerd my question
23:15:35 :P
23:15:50 * hub_cap chants "i will live another day!!"
23:15:50 vipul: more vpns doesn't help, becuase it's the jenkins master that's overloaded, because jenkins can't handle openstack's load
23:16:09 hub_cap: kinda - this is going to be a great example of partial third-party testing support
23:16:16 we've had docs on it for a while
23:16:19 but nobody has stepped up
23:16:33 mordred: i was referring ot the kill us statement ;)
23:16:34 I'm hoping if _I_ step up with an example, soeone else will do :)
23:16:38 hub_cap: oh!
23:16:40 that
23:16:42 yes
23:16:44 I mean, really
23:16:56 * hub_cap knew it was too good to be true
23:16:57 don't open the door if you don't know the person outside
23:17:05 spoken like a true texan
23:17:19 i'll open the door with my shotgun :-P
23:17:44 now that's real texan
23:17:49 #agreed
23:18:07 okey so that sounds like a good plan mordred, maybe we can help once u pony up a bit and we see an example
23:18:17 hub_cap: ++
23:18:54 we'll have to live with trustign that we ran int-tests for a little while longer :)
23:19:00 yup vipul
23:19:06 sux but thats ok
23:19:09 :-/
23:19:11 yeah
23:19:15 in int-tests we trust
23:19:22 rimshot!
23:19:25 sounds like weve covered ci, open discussion time (this time for real)
23:19:55 #topic open discussion
23:20:24 i can't think of anything else i missed
23:20:24 *crickets*
23:20:25 That was pretty much all I had to cover...
23:20:28 good from me.
23:20:49 yeah i am good.
23:21:01 feels like we didn't do too well on accumiulating action items
23:21:05 we'll see
23:21:10 time to go home
23:21:25 #action Add Security groups extension API to API docs.
23:21:34 #action SlickNik to add Security groups extension API to API docs.
23:21:38 here's one I missed.
23:21:49 #action cp16net go home
23:21:57 heh
23:21:58 i'm going to work on that one right now :-P
23:22:06 see ya cp16net :)
23:22:07 see yall
23:22:09 later cp16net :)
23:22:11 later
23:22:20 ok that's a wrap
23:22:50 #endmeeting