22:00:12 #startmeeting reddwarf
22:00:13 Meeting started Tue Dec 4 22:00:12 2012 UTC. The chair is vipul. Information about MeetBot at http://wiki.debian.org/MeetBot.
22:00:14 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
22:00:16 The meeting name has been set to 'reddwarf'
22:00:26 hah look at that horizon took our spot :P
22:00:42 #Info Agenda: http://wiki.openstack.org/Meetings/RedDwarfMeeting
22:00:45 but thats fine by me sinc ethey are closer to core
22:00:47 hola
22:01:10 hey
22:01:15 hi
22:01:21 at least no one will compain if we go over time in this room
22:01:26 exactly
22:01:29 nice
22:01:37 i didnt know there was an alt room
22:01:40 its new
22:01:40 Heh, hopefully we won't have to though.
22:01:42 what happened to the other room?
22:01:49 (go over time I mean)
22:01:50 kicke us out
22:01:50 #info no one watches the mailing list but hub_cap
22:01:51 :P
22:02:15 its new
22:02:16 so did you seem the blueprint on the mailing list, then hub_cap? :P
22:02:22 #topic Action Item Review
22:02:25 haah yours yes SlickNik
22:02:33 the devstack integration one
22:02:38 :D
22:03:10 * cp16net shrugs...
22:03:13 SlickNick dkehn updates on Devstack Integration?
22:03:15 speaking of that SlickNik, hows that going
22:03:16 :P
22:03:28 It's still going pretty well.
22:03:47 dkehn and I got the install/config pieces in
22:03:57 k , working the init_redwarf protion, the configure & build complete
22:04:14 just delaing what is necessary and what is not
22:04:15 We're still hitting some issues with setting up the repo, and building the image.
22:04:28 THen we'll tackle bringing up the guest.
22:04:53 We've got a separate repo at https://github.com/dkehn/devstack
22:05:08 That we're pushing out intermediate fixes to.
22:05:15 So it's still a work in progress.
22:05:23 nice
22:05:24 #link https://github.com/dkehn/devstack
22:05:35 One sec.
22:05:50 any replies on the BP SlickNik?
22:06:01 nope, no hits so far.
22:06:29 cool, then we can do WHATEVER we want :P
22:06:37 I like that
22:06:42 was thinking of running it by mordred and a couple other folks.
22:06:43 SlickNik, can we link to blueprint
22:07:12 #link https://blueprints.launchpad.net/reddwarf/+spec/reddwarf-devstack-integration
22:07:15 I updated mordred this morning, he's prety up to speed with what we are doing
22:07:26 that's what I was trying to get :)
22:07:39 okay, cool. Thanks dkehn.
22:07:44 #action dkehn to discuss offline about keystone users required by Redstack
22:08:02 anything else to add ?
22:08:11 ya iirc some of those tests for different users were to validate user ownership
22:08:24 hub_cap: you are correct
22:08:30 talk to datsun180b and cp16net, they added those users
22:08:35 given that devstack freates its own and CI uses them??
22:08:35 dkehn: ^ ^
22:08:40 and to make sure we didn't fall into any ruts about giving all the instances to a single user
22:08:42 will do
22:08:44 to make sure admin could see all instances and other users coudl not see each others
22:09:02 K, next item
22:09:25 updating references.. I updates all the Launchpads, and looked over READMEs and everything seems to be updated to stackforge
22:09:33 we can check that one off
22:09:40 awesome.
22:09:50 I <3 checking stuff off :)
22:10:08 +1
22:10:20 vipul: very nice
22:10:50 anyone remember the context of the next action item?
22:11:08 doesn't have an owner, i'm skipping it
22:11:16 which # are we on?
22:11:21 #4 now
22:11:24 I think I asked for a link and it got published as an action item.
22:11:55 update on image building blueprint.. i havent filed a bug yet
22:12:08 that was the plan, will do so by EOD
22:12:10 #3 was in reference to logging in to the mysql instance, but i dnot think we need to discuss now
22:12:36 #action vipul to file bug to convert image building to tripleo image builder
22:13:02 next item.. hub_cap: oslo upgrade?
22:13:09 vipul just so you know they move the repo under stack forge now
22:13:10 hey be sure to put stock mysql as the default for that vipul, not percona :D
22:13:23 so oslo upgrade, things r going quite well
22:13:40 ive got the services coming back online w/ teh new config, logging, service, wsgi, and paste stuff
22:13:55 but the tests are failing cuz the rest of the code is still using the old config file stuff
22:13:59 nice.
22:14:02 also ive separated the config file into 2 files
22:14:04 paste and config
22:14:12 hub_cap: we're going to build two versions, one wiht percona, one stock
22:14:19 vipul: cool
22:14:26 im intersted in percona too :D
22:14:38 #info tripleo image builder repo is under stackforge now.
22:14:39 so there was a issue w/ us loading variables in our paste defined __init__ functions
22:14:45 curious 2 cmy.cnfs right, one for mysql std adn percona?
22:15:09 s/cmy/my/
22:15:10 dkehn: well id say 1 my.cnf
22:15:14 standard
22:15:25 if yall want to use percona u can keep a my.cnf around for percona
22:15:33 #link https://github.com/stackforge/diskimage-builder
22:15:48 but we should try to have 1 path for the public code, and make it configurable for yall/us/anyone else
22:15:50 make sense?
22:15:53 keep std in the /etc/directory and specifics in the datadir as extra
22:16:06 Thanks juice_
22:16:10 hub_cap yes, I think we should support both, one that is the community version mysql
22:16:16 and have a new 'flavor' that supports percona
22:16:29 but yea, need one that works for everyone
22:16:31 sounds like we may need to talk about that offline
22:16:36 cuz i dont think thats the case
22:16:39 we need the one that everyone uses
22:16:44 if yall want percona, u can do it
22:16:48 #action vipul to discuss percona baked into image with hub_cap
22:16:50 but i dont think the public one _needs_ percona
22:17:14 moving on..
22:17:20 volume_support bug is filed
22:17:24 so back to oslo
22:17:26 oh ok
22:17:28 the flavor is a parameter to disk image builder
22:17:38 do we still have more wrt oslo?
22:17:52 we can pass either "rd-guest" or "rd-guest-percona" and it will build the right image
22:18:01 i could keep going but ill leave it at that vipul if anyone is interested feel free to chat w/ me
22:18:04 the stock redstack script can by default use the rd-guest
22:18:13 we can override that with "rd-guest-percona"
22:18:17 or something like that
22:18:26 juice_ hub_cap let's take it offline in #reddwarf
22:18:26 yup
22:18:36 absolutely
22:18:43 oslo..?
22:18:52 naw its good we can move forward
22:18:55 we are 20 min in
22:18:59 im making good progress
22:19:04 shoudl ahve somethign in a day or 2
22:19:08 but its gonna be a big review
22:19:12 cool
22:19:17 Sounds good.
22:19:18 * hub_cap doesnt like big reviews
22:19:21 #info oslo upgrade close, couple more days
22:19:32 SlickNik, update on users for guest?
22:19:53 me either
22:19:57 things can be missed
22:20:24 There was some back and forth on it with SteveLeon. Still need to close on what's the right thing to do here.
22:20:48 #action SlickNik and steveleon to look into default guest user
22:20:56 os_admin seemed to work fine for him, so not sure if a change is needed.
22:21:01 still need to follow up on it.
22:21:05 be sure to work w/ grapex on it too
22:21:22 Okay, will keep grapex in the loop.
22:21:23 ill meet with you guys later on this
22:21:28 who is not in the room.... geez :)
22:21:37 next item..
22:21:43 No worries. I'll ping him on #reddwarf when he gets on :)
22:21:49 using "root" didnt work as the guest agent was trying to create user "root"
22:21:50 grapex: real mode tests
22:22:01 grapex is not here right now
22:22:09 Will take this offline with you, steveleon.
22:22:11 esp1: any update on our end?
22:22:28 i know he had a review that was approved yesterday that helped split some of the tests up
22:22:49 I did just attempt ot run them, we're not at 100% yet
22:22:52 vipul: not much
22:23:00 #info real-mode tests still not completely working
22:23:22 tried to run the tests again today but I think we need to give grapex more time to clean up tests on reddwarf-int
22:23:29 #action esp1 grapex to continue working on fixing all real-mode tests
22:23:42 speaking of the grape devil
22:23:52 here he is.
22:23:53 grapex: real mode test updates? quick!
22:24:28 hub_cap: Real mode tests need work, in particular gating code to avoid running the mgmt API until its ready.
22:24:54 The MGMT api won't work without the Nova extensions. We had code in place before to avoid hitting it if it wasn't enabled, we just need it again.
22:24:54 grapex: care to explain in a sentence or 2 the mgmt api issue so the HP guys know whats goin on
22:24:57 hahah
22:24:59 Sure
22:25:00 yeah I wonder if we can ignore them for now if they really aren't blocking
22:25:09 ya i think thats what has to happen esp1
22:25:34 The mgmt api uses some Nova extensions to grab some information not present in the API. For instance, it grabs the local ID, and the compute host of instances.
22:25:47 hub_cap is working on moving these extensions to the public.
22:25:55 wha!?!?!? i am?!??!??!
22:25:58 lol
22:26:03 heh
22:26:04 hmm wasnt on my radar
22:26:19 #action hub_cap to work on moving the extensions for mgmt api to public
22:26:26 grapex: these are things not available thorugh nova api?
22:26:33 vipul: right
22:26:41 so basically
22:26:45 we coded some extensions
22:26:51 and we need to make a separate repo for them
22:27:01 and anyone who needs to use the mgmt api needs those extensions
22:27:02 vipul: Yeah. Its all trivial stuff.
22:27:27 or we can try to get them _in_ to nova extensions, which is the best route imho
22:27:34 imnsho
22:27:37 Yea, We should separate the tests somehow, so that we can run things that require extensions separately
22:27:43 @grapex, hub_cap: Will these extensions need to be integrated into devstack as well?
22:27:55 hub_cap, yes that's hte only way we'd be able to use them, if they are in tip nova
22:27:56 if we test the mgmt api, they need to be integrated
22:28:04 vipul: Yes, it was like that at one point. :'( It shouldn't take too long to keep them from running again.
22:28:18 vipul: figured that'd be the case for yall :D
22:28:22 ok so OVZ
22:28:32 go for it
22:28:39 im still trying to tackle internally freeing up our resource to devote to ovz
22:28:54 had a conversation w/ my mgr today and i think i was able to convince to have him start owrking on it again
22:29:02 the real blocker is just one
22:29:24 the way ovz does volume stuffs is drastically different from the rest of nova
22:29:47 nova went and removed some of the mouting/formatting within a guest instance, and we still have that _in_ ovz
22:29:59 so we need to remove that so it aligns w/ the other hypervisor impls
22:30:08 #info real mode tests for management api require nova extensions
22:30:44 ill have another update next wk when i know more
22:30:59 #info OVZ impl requires some refactoring around volumes
22:31:08 #info Rax may dedicate resource to OVZ
22:31:27 #action hub_cap to provide another update on OVZ
22:31:36 _will_ dedicate vipul ;)
22:31:39 #info devstack integration will need to support nova extensions if we want to test management API in tempest.
22:31:41 lol
22:31:45 if nothign else im gonna be doing it
22:31:51 hub_cap, let us know if we can assist
22:31:56 awesome!
22:32:09 ok that wraps up actioin items
22:32:16 anything we missed?
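[Editor's note: the flavor-as-a-parameter approach to image building discussed earlier (passing "rd-guest" or "rd-guest-percona" to disk image builder) could be sketched as below. This is a hypothetical illustration; the element names and the mapping are assumptions based on the meeting, not the actual redstack implementation.]

```python
# Hypothetical sketch: map a guest "flavor" to a diskimage-builder
# invocation. Flavor names ("rd-guest", "rd-guest-percona") come from
# the meeting; the element lists are illustrative assumptions.

FLAVOR_ELEMENTS = {
    "rd-guest": ["vm", "reddwarf-guest", "mysql"],          # stock MySQL image
    "rd-guest-percona": ["vm", "reddwarf-guest", "percona"],  # Percona variant
}

def build_command(flavor="rd-guest", output="reddwarf-guest.qcow2"):
    """Return the disk-image-create argv for a given guest flavor."""
    try:
        elements = FLAVOR_ELEMENTS[flavor]
    except KeyError:
        raise ValueError("unknown flavor: %s" % flavor)
    return ["disk-image-create", "-o", output] + elements

print(" ".join(build_command("rd-guest-percona")))
```

Defaulting to the stock flavor while letting deployers override it keeps one public code path, as hub_cap suggested.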
22:32:17 hah only 30min in
22:32:23 not too shabby
22:32:26 lol
22:32:28 go go go
22:32:33 #topic Volume Updates
22:32:49 #info vipul still working on re-enabling volume support
22:32:50 vipul: i tihnk someone said u were working on that
22:32:54 :D
22:32:57 don't have much of an update at this point
22:33:04 cool no biggie
22:33:11 too many competing things taking my time
22:33:24 vipul: thats why i resigned as a dev mgr
22:33:24 always the case...
22:33:32 hah
22:33:43 :P
22:33:51 i dont blame you
22:33:58 yea gotta get better at managing time for sure
22:34:03 moving on..
22:34:11 #topic CI Updates / Image updates
22:34:14 hub_cap, stop giving vipul ideas :p
22:34:31 HAHA steveleon
22:34:44 so ive passed the buck so to speak on image stuffs for now
22:35:16 juice_ updates on Image?
22:35:36 juice and kagan were hard at work on it, last time I saw them.
22:36:00 juice_ is providing printing support :)
22:36:03 yes building the images and test booting now
22:36:18 having difficulty getting it to come up
22:36:35 so rolling back configs until can figure out the exact problem
22:36:40 sweet
22:36:50 but going good using the stack forge image builder
22:37:01 #info using the tripleo disk image builder, plan is to replace ubuntu-vm-builder
22:37:12 talked to some of those guys this morning and there are few things still in flux on their side (image builder team)
22:37:48 k cool
22:37:59 #info image buildling work still in progress
22:38:05 CI?
22:38:49 SlickNik dkehn, anything else to add from earlier update?
22:39:03 nope
22:39:04 Spoke with clarkb on what needs to be done to enable CI in tempest.
22:39:30 dkehn and I that is.
22:39:33 #info Redstack integration into Devstack ongoing
22:39:33 but will need to figure out the devstack integration piece first.
22:40:01 #info Tempest integration not started, conversations have begun
22:40:22 grapex, hub_cap: anything on your end?
22:40:52 not from me, grapex maybe, i know he is trying to get the tests shored up as much as possible in his time
22:41:31 vipul: Nothing new to report. Hopefully I'll have test changes to avoid use of the mgmt api in soon.
22:41:39 k moving on..
22:41:54 #topic os_amin / root user
22:42:02 don't know if we have any movement on this
22:42:18 ok then we can skip it
22:42:19 what is the essence of this/.
22:42:25 #info SlickNik steveleon grapex to figure this out
22:42:27 or the issue?
22:42:33 related to earlier action item update
22:42:40 yall brought up the need to use a _different_ user than os_admin
22:42:45 thats the user we use for admin'ing the box
22:42:54 err the mysql server
22:42:57 we may want to stick with that for now
22:42:58 making it a configuration or something
22:43:04 right
22:43:05 until we get everything up and running
22:43:14 vipul: Sounds good.
22:43:18 ok slicknik, maybe you can tell me offline why using os_admin doesnt work
22:43:28 #info possibly stick to os_admin as admin mysql user, until we have a use case
22:43:52 sure, let's chat about it offline at #reddwarf after the meeting.
22:43:58 #topic Dealing with Redstack
22:44:15 anything new to add here that hasnt been covered already?
22:44:31 don't think so, main thing remainign here is real mode tests
22:44:39 oh i have 1 thing ill need to update /w oslo
22:44:43 dealing with it? meaning integrating with devstack?
22:44:49 no dealing with it :P
22:45:01 oh i deal with it everyday
22:45:06 haha
22:45:07 so done. :-P
22:45:16 i think we still need to fully understand everything that goes on in those scripts
22:45:20 since the paste config is a new file ill have to udpate that in redstack, watch for it as a separate commit
22:45:24 and we'll keep raising questions
22:45:27 yup vipul there is _too_ much there
22:45:30 :D
22:45:31 yup
22:46:08 #info we're dealing with it
22:46:09 i've been studying the redstack script while trying to setup reddwarf standalone...
22:46:14 nice :)
22:46:22 maybe i can be of any help
22:46:31 steveleon: yeesh thats a task
22:47:17 lol vipul nice info, just noticed that
22:47:21 ok so we move on to features?
22:47:24 anyone have anything else to add?
22:47:30 i have some experience with it so if questions arise i can help where needed
22:47:34 but its just bash
22:47:35 i got dev stack and reddwarf running on two different cloud instances. i just need an image to start creating dbs
22:47:39 nope, good by me.
22:47:40 so that should not be a big deal
22:47:55 bash what, cp16net? :P
22:47:56 steveleon: sweet
22:48:01 yep, I think the challenge is to figure out what things are absolutely required vs not
22:48:05 bash = break stuff
22:48:06 :-P
22:48:13 lol
22:48:24 #topic feature discussion
22:48:30 yeah there is alot of fluff that was added to redstack
22:48:45 soooooo, features
22:48:58 one question I had.. is there support for user/tenant level quotas?
22:49:18 I think i saw a global quota, but not user level, maybe soemthing we'll need
22:49:18 ya there is 2 configs
22:49:22 oh ya
22:49:25 not separately
22:49:26 its global, sry
22:49:29 But that's for everyone, and not for individuals.
22:49:30 yeah
22:49:32 we want it as well
22:49:42 we mgiht want to look into turnstyle
22:49:45 #info no user-level quotas, feature #1 add user-level quotas and update management APIs
22:50:00 hub_cap, I thought that was more for rate limiting?
22:50:03 could be wrong
22:50:16 https://github.com/klmitch/turnstile
22:50:21 #link https://github.com/klmitch/turnstile
22:50:36 need to setup redis...argh
22:50:37 i thikn it might just be rate limiting now
22:50:40 good call vipul
22:50:46 what's turnstyle?
22:50:52 we were talking about integrating repose in teh future as well vipul
22:51:05 so maybe there isint a need for it _in_ our application
22:51:07 thx, will look.
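[Editor's note: the user/tenant-level quota feature discussed above (versus the existing global quota) could look something like the sketch below. This is purely illustrative; the class and method names are assumptions, not Reddwarf's actual quota code.]

```python
# Minimal sketch of per-tenant quota tracking with a global default,
# illustrating the "user-level quotas" feature request. All names here
# are hypothetical.

class QuotaExceeded(Exception):
    """Raised when a reservation would push a tenant over its limit."""


class QuotaManager:
    def __init__(self, default_limit):
        self.default_limit = default_limit  # the existing "global" quota
        self.tenant_limits = {}             # per-tenant overrides (the new part)
        self.usage = {}                     # tenant_id -> instances in use

    def set_limit(self, tenant_id, limit):
        """Management API hook: override the quota for one tenant."""
        self.tenant_limits[tenant_id] = limit

    def reserve(self, tenant_id, count=1):
        """Reserve capacity for `count` new instances, or raise."""
        limit = self.tenant_limits.get(tenant_id, self.default_limit)
        used = self.usage.get(tenant_id, 0)
        if used + count > limit:
            raise QuotaExceeded("tenant %s over limit %d" % (tenant_id, limit))
        self.usage[tenant_id] = used + count
```

Note that Turnstile (linked above) addresses rate limiting rather than resource quotas, which is why it may not fit this use case directly.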
22:51:22 #info look at Turnstyle, repose as possible integration points
22:51:25 #action chat up w/ the openstack people about whether all projects should maintain their own quotas
22:51:37 not sure why this is not a 'common' thing yet
22:51:42 i honestly dont know what the right answer is
22:51:50 vipul: welcome to openstack :P
22:51:55 mabye we can make it a common thing
22:52:12 yea would make most sense there
22:52:20 k, other features..
22:52:23 yeah someone just needs to say that its common and then it becomes common
22:52:24 :)
22:52:33 so yall had snapshots defined, was there any plan on moving that in?
22:52:39 cp16net: no joke
22:52:52 i think a general roadmap, going well in to next year should include
22:53:00 Yes, that is still part of the plan.. just not sure when we'll get to it
22:53:12 1) snapshots, mycnf edits, rate limiting/quotas, and some sembalance of migrations
22:53:19 oops i forgot to number them
22:53:20 ahaha
22:53:29 thats all #1 its a big ole feature
22:53:33 All those are 1 feature?!?
22:53:38 heh
22:53:39 hahaha YES SlickNik
22:53:50 what's wrong with migrations..?
22:54:15 we need to do _extra_ stuff to make sure they work w/ our environment
22:54:23 and to make sure our guest comes back online, etc..
22:54:32 blindly calling nova migrate is a bit scary :D
22:54:32 We're getting there, though
22:54:49 #info replication is also part of our roadmap
22:54:55 most of the work is _def_ nova tho vipul
22:55:16 hub_cap.. oh vm migrations.. nvm
22:55:18 id lke to see some code that validates it on our end tho (i mean, there is a step that _has_ to validate it)
22:55:21 i'm slow
22:55:27 vipul: what migration were u talking about?
22:55:30 db
22:55:31 lol
22:55:39 sqlalchemy
22:55:39 lol
22:55:47 nice :P
22:56:14 honestly we haven't actually even started looking at that
22:56:26 yes we want to work on replication but thats a big scary task and its on our radar further in the year
22:56:41 so if u want to team up on it we might be able to devote some resources to research earlier
22:57:03 lets make sure we all knwo what features we are working on so we can work on them as a large group, cuz then we all get to do less work :D
22:57:04 Was the previous list in order of priority?
22:57:08 there will be a lot to replication, cloning, fail-over, master-master, et.c
22:57:09 yep, that sounds good, i think we need to have a more formal planning meeting
22:57:16 SlickNik: umm not 100% ordered but mostly
22:57:39 dkehn: i don't think we have to address everything right away, failover doesn't have to be automated for example
22:57:39 vipul: ya i thnk so too, maybe we do a google hangout for that and have a nice conversation about it
22:58:05 dkehn: i agree, replication scares the bejesus out of me due to how hard it is
22:58:09 vipul, ouch manual failover with point time recovery....
22:58:16 #action vipul and hub_cap to set up formal roadmap/feature meeting
22:58:22 but people want the replication
22:58:25 dkehn: baby steps :)
22:58:26 hub_cap, actually its not that bad
22:58:31 hell i was telling my mgr today that turning replication on is easy
22:58:44 are backups on the road map or is that part of migration?
22:58:52 esp1: def
22:58:55 its is easy, its what you do when it goes to s@!t
22:58:56 separate
22:59:00 yup
22:59:00 cool
22:59:04 and its nicer w/ 5.5 too
22:59:18 semi-sync is very nice
22:59:18 iirc the agent is no longer single threaded
22:59:25 only one commit lag
22:59:49 im glad u feel beter about it than i dkehn :D
23:00:01 last thing i need is the engineers scrambling all the time when it fails!!
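[Editor's note: the "semi-sync is very nice ... nicer w/ 5.5" remarks refer to MySQL 5.5's semisynchronous replication plugins, where the master waits for at least one slave to acknowledge receipt of an event before committing (falling back to asynchronous replication on timeout). A minimal configuration sketch, for context:]

```sql
-- On the master (MySQL 5.5+):
INSTALL PLUGIN rpl_semi_sync_master SONAME 'semisync_master.so';
SET GLOBAL rpl_semi_sync_master_enabled = 1;
SET GLOBAL rpl_semi_sync_master_timeout = 1000;  -- ms before falling back to async

-- On each slave:
INSTALL PLUGIN rpl_semi_sync_slave SONAME 'semisync_slave.so';
SET GLOBAL rpl_semi_sync_slave_enabled = 1;
```

As the discussion notes, enabling replication is the easy part; the hard problems are failover, recovery, and what happens when it breaks.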
23:00:17 i mean honestly wrt features
23:00:21 thats why it need to be automated
23:00:30 we need to do all the big mysql features right? its really about the order and whos working on what
23:00:34 dkehn: yup :D
23:00:42 but "automated" is such a fuzzy word
23:00:45 dkehn hub_cap we need to have a bigger discussion on this
23:00:51 vipul: absolutely
23:01:03 we could go on for hrs just on replication :D
23:01:11 yup
23:01:12 also, we might want to condiser some sort of scheduler
23:01:26 not sure if its going overboard...
23:01:28 but my thought is this
23:01:51 we need ot have time windows to do things like backups, snapshots, etc... as to not overwhelm the entire environment
23:02:16 so there needs to be som arbiter of that logic somewhere
23:02:20 agreed
23:02:32 although scheduler types of features may be a bit further out
23:02:37 do we have any requirements now?
23:02:44 we can start it _easy_ but my guess will be tha the scheduler will be pretty generic in openstack so we can start using it
23:02:45 no not now
23:02:47 def future
23:02:53 wonder what the cool kids are using for a scheduler in python these days.
23:03:01 esp1: roll your own :D
23:03:03 cron?
23:03:07 the #openstack way
23:03:08 lol
23:03:17 lol
23:03:35 i dont think its cron
23:03:38 cron is fuzzy too
23:03:46 i dont want to rely on external systems either
23:03:53 its a python daemon
23:03:57 ive seen NASTY cron triggered hacky systems
23:04:01 gotcha :)
23:04:15 https://github.com/rackspace/Tempo
23:04:24 but ya there was talk aabout making a generic scheduler
23:04:37 vipul: now that makes sense
23:04:38 anyone know if that's actually a real active thing?
23:04:39 for a end user
23:04:48 monitoring and managing cron jobs is a whole can of worms in itself.
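[Editor's note: the "time windows" idea above (only run backups/snapshots inside an allowed window so the environment isn't overwhelmed) reduces to a small predicate any scheduler daemon could use. A hypothetical sketch; the function name and window format are assumptions, not a proposed design:]

```python
# Hypothetical maintenance-window check for scheduled tasks such as
# backups and snapshots. Handles windows that wrap past midnight,
# e.g. 23:00-02:00.
from datetime import time

def in_window(now, start, end):
    """Return True if `now` (a datetime.time) falls inside [start, end]."""
    if start <= end:
        # Normal window, e.g. 01:00-04:00
        return start <= now <= end
    # Wrapping window, e.g. 23:00-02:00
    return now >= start or now <= end

# A scheduler daemon would defer work that falls outside the window:
print(in_window(time(1, 30), time(23, 0), time(2, 0)))   # inside a wrapping window
```

A central arbiter using a check like this (rather than per-host cron entries) keeps the "who runs when" logic in one place, which matches the concern about unmonitored cron jobs.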
23:04:53 SlickNik: ya dude
23:05:02 vipul: im not privy to the upper echelon of things @rax
23:05:05 i havent even heard of that
23:05:18 but its >8mo old
23:05:25 k, we'll forget we ever saw it ;)
23:05:30 ya
23:05:42 k running over time
23:05:46 but srsly the openstack guys were tlaking about the need for a generic scheduler framework
23:05:55 since cinder, nova, and others will be using
23:05:58 sorry to take things of course...
23:06:00 would be nice
23:06:03 so once we need to, we will ride that wave
23:06:08 any timeframe?
23:06:12 nope
23:06:15 figures
23:06:18 yup
23:06:24 #topic Open Discussion
23:06:29 We don't need it right away; perhaps they will have something by the time we need it?
23:06:36 anything else we need to discuss?
23:06:44 #info prereq for migrations - grouping of instances
23:06:51 ^ ^ we need to start thinking about that in general
23:07:15 before we go dive into things that require > 1 server to be grouped together
23:07:15 is the host_id not reliable?
23:07:29 how do we determine Master X, Slave Y
23:07:29 meaning can't we lookup host_id of each instance to group things?
23:07:35 we need a mapping of sorts
23:07:41 i see
23:07:45 and id like to make it generic enough to support other things like a hadoop setup
23:08:07 ok tahts my random open discussion
23:08:11 well, we need some sort of collection of host by id, I guess.
23:08:16 #info consider tagging instances
23:08:23 Has anyone run into issues running 12.04 w/ redstack? I have pending task on our end that I'd like to close.
23:08:46 i haven't tried yet on a fresh vm
23:08:57 hub_cap you were making some mods
23:09:06 I think the last one that hub_cap hit was upping mysql to 5.5...IIRC...
23:09:08 12.04 good to go?
23:09:16 I've tried on VMWare Fusion and our HPCloud instance...
23:09:28 i have a fresh vm
23:09:31 and everythign is good now
23:09:40 cool, thx!
23:09:42 esp1: do u have the issuez?
23:09:45 if so chat w/ me
23:09:55 iirc i fired it up and ran it
23:09:59 #info 12.04 support completed
23:10:27 hup_cap: I think it's just a lingering task that we kept around from last sprint. But I think we got it resolved.
23:10:44 ddemir is working on getting some unit tests into python-reddwarfclient
23:11:01 esp1: sweet
23:11:26 anyone got anything else?
23:11:38 good by me.
23:11:59 k, thanks for attending folks
23:12:07 good night all
23:12:10 thx!
23:12:24 dkehn: drink a few margaritas for us
23:12:31 in mexico
23:12:33 I am so there
23:12:59 heh, have fun dkehn. :)
23:13:03 thanks all, see you around in #reddwarf!
23:13:05 nice :P
23:13:13 woah
23:13:19 we dont have a ton of action items!!!
23:13:38 did we miss any?
23:13:51 if not ending meeting
23:13:51 nope
23:13:56 im just surprised :P
23:14:03 #endmeeting