22:00:12 <vipul> #startmeeting reddwarf
22:00:13 <openstack> Meeting started Tue Dec 4 22:00:12 2012 UTC. The chair is vipul. Information about MeetBot at http://wiki.debian.org/MeetBot.
22:00:14 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
22:00:16 <openstack> The meeting name has been set to 'reddwarf'
22:00:26 <hub_cap> hah look at that horizon took our spot :P
22:00:42 <vipul> #info Agenda: http://wiki.openstack.org/Meetings/RedDwarfMeeting
22:00:45 <hub_cap> but thats fine by me since they are closer to core
22:00:47 <cp16net> hola
22:01:10 <SlickNik> hey
22:01:15 <datsun180b> hi
22:01:21 <vipul> at least no one will complain if we go over time in this room
22:01:26 <hub_cap> exactly
22:01:29 <cp16net> nice
22:01:37 <cp16net> i didnt know there was an alt room
22:01:40 <hub_cap> its new
22:01:40 <SlickNik> Heh, hopefully we won't have to though.
22:01:42 <steveleon> what happened to the other room?
22:01:49 <SlickNik> (go over time I mean)
22:01:50 <vipul> kicked us out
22:01:50 <hub_cap> #info no one watches the mailing list but hub_cap
22:01:51 <hub_cap> :P
22:02:15 <hub_cap> its new
22:02:16 <SlickNik> so did you see the blueprint on the mailing list, then hub_cap? :P
22:02:22 <vipul> #topic Action Item Review
22:02:25 <hub_cap> haah yours yes SlickNik
22:02:33 <hub_cap> the devstack integration one
22:02:38 <hub_cap> :D
22:03:10 * cp16net shrugs...
22:03:13 <vipul> SlickNik, dkehn: updates on Devstack Integration?
22:03:15 <hub_cap> speaking of that SlickNik, hows that going
22:03:16 <hub_cap> :P
22:03:28 <SlickNik> It's still going pretty well.
22:03:47 <SlickNik> dkehn and I got the install/config pieces in
22:03:57 <dkehn> k, working the init_reddwarf portion, the configure & build are complete
22:04:14 <dkehn> just dealing with what is necessary and what is not
22:04:15 <SlickNik> We're still hitting some issues with setting up the repo, and building the image.
22:04:28 <SlickNik> Then we'll tackle bringing up the guest.
22:04:53 <SlickNik> We've got a separate repo at https://github.com/dkehn/devstack
22:05:08 <SlickNik> That we're pushing out intermediate fixes to.
22:05:15 <SlickNik> So it's still a work in progress.
22:05:23 <hub_cap> nice
22:05:24 <vipul> #link https://github.com/dkehn/devstack
22:05:35 <SlickNik> One sec.
22:05:50 <hub_cap> any replies on the BP SlickNik?
22:06:01 <SlickNik> nope, no hits so far.
22:06:29 <hub_cap> cool, then we can do WHATEVER we want :P
22:06:37 <dkehn> I like that
22:06:42 <SlickNik> was thinking of running it by mordred and a couple other folks.
22:06:43 <vipul> SlickNik, can we link to blueprint
22:07:12 <SlickNik> #link https://blueprints.launchpad.net/reddwarf/+spec/reddwarf-devstack-integration
22:07:15 <dkehn> I updated mordred this morning, he's pretty up to speed with what we are doing
22:07:26 <SlickNik> that's what I was trying to get :)
22:07:39 <SlickNik> okay, cool. Thanks dkehn.
22:07:44 <vipul> #action dkehn to discuss offline about keystone users required by Redstack
22:08:02 <vipul> anything else to add?
22:08:11 <hub_cap> ya iirc some of those tests for different users were to validate user ownership
22:08:24 <cp16net> hub_cap: you are correct
22:08:30 <hub_cap> talk to datsun180b and cp16net, they added those users
22:08:35 <dkehn> given that devstack creates its own and CI uses them??
22:08:35 <hub_cap> dkehn: ^ ^
22:08:40 <datsun180b> and to make sure we didn't fall into any ruts about giving all the instances to a single user
22:08:42 <dkehn> will do
22:08:44 <cp16net> to make sure admin could see all instances and other users could not see each other's
22:09:02 <vipul> K, next item
22:09:25 <vipul> updating references.. I updated all the Launchpads, and looked over READMEs and everything seems to be updated to stackforge
22:09:33 <vipul> we can check that one off
22:09:40 <SlickNik> awesome.
22:09:50 <SlickNik> I <3 checking stuff off :)
22:10:08 <cp16net> +1
22:10:20 <hub_cap> vipul: very nice
22:10:50 <vipul> anyone remember the context of the next action item?
22:11:08 <vipul> doesn't have an owner, i'm skipping it
22:11:16 <cp16net> which # are we on?
22:11:21 <vipul> #4 now
22:11:24 <SlickNik> I think I asked for a link and it got published as an action item.
22:11:55 <vipul> update on image building blueprint.. i havent filed a bug yet
22:12:08 <vipul> that was the plan, will do so by EOD
22:12:10 <hub_cap> #3 was in reference to logging in to the mysql instance, but i dont think we need to discuss now
22:12:36 <vipul> #action vipul to file bug to convert image building to tripleo image builder
22:13:02 <vipul> next item.. hub_cap: oslo upgrade?
22:13:09 <juice_> vipul just so you know they moved the repo under stackforge now
22:13:10 <hub_cap> hey be sure to put stock mysql as the default for that vipul, not percona :D
22:13:23 <hub_cap> so oslo upgrade, things r going quite well
22:13:40 <hub_cap> ive got the services coming back online w/ the new config, logging, service, wsgi, and paste stuff
22:13:55 <hub_cap> but the tests are failing cuz the rest of the code is still using the old config file stuff
22:13:59 <SlickNik> nice.
22:14:02 <hub_cap> also ive separated the config file into 2 files
22:14:04 <hub_cap> paste and config
22:14:12 <vipul> hub_cap: we're going to build two versions, one with percona, one stock
22:14:19 <hub_cap> vipul: cool
22:14:26 <hub_cap> im interested in percona too :D
22:14:38 <SlickNik> #info tripleo image builder repo is under stackforge now.
22:14:39 <hub_cap> so there was an issue w/ us loading variables in our paste defined __init__ functions
22:14:45 <dkehn> curious 2 cmy.cnfs right, one for mysql std and percona?
22:15:09 <dkehn> s/cmy/my/
22:15:10 <hub_cap> dkehn: well id say 1 my.cnf
22:15:14 <hub_cap> standard
22:15:25 <hub_cap> if yall want to use percona u can keep a my.cnf around for percona
22:15:33 <SlickNik> #link https://github.com/stackforge/diskimage-builder
22:15:48 <hub_cap> but we should try to have 1 path for the public code, and make it configurable for yall/us/anyone else
22:15:50 <hub_cap> make sense?
22:15:53 <dkehn> keep std in the /etc/ directory and specifics in the datadir as extra
22:16:06 <SlickNik> Thanks juice_
22:16:10 <vipul> hub_cap yes, I think we should support both, one that is the community version mysql
22:16:16 <vipul> and have a new 'flavor' that supports percona
22:16:29 <vipul> but yea, need one that works for everyone
22:16:31 <hub_cap> sounds like we may need to talk about that offline
22:16:36 <hub_cap> cuz i dont think thats the case
22:16:39 <hub_cap> we need the one that everyone uses
22:16:44 <hub_cap> if yall want percona, u can do it
22:16:48 <vipul> #action vipul to discuss percona baked into image with hub_cap
22:16:50 <hub_cap> but i dont think the public one _needs_ percona
22:17:14 <vipul> moving on..
22:17:20 <vipul> volume_support bug is filed
22:17:24 <hub_cap> so back to oslo
22:17:26 <hub_cap> oh ok
22:17:28 <juice_> the flavor is a parameter to disk image builder
22:17:38 <vipul> do we still have more wrt oslo?
22:17:52 <juice_> we can pass either "rd-guest" or "rd-guest-percona" and it will build the right image
22:18:01 <hub_cap> i could keep going but ill leave it at that vipul, if anyone is interested feel free to chat w/ me
22:18:04 <juice_> the stock redstack script can by default use the rd-guest
22:18:13 <juice_> we can override that with "rd-guest-percona"
22:18:17 <juice_> or something like that
22:18:26 <vipul> juice_ hub_cap let's take it offline in #reddwarf
22:18:26 <hub_cap> yup
22:18:36 <hub_cap> absolutely
22:18:43 <vipul> oslo..?
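juice_'s flavor-selection idea above can be sketched roughly as follows. The element names ("rd-guest", "rd-guest-percona") come straight from the discussion, but the command shape and function names are assumptions about how diskimage-builder might be invoked, not redstack's actual code.

```python
# Hypothetical sketch: pick a diskimage-builder element per guest flavor.
# Element names come from the meeting; the command shape is an assumption.
def guest_element(use_percona=False):
    # community MySQL image by default, Percona as an opt-in override
    return "rd-guest-percona" if use_percona else "rd-guest"

def build_command(use_percona=False):
    # disk-image-create is diskimage-builder's entry point
    return "disk-image-create -o reddwarf-guest ubuntu " + guest_element(use_percona)
```

A redstack wrapper could default to `build_command()` and expose a single flag that flips `use_percona`, keeping one public code path as hub_cap asks.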
22:18:52 <hub_cap> naw its good we can move forward
22:18:55 <hub_cap> we are 20 min in
22:18:59 <hub_cap> im making good progress
22:19:04 <hub_cap> should have something in a day or 2
22:19:08 <hub_cap> but its gonna be a big review
22:19:12 <vipul> cool
22:19:17 <SlickNik> Sounds good.
22:19:18 * hub_cap doesnt like big reviews
22:19:21 <vipul> #info oslo upgrade close, couple more days
22:19:32 <vipul> SlickNik, update on users for guest?
22:19:53 <cp16net> me either
22:19:57 <cp16net> things can be missed
22:20:24 <SlickNik> There was some back and forth on it with SteveLeon. Still need to close on what's the right thing to do here.
22:20:48 <vipul> #action SlickNik and steveleon to look into default guest user
22:20:56 <SlickNik> os_admin seemed to work fine for him, so not sure if a change is needed.
22:21:01 <SlickNik> still need to follow up on it.
22:21:05 <hub_cap> be sure to work w/ grapex on it too
22:21:22 <SlickNik> Okay, will keep grapex in the loop.
22:21:23 <steveleon> ill meet with you guys later on this
22:21:28 <hub_cap> who is not in the room.... geez :)
22:21:37 <vipul> next item..
22:21:43 <SlickNik> No worries. I'll ping him on #reddwarf when he gets on :)
22:21:49 <steveleon> using "root" didnt work as the guest agent was trying to create user "root"
22:21:50 <vipul> grapex: real mode tests
22:22:01 <hub_cap> grapex is not here right now
22:22:09 <SlickNik> Will take this offline with you, steveleon.
22:22:11 <vipul> esp1: any update on our end?
22:22:28 <hub_cap> i know he had a review that was approved yesterday that helped split some of the tests up
22:22:49 <vipul> I did just attempt to run them, we're not at 100% yet
22:22:52 <esp1> vipul: not much
22:23:00 <vipul> #info real-mode tests still not completely working
22:23:22 <esp1> tried to run the tests again today but I think we need to give grapex more time to clean up tests on reddwarf-int
22:23:29 <vipul> #action esp1 grapex to continue working on fixing all real-mode tests
22:23:42 <hub_cap> speaking of the grape devil
22:23:52 <SlickNik> here he is.
22:23:53 <hub_cap> grapex: real mode test updates? quick!
22:24:28 <grapex> hub_cap: Real mode tests need work, in particular gating code to avoid running the mgmt API until its ready.
22:24:54 <grapex> The MGMT api won't work without the Nova extensions. We had code in place before to avoid hitting it if it wasn't enabled, we just need it again.
22:24:54 <hub_cap> grapex: care to explain in a sentence or 2 the mgmt api issue so the HP guys know whats goin on
22:24:57 <hub_cap> hahah
22:24:59 <grapex> Sure
22:25:00 <esp1> yeah I wonder if we can ignore them for now if they really aren't blocking
22:25:09 <hub_cap> ya i think thats what has to happen esp1
22:25:34 <grapex> The mgmt api uses some Nova extensions to grab some information not present in the API. For instance, it grabs the local ID, and the compute host of instances.
22:25:47 <grapex> hub_cap is working on moving these extensions to the public.
22:25:55 <hub_cap> wha!?!?!? i am?!??!??!
22:25:58 <cp16net> lol
22:26:03 <SlickNik> heh
22:26:04 <hub_cap> hmm wasnt on my radar
22:26:19 <hub_cap> #action hub_cap to work on moving the extensions for mgmt api to public
22:26:26 <vipul> grapex: these are things not available through nova api?
22:26:33 <cp16net> vipul: right
22:26:41 <hub_cap> so basically
22:26:45 <hub_cap> we coded some extensions
22:26:51 <hub_cap> and we need to make a separate repo for them
22:27:01 <hub_cap> and anyone who needs to use the mgmt api needs those extensions
22:27:02 <grapex> vipul: Yeah. Its all trivial stuff.
22:27:27 <hub_cap> or we can try to get them _in_ to nova extensions, which is the best route imho
22:27:34 <hub_cap> imnsho
22:27:37 <vipul> Yea, We should separate the tests somehow, so that we can run things that require extensions separately
22:27:43 <SlickNik> @grapex, hub_cap: Will these extensions need to be integrated into devstack as well?
22:27:55 <vipul> hub_cap, yes that's the only way we'd be able to use them, if they are in tip nova
22:27:56 <hub_cap> if we test the mgmt api, they need to be integrated
22:28:04 <grapex> vipul: Yes, it was like that at one point. :'( It shouldn't take too long to keep them from running again.
22:28:18 <hub_cap> vipul: figured that'd be the case for yall :D
22:28:22 <hub_cap> ok so OVZ
22:28:32 <vipul> go for it
22:28:39 <hub_cap> im still trying to tackle internally freeing up our resource to devote to ovz
22:28:54 <hub_cap> had a conversation w/ my mgr today and i think i was able to convince him to start working on it again
22:29:02 <hub_cap> the real blocker is just one
22:29:24 <hub_cap> the way ovz does volume stuffs is drastically different from the rest of nova
22:29:47 <hub_cap> nova went and removed some of the mounting/formatting within a guest instance, and we still have that _in_ ovz
22:29:59 <hub_cap> so we need to remove that so it aligns w/ the other hypervisor impls
22:30:08 <vipul> #info real mode tests for management api require nova extensions
22:30:44 <hub_cap> ill have another update next wk when i know more
22:30:59 <vipul> #info OVZ impl requires some refactoring around volumes
22:31:08 <vipul> #info Rax may dedicate resource to OVZ
22:31:27 <vipul> #action hub_cap to provide another update on OVZ
22:31:36 <hub_cap> _will_ dedicate vipul ;)
22:31:39 <SlickNik> #info devstack integration will need to support nova extensions if we want to test management API in tempest.
22:31:41 <SlickNik> lol
22:31:45 <hub_cap> if nothing else im gonna be doing it
22:31:51 <vipul> hub_cap, let us know if we can assist
22:31:56 <hub_cap> awesome!
22:32:09 <vipul> ok that wraps up action items
22:32:16 <vipul> anything we missed?
22:32:17 <hub_cap> hah only 30min in
22:32:23 <cp16net> not too shabby
22:32:26 <vipul> lol
22:32:28 <SlickNik> go go go
22:32:33 <vipul> #topic Volume Updates
22:32:49 <vipul> #info vipul still working on re-enabling volume support
22:32:50 <hub_cap> vipul: i think someone said u were working on that
22:32:54 <hub_cap> :D
22:32:57 <vipul> don't have much of an update at this point
22:33:04 <hub_cap> cool no biggie
22:33:11 <vipul> too many competing things taking my time
22:33:24 <hub_cap> vipul: thats why i resigned as a dev mgr
22:33:24 <cp16net> always the case...
22:33:32 <cp16net> hah
22:33:43 <hub_cap> :P
22:33:51 <cp16net> i dont blame you
22:33:58 <vipul> yea gotta get better at managing time for sure
22:34:03 <vipul> moving on..
22:34:11 <vipul> #topic CI Updates / Image updates
22:34:14 <steveleon> hub_cap, stop giving vipul ideas :p
22:34:31 <hub_cap> HAHA steveleon
22:34:44 <hub_cap> so ive passed the buck so to speak on image stuffs for now
22:35:16 <vipul> juice_ updates on Image?
22:35:36 <SlickNik> juice and kagan were hard at work on it, last time I saw them.
22:36:00 <esp1> juice_ is providing printing support :)
22:36:03 <juice_> yes building the images and test booting now
22:36:18 <juice_> having difficulty getting it to come up
22:36:35 <juice_> so rolling back configs until we can figure out the exact problem
22:36:40 <hub_cap> sweet
22:36:50 <juice_> but going good using the stackforge image builder
22:37:01 <vipul> #info using the tripleo disk image builder, plan is to replace ubuntu-vm-builder
22:37:12 <juice_> talked to some of those guys this morning and there are a few things still in flux on their side (image builder team)
22:37:48 <vipul> k cool
22:37:59 <vipul> #info image building work still in progress
22:38:05 <vipul> CI?
22:38:49 <vipul> SlickNik dkehn, anything else to add from earlier update?
22:39:03 <dkehn> nope
22:39:04 <SlickNik> Spoke with clarkb on what needs to be done to enable CI in tempest.
22:39:30 <SlickNik> dkehn and I that is.
22:39:33 <vipul> #info Redstack integration into Devstack ongoing
22:39:33 <SlickNik> but will need to figure out the devstack integration piece first.
22:40:01 <vipul> #info Tempest integration not started, conversations have begun
22:40:22 <vipul> grapex, hub_cap: anything on your end?
22:40:52 <hub_cap> not from me, grapex maybe, i know he is trying to get the tests shored up as much as possible in his time
22:41:31 <grapex> vipul: Nothing new to report. Hopefully I'll have test changes to avoid use of the mgmt api in soon.
22:41:39 <vipul> k moving on..
22:41:54 <vipul> #topic os_admin / root user
22:42:02 <vipul> don't know if we have any movement on this
22:42:18 <hub_cap> ok then we can skip it
22:42:19 <steveleon> what is the essence of this?
22:42:25 <vipul> #info SlickNik steveleon grapex to figure this out
22:42:27 <steveleon> or the issue?
22:42:33 <vipul> related to earlier action item update
22:42:40 <hub_cap> yall brought up the need to use a _different_ user than os_admin
22:42:45 <hub_cap> thats the user we use for admin'ing the box
22:42:54 <hub_cap> err the mysql server
22:42:57 <vipul> we may want to stick with that for now
22:42:58 <cp16net> making it a configuration or something
22:43:04 <hub_cap> right
22:43:05 <vipul> until we get everything up and running
22:43:14 <grapex> vipul: Sounds good.
22:43:18 <steveleon> ok slicknik, maybe you can tell me offline why using os_admin doesnt work
22:43:28 <vipul> #info possibly stick to os_admin as admin mysql user, until we have a use case
22:43:52 <SlickNik> sure, let's chat about it offline at #reddwarf after the meeting.
22:43:58 <vipul> #topic Dealing with Redstack
22:44:15 <hub_cap> anything new to add here that hasnt been covered already?
22:44:31 <vipul> don't think so, main thing remaining here is real mode tests
22:44:39 <hub_cap> oh i have 1 thing ill need to update w/ oslo
22:44:43 <cp16net> dealing with it? meaning integrating with devstack?
22:44:49 <hub_cap> no dealing with it :P
22:45:01 <cp16net> oh i deal with it everyday
22:45:06 <hub_cap> haha
22:45:07 <cp16net> so done. :-P
22:45:16 <vipul> i think we still need to fully understand everything that goes on in those scripts
22:45:20 <hub_cap> since the paste config is a new file ill have to update that in redstack, watch for it as a separate commit
22:45:24 <vipul> and we'll keep raising questions
22:45:27 <hub_cap> yup vipul there is _too_ much there
22:45:30 <hub_cap> :D
22:45:31 <cp16net> yup
22:46:08 <vipul> #info we're dealing with it
22:46:09 <steveleon> i've been studying the redstack script while trying to setup reddwarf standalone...
22:46:14 <cp16net> nice :)
22:46:22 <steveleon> maybe i can be of some help
22:46:31 <hub_cap> steveleon: yeesh thats a task
22:47:17 <hub_cap> lol vipul nice info, just noticed that
22:47:21 <hub_cap> ok so we move on to features?
22:47:24 <vipul> anyone have anything else to add?
22:47:30 <cp16net> i have some experience with it so if questions arise i can help where needed
22:47:34 <cp16net> but its just bash
22:47:35 <steveleon> i got devstack and reddwarf running on two different cloud instances. i just need an image to start creating dbs
22:47:39 <SlickNik> nope, good by me.
22:47:40 <cp16net> so that should not be a big deal
22:47:55 <SlickNik> bash what, cp16net? :P
22:47:56 <hub_cap> steveleon: sweet
22:48:01 <vipul> yep, I think the challenge is to figure out what things are absolutely required vs not
22:48:05 <cp16net> bash = break stuff
22:48:06 <cp16net> :-P
22:48:13 <hub_cap> lol
22:48:24 <vipul> #topic feature discussion
22:48:30 <cp16net> yeah there is a lot of fluff that was added to redstack
22:48:45 <hub_cap> soooooo, features
22:48:58 <vipul> one question I had.. is there support for user/tenant level quotas?
22:49:18 <vipul> I think i saw a global quota, but not user level, maybe something we'll need
22:49:18 <hub_cap> ya there is 2 configs
22:49:22 <hub_cap> oh ya
22:49:25 <cp16net> not separately
22:49:26 <hub_cap> its global, sry
22:49:29 <datsun180b> But that's for everyone, and not for individuals.
22:49:30 <cp16net> yeah
22:49:32 <hub_cap> we want it as well
22:49:42 <hub_cap> we might want to look into turnstile
22:49:45 <vipul> #info no user-level quotas, feature #1 add user-level quotas and update management APIs
22:50:00 <vipul> hub_cap, I thought that was more for rate limiting?
22:50:03 <vipul> could be wrong
22:50:16 <hub_cap> https://github.com/klmitch/turnstile
22:50:21 <hub_cap> #link https://github.com/klmitch/turnstile
22:50:36 <vipul> need to setup redis...argh
22:50:37 <hub_cap> i think it might just be rate limiting now
22:50:40 <hub_cap> good call vipul
22:50:46 <SlickNik> what's turnstile?
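The user-level quota gap flagged in the #info above (feature #1) amounts to a per-tenant lookup with global defaults. The sketch below is a minimal illustration of that idea; the resource names, limits, and override table are assumptions, not Reddwarf's actual schema.

```python
# Hypothetical sketch of user/tenant-level quotas layered over global defaults.
DEFAULT_QUOTAS = {"instances": 5, "volumes": 10}    # global defaults (assumed)
TENANT_OVERRIDES = {"tenant-a": {"instances": 20}}  # per-tenant overrides (assumed)

def quota_for(tenant_id, resource):
    # fall back to the global default when a tenant has no override
    overrides = TENANT_OVERRIDES.get(tenant_id, {})
    return overrides.get(resource, DEFAULT_QUOTAS[resource])

def check_quota(tenant_id, resource, in_use, requested=1):
    # True if the request fits under the tenant's effective limit
    return in_use + requested <= quota_for(tenant_id, resource)
```

A management API for quotas would then just read and write the override table rather than the global config.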
22:50:52 <hub_cap> we were talking about integrating repose in the future as well vipul
22:51:05 <hub_cap> so maybe there isnt a need for it _in_ our application
22:51:07 <SlickNik> thx, will look.
22:51:22 <vipul> #info look at Turnstile, repose as possible integration points
22:51:25 <hub_cap> #action chat up w/ the openstack people about whether all projects should maintain their own quotas
22:51:37 <vipul> not sure why this is not a 'common' thing yet
22:51:42 <hub_cap> i honestly dont know what the right answer is
22:51:50 <hub_cap> vipul: welcome to openstack :P
22:51:55 <hub_cap> maybe we can make it a common thing
22:52:12 <vipul> yea would make most sense there
22:52:20 <vipul> k, other features..
22:52:23 <cp16net> yeah someone just needs to say that its common and then it becomes common
22:52:24 <cp16net> :)
22:52:33 <hub_cap> so yall had snapshots defined, was there any plan on moving that in?
22:52:39 <hub_cap> cp16net: no joke
22:52:52 <hub_cap> i think a general roadmap, going well into next year should include
22:53:00 <vipul> Yes, that is still part of the plan.. just not sure when we'll get to it
22:53:12 <hub_cap> 1) snapshots, mycnf edits, rate limiting/quotas, and some semblance of migrations
22:53:19 <hub_cap> oops i forgot to number them
22:53:20 <hub_cap> ahaha
22:53:29 <hub_cap> thats all #1 its a big ole feature
22:53:33 <SlickNik> All those are 1 feature?!?
22:53:38 <SlickNik> heh
22:53:39 <hub_cap> hahaha YES SlickNik
22:53:50 <vipul> what's wrong with migrations..?
22:54:15 <hub_cap> we need to do _extra_ stuff to make sure they work w/ our environment
22:54:23 <hub_cap> and to make sure our guest comes back online, etc..
22:54:32 <hub_cap> blindly calling nova migrate is a bit scary :D
22:54:32 <datsun180b> We're getting there, though
22:54:49 <vipul> #info replication is also part of our roadmap
22:54:55 <hub_cap> most of the work is _def_ nova tho vipul
22:55:16 <vipul> hub_cap.. oh vm migrations.. nvm
22:55:18 <hub_cap> id like to see some code that validates it on our end tho (i mean, there is a step that _has_ to validate it)
22:55:21 <vipul> i'm slow
22:55:27 <hub_cap> vipul: what migration were u talking about?
22:55:30 <vipul> db
22:55:31 <vipul> lol
22:55:39 <vipul> sqlalchemy
22:55:39 <hub_cap> lol
22:55:47 <hub_cap> nice :P
22:56:14 <vipul> honestly we haven't actually even started looking at that
22:56:26 <hub_cap> yes we want to work on replication but thats a big scary task and its on our radar further in the year
22:56:41 <hub_cap> so if u want to team up on it we might be able to devote some resources to research earlier
22:57:03 <hub_cap> lets make sure we all know what features we are working on so we can work on them as a large group, cuz then we all get to do less work :D
22:57:04 <SlickNik> Was the previous list in order of priority?
22:57:08 <dkehn> there will be a lot to replication, cloning, fail-over, master-master, etc.
22:57:09 <vipul> yep, that sounds good, i think we need to have a more formal planning meeting
22:57:16 <hub_cap> SlickNik: umm not 100% ordered but mostly
22:57:39 <vipul> dkehn: i don't think we have to address everything right away, failover doesn't have to be automated for example
22:57:39 <hub_cap> vipul: ya i think so too, maybe we do a google hangout for that and have a nice conversation about it
22:58:05 <hub_cap> dkehn: i agree, replication scares the bejesus out of me due to how hard it is
22:58:09 <dkehn> vipul, ouch manual failover with point-in-time recovery....
22:58:16 <vipul> #action vipul and hub_cap to set up formal roadmap/feature meeting
22:58:22 <yidclare> but people want the replication
22:58:25 <vipul> dkehn: baby steps :)
22:58:26 <dkehn> hub_cap, actually its not that bad
22:58:31 <hub_cap> hell i was telling my mgr today that turning replication on is easy
22:58:44 <esp1> are backups on the road map or is that part of migration?
22:58:52 <hub_cap> esp1: def
22:58:55 <dkehn> it is easy, its what you do when it goes to s@!t
22:58:56 <hub_cap> separate
22:59:00 <hub_cap> yup
22:59:00 <esp1> cool
22:59:04 <hub_cap> and its nicer w/ 5.5 too
22:59:18 <dkehn> semi-sync is very nice
22:59:18 <hub_cap> iirc the agent is no longer single threaded
22:59:25 <dkehn> only one commit lag
22:59:49 <hub_cap> im glad u feel better about it than i do dkehn :D
23:00:01 <hub_cap> last thing i need is the engineers scrambling all the time when it fails!!
23:00:17 <hub_cap> i mean honestly wrt features
23:00:21 <dkehn> thats why it needs to be automated
23:00:30 <hub_cap> we need to do all the big mysql features right? its really about the order and whos working on what
23:00:34 <hub_cap> dkehn: yup :D
23:00:42 <hub_cap> but "automated" is such a fuzzy word
23:00:45 <vipul> dkehn hub_cap we need to have a bigger discussion on this
23:00:51 <hub_cap> vipul: absolutely
23:01:03 <hub_cap> we could go on for hrs just on replication :D
23:01:11 <cp16net> yup
23:01:12 <hub_cap> also, we might want to consider some sort of scheduler
23:01:26 <hub_cap> not sure if its going overboard...
23:01:28 <hub_cap> but my thought is this
23:01:51 <hub_cap> we need to have time windows to do things like backups, snapshots, etc... so as to not overwhelm the entire environment
23:02:16 <hub_cap> so there needs to be some arbiter of that logic somewhere
23:02:20 <vipul> agreed
23:02:32 <vipul> although scheduler types of features may be a bit further out
23:02:37 <vipul> do we have any requirements now?
23:02:44 <hub_cap> we can start it _easy_ but my guess will be that the scheduler will be pretty generic in openstack so we can start using it
23:02:45 <hub_cap> no not now
23:02:47 <hub_cap> def future
23:02:53 <esp1> wonder what the cool kids are using for a scheduler in python these days.
23:03:01 <hub_cap> esp1: roll your own :D
23:03:03 <vipul> cron?
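hub_cap's time-window arbiter for backups and snapshots could start as small as the sketch below. The window table, task names, and UTC times are assumptions chosen for illustration, not an agreed design.

```python
# Minimal sketch of the maintenance "time window" arbiter discussed above.
# Windows and task names are illustrative assumptions.
from datetime import time

MAINTENANCE_WINDOWS = {
    "backups":   (time(2, 0), time(4, 0)),   # 02:00-04:00 UTC
    "snapshots": (time(4, 0), time(5, 0)),   # 04:00-05:00 UTC
}

def in_window(task, now):
    # True if `now` (a datetime.time) falls inside the task's allowed window
    start, end = MAINTENANCE_WINDOWS[task]
    return start <= now < end
```

A scheduler daemon would consult something like `in_window("backups", now)` before dispatching work, so bulk operations never pile up outside their slot.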
23:03:07 <hub_cap> the #openstack way
23:03:08 <esp1> lol
23:03:17 <cp16net> lol
23:03:35 <hub_cap> i dont think its cron
23:03:38 <hub_cap> cron is fuzzy too
23:03:46 <hub_cap> i dont want to rely on external systems either
23:03:53 <cp16net> its a python daemon
23:03:57 <hub_cap> ive seen NASTY cron triggered hacky systems
23:04:01 <esp1> gotcha :)
23:04:15 <vipul> https://github.com/rackspace/Tempo
23:04:24 <hub_cap> but ya there was talk about making a generic scheduler
23:04:37 <hub_cap> vipul: now that makes sense
23:04:38 <vipul> anyone know if that's actually a real active thing?
23:04:39 <hub_cap> for an end user
23:04:48 <SlickNik> monitoring and managing cron jobs is a whole can of worms in itself.
23:04:53 <hub_cap> SlickNik: ya dude
23:05:02 <hub_cap> vipul: im not privy to the upper echelon of things @rax
23:05:05 <hub_cap> i havent even heard of that
23:05:18 <hub_cap> but its >8mo old
23:05:25 <vipul> k, we'll forget we ever saw it ;)
23:05:30 <hub_cap> ya
23:05:42 <vipul> k running over time
23:05:46 <hub_cap> but srsly the openstack guys were talking about the need for a generic scheduler framework
23:05:55 <hub_cap> since cinder, nova, and others will be using it
23:05:58 <esp1> sorry to take things off course...
23:06:00 <vipul> would be nice
23:06:03 <hub_cap> so once we need to, we will ride that wave
23:06:08 <vipul> any timeframe?
23:06:12 <hub_cap> nope
23:06:15 <vipul> figures
23:06:18 <hub_cap> yup
23:06:24 <vipul> #topic Open Discussion
23:06:29 <SlickNik> We don't need it right away; perhaps they will have something by the time we need it?
23:06:36 <vipul> anything else we need to discuss?
23:06:44 <hub_cap> #info prereq for migrations - grouping of instances
23:06:51 <hub_cap> ^ ^ we need to start thinking about that in general
23:07:15 <hub_cap> before we go dive into things that require > 1 server to be grouped together
23:07:15 <vipul> is the host_id not reliable?
23:07:29 <hub_cap> how do we determine Master X, Slave Y
23:07:29 <vipul> meaning can't we lookup host_id of each instance to group things?
23:07:35 <hub_cap> we need a mapping of sorts
23:07:41 <vipul> i see
23:07:45 <hub_cap> and id like to make it generic enough to support other things like a hadoop setup
23:08:07 <hub_cap> ok thats my random open discussion
23:08:11 <SlickNik> well, we need some sort of collection of hosts by id, I guess.
23:08:16 <vipul> #info consider tagging instances
23:08:23 <esp1> Has anyone run into issues running 12.04 w/ redstack? I have a pending task on our end that I'd like to close.
23:08:46 <vipul> i haven't tried yet on a fresh vm
23:08:57 <vipul> hub_cap you were making some mods
23:09:06 <SlickNik> I think the last one that hub_cap hit was upping mysql to 5.5... IIRC...
23:09:08 <vipul> 12.04 good to go?
23:09:16 <esp1> I've tried on VMWare Fusion and our HPCloud instance...
23:09:28 <hub_cap> i have a fresh vm
23:09:31 <hub_cap> and everything is good now
23:09:40 <esp1> cool, thx!
23:09:42 <hub_cap> esp1: do u have the issuez?
23:09:45 <hub_cap> if so chat w/ me
23:09:55 <hub_cap> iirc i fired it up and ran it
23:09:59 <vipul> #info 12.04 support completed
23:10:27 <esp1> hub_cap: I think it's just a lingering task that we kept around from last sprint. But I think we got it resolved.
23:10:44 <vipul> ddemir is working on getting some unit tests into python-reddwarfclient
23:11:01 <hub_cap> esp1: sweet
23:11:26 <vipul> anyone got anything else?
23:11:38 <SlickNik> good by me.
23:11:59 <vipul> k, thanks for attending folks
23:12:07 <dkehn> good night all
23:12:10 <esp1> thx!
23:12:24 <vipul> dkehn: drink a few margaritas for us
23:12:31 <vipul> in mexico
23:12:33 <dkehn> I am so there
23:12:59 <SlickNik> heh, have fun dkehn. :)
23:13:03 <SlickNik> thanks all, see you around in #reddwarf!
23:13:05 <hub_cap> nice :P
23:13:13 <hub_cap> woah
23:13:19 <hub_cap> we dont have a ton of action items!!!
23:13:38 <vipul> did we miss any?
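The instance-grouping prereq from open discussion (mapping Master X / Slave Y, generic enough for a hadoop-style setup) could begin as a simple role mapping like this sketch; the group IDs, role names, and structure are assumptions for illustration only.

```python
# Sketch of the "grouping of instances" mapping discussed above.
# Group/role names are illustrative assumptions.
groups = {}

def tag_instance(group_id, instance_id, role):
    # record that an instance plays `role` (e.g. "master", "slave") in a group
    groups.setdefault(group_id, {}).setdefault(role, []).append(instance_id)

def members(group_id, role):
    # list the instances holding `role` in the group (empty if none)
    return groups.get(group_id, {}).get(role, [])
```

Because roles are free-form strings rather than a fixed master/slave pair, the same mapping could tag, say, hadoop "namenode"/"datanode" instances without changes.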
23:13:51 <vipul> if not ending meeting
23:13:51 <hub_cap> nope
23:13:56 <hub_cap> im just surprised :P
23:14:03 <vipul> #endmeeting