22:00:14 <hub_cap> #startmeeting reddwarf
22:00:17 <openstack> Meeting started Tue Dec 18 22:00:14 2012 UTC.  The chair is hub_cap. Information about MeetBot at http://wiki.debian.org/MeetBot.
22:00:18 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
22:00:20 <openstack> The meeting name has been set to 'reddwarf'
22:00:21 <hub_cap> hi all
22:00:27 <datsun180b> howdy howdy howdy
22:00:41 <cp16net> hai
22:00:51 <vipul> hey
22:00:53 <esp1> hello
22:01:10 <imsplitbit> greetings
22:01:27 <cp16net> \m/
22:01:31 <hub_cap> ok so, lets start w/ the action items!
22:01:36 <hub_cap> #link http://eavesdrop.openstack.org/meetings/reddwarf/2012/reddwarf.2012-12-11-22.00.html
22:01:42 <hub_cap> #topic action items
22:01:50 <hub_cap> SlickNik: perfect timing
22:01:57 <hub_cap> 1) SlickNik to discuss w/ HP about os_admin
22:02:00 <SlickNik> hello everyone.
22:02:13 <hub_cap> we dont have time for hellos SlickNik (JK hehe)
22:02:17 <hub_cap> sup
22:02:27 <SlickNik> We discussed that we will go with os_admin for now; and switch to root later if needed.
22:02:35 <SlickNik> Haven't really gone beyond that point.
22:02:54 <SlickNik> (Or found a reason necessary to switch away from os_admin for now)
22:03:00 <hub_cap> cool, that works. lets say thats _official_ then
22:03:07 <SlickNik> okay, sounds good.
22:03:31 <hub_cap> #info no need to config a value to change os_admin to another user for now
22:03:44 <hub_cap> re quotas, which i assume is mine
22:03:56 <hub_cap> repose supports rate limits and _absolute_ limits
22:04:02 <hub_cap> which is their version of a quota
22:04:20 <vipul> hub_cap, can it also manage quotas... like set/delete
22:04:29 <vipul> or just enforce
22:04:56 <hub_cap> yup. i believe we are going to move forward w/ that approach for now. i know other teams @ rax use it, and it keeps heavy lifting out of the infra
22:05:22 <hub_cap> im not sure if it means we remove existing limits or not tho.... i know nova still does _some_ limit enforcing
22:05:59 <hub_cap> so if hp wants to not use repose im cool w/ adding limits/quotas to the app as well in some way
22:06:31 <vipul> k, we still need to take a closer look at it
22:06:36 <hub_cap> sry my lappy just freaked on me...
22:06:38 <vipul> i think it's worth filing a blueprint
22:06:46 <vipul> and we can decide which approach to take
22:06:49 <hub_cap> sure vipul ill let you make that call
22:07:00 <vipul> #action vipul to file blueprint on quota support in Reddwarf
22:07:19 <hub_cap> that takes care of #2 and #3, and re 4 and 5, we arent doing anything, so skipping them for now
22:07:19 <hub_cap> :P
22:07:38 <SlickNik> sounds good :)
22:07:47 <hub_cap> re #6 we still have not come to consensus on that, have yall talked to mordred re multiple images
22:08:27 <hub_cap> judging from what u said vipul in #reddwarf, id assume no not yet
22:08:28 <vipul> I don't know if we have a good idea where the percona bits will live yet.. we are going to push up the 'vanilla' flavor into reddwarf to get things started
22:08:36 <hub_cap> good deal
22:08:42 <vipul> we still need to find a home for percona flavor
22:08:44 <hub_cap> #action hub_cap SlickNik vipul to discuss w/ mordred the implications of multiple images housed in reddwarf and how we would do it
22:08:53 <hub_cap> lets still chat about it sometime this wk or next
22:08:58 <hub_cap> sound good?
22:09:05 <vipul> yep
22:09:21 <SlickNik> Yeah, the current POR is to put the community mysql elements in reddwarf-integration.
22:09:50 <hub_cap> SlickNik: vanilla mysql?
22:10:03 <SlickNik> yup
22:10:07 <vipul> yep, community = vanilla
22:10:10 <vipul> :p
22:10:28 <hub_cap> cool. as per #7, it looks like i have a few small changes to make (thx cp16net) and then another review cycle
22:10:37 <hub_cap> id like to get it merged in, and i dont think grapex has looked @ it yet
22:10:50 <vipul> how do the integration tests look against it?
22:10:53 <hub_cap> i also want to run the full suite of tests to see how far they will get to make sure im not doing something dumb
22:11:01 <hub_cap> vipul: ive only run simple-tests so far
22:11:06 <hub_cap> and they passed
22:11:09 <vipul> woohoo
22:11:18 <vipul> that's all we've been able to get passing anyway :)
22:11:23 <grapex> hub_cap: Sorry... I'll review soon.
22:11:30 <hub_cap> lol :) grapex said hes been able to get thru the migration calls
22:11:31 <grapex> So a quick note about the simple tests
22:11:38 <hub_cap> grapex: plz do, cuz i can always +2 it from our side ;)
22:11:48 <grapex> hub_cap: I got mostly through resize once, before my VM crapped out.
22:12:16 <cp16net> i've had nothing but trouble
22:12:25 <cp16net> maybe thats because its my middle name....
22:12:26 <grapex> So I checked in a change to the tests which keeps the guest mgmt apis from being called. I think that was what was causing one of the failures when the "blackbox" group was run.
22:12:32 <hub_cap> yes the qemu tests are ungodly slow
22:12:35 <hub_cap> cp16net: nice
22:12:35 <cp16net> but tests are very inconsistent for me
22:12:48 <esp1> yeah I'm sure we could add a few more tests to the simple ones if needed.
22:12:48 <grapex> As soon as we pass the current roadblocks, it seems that we should go back to "blackbox" and see how far we can get.
22:12:56 <hub_cap> grapex: so in theory --group blackbox should get me pretty far eh?
22:13:04 <grapex> Yes. I witnessed it!
22:13:05 <hub_cap> vipul: are yall running i7s?
22:13:09 <esp1> grapex: cool I'll check them out today.
22:13:10 <grapex> Just once! Then my VM caught on fire.
22:13:28 <vipul> we're running on cloud instances
22:13:36 <cp16net> yeah all the hp'ers are running i7 ssd 16gb...
22:13:50 <cp16net> at least all those that i am jealous of...
22:13:52 <cp16net> :)
22:14:03 <vipul> only a couple people running locally in vmware
22:14:05 <esp1> yep, I think most of us are on i7's
22:14:09 <datsun180b> nice
22:14:10 <grapex> vipul: How are you syncing files from your local working directories with the cloud instances?
22:14:22 <annashen> i7 maybe, not ssd
22:14:34 <hub_cap> ah vipul its possible networking gets jacked on cloud instances...we had tons of problems on cloud
22:14:35 <SlickNik> rsync over ssh for me.
22:14:39 <hub_cap> ymmv :)
22:14:43 <vipul> we're not :) -- steveleon has spent some time deploying reddwarf separate from the rest
22:14:46 <jcooley> annashen: you can replace your HD with an SSD.
22:14:59 <hub_cap> jcooley: can u send me one too?
22:15:04 <cp16net> SlickNik: oh so you are not using shared folders to the vm
22:15:06 <vipul> lol
22:15:07 <esp1> lol
22:15:22 <hub_cap> ok so... the speed issue is an issue
22:15:30 <vipul> jcooley, hub_cap, we'd like to personally deliver it to austing
22:15:34 <vipul> austin
22:15:35 <jcooley> hub_cap: think it's just DevStack on a cloud instance
22:15:37 <hub_cap> #action hub_cap to look into slowness issues w/ qemu
22:15:42 <esp1> cp16net: I spent some time trying to get shared folders to work on VMware fusion, so far no-go
22:15:58 <hub_cap> jcooley: ok but im still not 100% sure that a ton of guest stuff is exercised in devstack
22:16:01 <jcooley> hub_cap: :)
22:16:08 <cp16net> esp1: yeah you have to manually install the vmware tools
22:16:09 <hub_cap> vipul: plz do
22:16:11 <cp16net> then it will work
22:16:25 <hub_cap> for instance, the agent is calling to a apt repo and doing other things
22:16:25 <jcooley> hub_cap: we are looking at guest coverage
22:16:29 <cp16net> #link https://help.ubuntu.com/community/VMware/Tools
22:16:30 <SlickNik> cp16net: Nah, I'm using Virtualbox and shared folder support is flaky. Moreover rsync works with cloud instances that I have as well; so far it's been pretty robust.
22:16:33 <hub_cap> and if networking is working "just enough" then....
22:16:50 <hub_cap> ill look @ it on a cloud server too, but it took extra stuff to get it working
22:16:53 <hub_cap> iirc
22:16:54 <esp1> cp16net: yeah I did that a bunch.  I'll have to ping you sometime to see if I'm doing something wrong
22:17:02 <hub_cap> even w/ devstack
22:17:09 <cp16net> yeah i get it working but then i think its caused other issues...
22:17:19 <hub_cap> SlickNik: vbox is terrible man... :)
22:17:24 <cp16net> i've got a few snapshots i go back to
22:17:24 <jcooley> SlickNik: I use Virtualbox and Dropbox
22:17:24 <hub_cap> we had tons of problems w/ it
22:17:30 <jcooley> Dropbox has a linux client...
22:17:42 <vipul> that's not a bad idea :)
22:17:44 <hub_cap> ok so back to topic
22:17:48 <cp16net> ok i think we have digressed...
22:17:51 <cp16net> yeah
22:17:56 <hub_cap> ill make sure i run some more tests and look into the slowness this wk in vbox
22:17:57 <hub_cap> err
22:17:58 <hub_cap> vmware
22:18:12 <hub_cap> ok that takes us to the end of our action items
22:18:22 <hub_cap> #topic testing updates
22:18:53 <vipul> let's get those reviews merged!
22:19:13 <hub_cap> well lets chat about the one that we are confused on
22:19:25 <hub_cap> https://review.openstack.org/#/c/18285/
22:19:41 <hub_cap> why exactly are we removing it?
22:19:43 <vipul> esp ^?
22:19:48 <hub_cap> esp1: ^ ^
22:19:57 <grapex> esp1: I know this one! :)
22:20:03 <hub_cap> grapex: GO
22:20:11 <esp1> the test is coded in a way that it gets ignored if the key/value are omitted
22:20:13 <grapex> IIRC So it seems that when volume support is disabled, this test is failing.
22:20:46 <grapex> esp1: I think the question is, why ignore the test?
22:20:47 <esp1> I just wanted to turn it off by default to get the happy path going
22:20:57 <hub_cap> im not sure i like disabling more tests :)
22:21:04 <hub_cap> weve turned lke _all_ of them off
22:21:07 <hub_cap> *like
22:21:07 <esp1> I was going to circle back and address it in another bug
22:21:14 <esp1> true.
22:21:15 <hub_cap> ok can we just address it then?
22:21:16 <cp16net> thats how you get them passing ;)
22:21:18 <grapex> I've been able to run it. I believe the issue is that it fails when volume support is disabled.
22:21:21 <hub_cap> cp16net: lol
22:21:25 <hub_cap> grapex: not exactly
22:21:27 <esp1> sure.
22:21:30 <hub_cap> ive had it fail sporadically on me
22:21:37 <hub_cap> vipul witnessed it w/ me yesterday
22:21:46 <hub_cap> it failed first time, and passed 2x times after
22:21:51 <esp1> yeah it's very inconsistent
22:21:52 <hub_cap> volume support was the same
22:21:58 <hub_cap> seems a bug worth fixing
22:22:00 <vipul> so volume_support is enabled by default and _should_ be working
22:22:07 <vipul> why are the tests related failing?
22:22:07 <grapex> I agree we shouldn't disable it. The test is probably finding a real bug.
22:22:12 <hub_cap> grapex: yup
22:22:22 <hub_cap> vipul: those tests get skipped
22:22:30 <hub_cap> if a dependent test fails
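[editor's note] A minimal illustration of the skip-on-failed-dependency behavior hub_cap describes (proboscis expresses this with `@test(depends_on=...)`); the tiny runner and test names below are hypothetical stand-ins, not Reddwarf code:

```python
# Sketch: a runner marks every test whose prerequisite did not pass as SKIP,
# which is why a single early failure cascades into many skipped tests.

def run_suite(tests):
    """tests: list of (name, func, depends_on) tuples, in declaration order."""
    results = {}
    for name, func, depends_on in tests:
        # Skip a test when any prerequisite is missing, failed, or skipped.
        if any(results.get(dep) != "PASS" for dep in depends_on):
            results[name] = "SKIP"
            continue
        try:
            func()
            results[name] = "PASS"
        except Exception:
            results[name] = "FAIL"
    return results

def create_instance():
    raise RuntimeError("expected OverLimit was never raised")  # the bug under discussion

def resize_instance():
    pass  # never reached: depends on create_instance

results = run_suite([
    ("create_instance", create_instance, []),
    ("resize_instance", resize_instance, ["create_instance"]),
])
print(results)
```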
22:22:53 <esp1> so the 'real' bug seems to be that the client expects an OverLimit exception to be thrown...
22:23:15 <esp1> but it never is so I was thinking maybe the validation in the server code is whack.
22:23:31 <hub_cap> sounds like u got a good handle on it esp1, u gonna tackle it?
22:23:38 <esp1> sure
22:23:41 <hub_cap> tight
22:23:45 <hub_cap> so lets abandon that review
22:23:49 <SlickNik> sweet
22:23:52 <hub_cap> next, https://review.openstack.org/#/c/18282/
22:24:01 <hub_cap> ive got a -1 on that, is it being worked on?
22:24:07 <hub_cap> lol also esp1
22:24:54 <cp16net> yeah good point hub_cap
22:24:56 <vipul> hub_cap so you want a fake_mode test, but that change is to test code
22:24:58 <esp1> this one is to avoid a null pointer in the test code
22:25:15 <hub_cap> are u kidding me am i that much of a moron
22:25:17 <esp1> I was gonna write you a fake mode test in reddwarf
22:25:19 <cp16net> lol
22:25:42 <grapex> On a related idea, do we want to run through the tests multiple times, in some cases disabling volume support?
22:26:06 <vipul> grapex: yes, was wondering how we'll exercise all code paths
22:26:07 <hub_cap> thast not a bad idea grapex
22:26:07 <esp1> yeah it's a bit tricky running the test multiple times..
22:26:12 <SlickNik> @grapex: I think that would be a good idea. I don't see why you wouldn't want to do that.
22:26:16 <esp1> most tests are not idempotent
22:26:17 <grapex> At Rax we don't disable volumes. If you want to run RD like that, I bet we could find a ton of bugs if we just run the tests once in fake mode.
22:26:28 <esp1> well maybe not most, but the ones that create instances
22:26:50 <grapex> It doesn't matter. Just change the config and run the current tox tests again.
22:26:53 <hub_cap> esp1: this is true. the cleanup does not clean up properly
22:27:01 <hub_cap> grapex: u mean fakemode / unit only
22:27:03 <hub_cap> not integration
22:27:09 <vipul> i think we should try to hit all code paths, regardless of how we end up running in prod
22:27:16 <hub_cap> vipul: +1
22:27:28 <esp1> yeah that's fine
22:27:28 <grapex> hub_cap: Yes. But the integration tests in fake mode will weed out all the big things.
22:28:08 <hub_cap> that seems pretty easy. we should be able to seed values in the run_tests quite easily in tox
22:28:55 <vipul> anyone know how Nova/other projects do this?
22:29:09 <grapex> hub_cap: Quick note: even if coverage is 100%, the integration tests could still find cases in fake mode where disabling volumes will break the code.
22:29:50 <hub_cap> vipul: nova only seems to care about unit tests :)
22:30:01 <vipul> they do have tempest though right
22:30:07 <hub_cap> ya is that working?
22:30:09 <grapex> vipul: Unit tests and more unit tests. But if you turn off something like volumes, which code paths might expect to be present, the unit tests won't catch it, since they set such configuration values in each setup or teardown.
22:30:17 <vipul> wonder if they toggle flags and do multiple runs?
22:30:52 <hub_cap> vipul: dunno...
22:30:52 <vipul> supposedly those are running in the devstack-vm-tempest gates
22:30:53 <esp1> yeah I think volume support should be defaulted as True if it is indeed working
22:31:25 <hub_cap> esp1: im cool w/ that too, but i dont want volume support false to end up buggy
22:31:32 <hub_cap> not everyone can afford a hp san :P
22:31:54 <hub_cap> we can tho!!! and its nice
22:31:57 <esp1> hub_cap: agreed
22:31:58 <grapex> Honestly, I think if we just run the fake mode tests twice we'll find everything wrong with volume_support = False.
22:32:00 <vipul> we sure can't ;)
22:32:07 <cp16net> lolz
22:32:11 <grapex> Then we clean up edge cases with extra unit tests.
22:32:15 <hub_cap> vipul: hahah
22:32:23 <cp16net> grapex: thats a good point
22:32:27 <hub_cap> grapex: its decided, we need a way to run thru tests 2x
22:32:28 <grapex> Our Jenkins build currently runs the tests in several configurations.
22:32:29 <hub_cap> whos on it?
22:32:29 <cp16net> and fake mode tests are quick
22:32:41 <vipul> grapex: so you're saying run once with volume suppor and once without?
22:32:45 <grapex> The issue is we run that on a Cloud Server. We could do that for the public right now by adding it to the tox file
22:32:51 <grapex> vipul: Yes.
22:33:00 <SlickNik> both in fake mode?
22:33:02 <cp16net> yes
22:33:03 <grapex> Yes.
22:33:11 <grapex> Now we still need unit tests, but I've honestly found a lot of bugs this way.
22:33:20 <SlickNik> And for the real mode integration tests, we run only with volume support ON
22:33:25 <grapex> Well
22:33:35 <grapex> That's what I'd prefer
22:33:41 <hub_cap> SlickNik: im ok w/ that for now. unless we have unlimited resources :D
22:34:04 <hub_cap> so whos tackling that?
22:34:10 <vipul> yea i think we'll have to defer testing all flags for now, at least in real mode
22:34:10 <grapex> but in the utopian future where both our companies are tied into Gerrit, we'll have other jobs that run in real mode with different configurations.
22:34:26 <hub_cap> or whos tackling the bug making of it?
22:34:35 <vipul> i can take it
22:34:47 <hub_cap> cool vipul
22:34:51 <vipul> although it sounds eerily similar to what esp1 took
22:34:51 <hub_cap> esp1: https://review.openstack.org/#/c/18282/ failed pep8
22:35:00 <grapex> vipul: Let me know if you need anything.
22:35:11 <vipul> grapex, exp1: k, i'll work with both of you guys
22:35:26 <vipul> #action vipul to investigate volume_support on/off in fake mode
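[editor's note] A sketch of the double-run idea captured in that action item: drive the fake-mode suite once per volume_support value. The `run_tests.py` entry point and `--override` flag are hypothetical placeholders for whatever the tox target actually invokes:

```python
# Build one fake-mode invocation per configuration value, so the same suite
# runs with volume support enabled and then disabled.

def fake_mode_runs(flag="volume_support", values=(True, False)):
    """Return one test-runner command line per value of the toggled flag."""
    return [
        f"python run_tests.py --group=blackbox --override={flag}={str(v).lower()}"
        for v in values
    ]

for cmd in fake_mode_runs():
    print(cmd)
```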
22:35:27 <esp1> hub_cap: k, I'll fix it up
22:35:27 <hub_cap> cool action item it up vipul so we dont dup it
22:35:32 <hub_cap> :)
22:35:44 <hub_cap> anything else re testing?
22:36:05 <SlickNik> Nothing else from my end.
22:36:07 <vipul> how about a couple of the reviews for the client and guestagent
22:36:15 <vipul> https://review.openstack.org/#/c/17867/
22:36:16 <cp16net> not sure if issues i have run into are specific to testing or env
22:36:18 <SlickNik> regarding tests, I mean.
22:36:28 <vipul> this is waiting on grapex
22:36:32 <cp16net> i'll bring it up later in chan if it is
22:36:56 <hub_cap> grapex: plz go thru the reviews today sir
22:36:58 <hub_cap> #topic image updates
22:37:06 <steveleon> we are still chuckling along with the guestagents tests
22:37:15 <esp1> cp16net: I have not been able to successfully re-clone an existing redstack install
22:37:17 <hub_cap> steveleon: laughing? :P
22:37:18 <steveleon> attacking dbaas and pky
22:37:26 <SlickNik> I think he means chugging* :)
22:37:29 <juice> steveleon: hopefully you mean chugging
22:37:31 <hub_cap> i know i was being silly
22:37:37 <hub_cap> :D
22:37:38 <steveleon> yes.. chugging .. haha
22:37:50 <SlickNik> Although a good laugh is seldom a bad thing.
22:37:51 <cp16net> chugging what?
22:37:57 <hub_cap> hahah BEEEERRRRZZZZ
22:37:58 <vipul> some of that eggnog
22:38:04 <hub_cap> or that, spiked
22:38:07 <cp16net> lolz :)
22:38:13 <steveleon> im also trying to get unittests to run on testr...
22:38:19 <hub_cap> steveleon: thats cool
22:38:22 <grapex> hub_cap vipul: I 'll look through that review today
22:38:24 <hub_cap> id like to see that a-working
22:38:27 <hub_cap> grapex: <3
22:38:28 <steveleon> getting some name '_' is not defined when it is trying to import common.exception
22:38:41 <hub_cap> steveleon: thatll be fixed w/ the new oslo
22:38:49 <hub_cap> _ is actually a function in oslo
22:38:53 <hub_cap> gettextutils
22:39:01 <vipul> hub_cap any idea how to run tests in IDE
22:39:05 <hub_cap> from reddwarf.openstack.common.gettextutils import _
22:39:08 <steveleon> ok.. so if i merge your patch, i should be good, right?
22:39:12 <hub_cap> steveleon: aye
22:39:13 <vipul> seems like the run_tests does something to register that _
22:39:29 <vipul> if you want to run it independently, you really can't
22:39:33 <grapex> vipul: It does
22:39:35 <hub_cap> vipul: hmmm havent tried in ide, but id say try w/ the oslo stuff vipul
22:39:37 <SlickNik> Well, that was easy...
22:39:42 <grapex> If you look at Nova and other projects, they do similar things to register that stuff.
22:39:49 <grapex> Before their tests run
22:39:55 <grapex> I think it's usually in the __init__.py file
22:39:59 <vipul> k, so running outside of that is not a good idea
22:40:26 <hub_cap> well i think thats an artifact of the past in regard to _
22:40:40 <hub_cap> i used to get those errors when just importing classes from the repl
22:40:46 <hub_cap> doesnt happen anymore w/ the new updates
22:41:07 <vipul> k, nice -- that might fix it then
22:41:16 <hub_cap> yup i believe it will vipul
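[editor's note] A self-contained sketch of the `_` issue discussed above: the legacy pattern installs the gettext marker as a builtin before tests run (typically from a tests/__init__.py harness), while the newer oslo pattern has each module import it explicitly. The reddwarf import is shown as a comment; a local stand-in keeps the sketch runnable on its own:

```python
import gettext

# Legacy pattern: register `_` globally before any project module is imported.
# Modules that rely on this builtin raise NameError: name '_' is not defined
# when imported from a REPL or IDE runner that never called install().
gettext.install("reddwarf")  # installs `_` into builtins for the whole process

# Newer oslo pattern: each module imports its own marker instead, e.g.
#   from reddwarf.openstack.common.gettextutils import _
# Stand-in with the same no-translation fallback behavior:
def _(msg):
    return msg

class GuestError(Exception):
    """Example module-level usage that needs `_` at import time."""
    message = _("An error occurred on the guest.")

print(GuestError.message)
```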
22:41:20 <steveleon> run_tests runs tests with proboscis. So im trying to run them outside
22:41:22 <hub_cap> pull down my patchset and test it out once
22:41:35 <vipul> yep, on my list
22:41:38 <steveleon> ok.. will try your patch after the meeting
22:41:49 <hub_cap> coolness
22:41:53 <hub_cap> so back to images
22:41:55 <grapex> So I'd like to ask, why are we hurried to move to testr? The big difference between it and Nose is speed, but we don't have a speed issue currently. Is the CI team planning on making every other method of testing incompatible?
22:42:10 <hub_cap> grapex: we talked about using it for unit tests
22:42:16 <steveleon> ahh.. wasnt aware we were on the wrong topic
22:42:18 <vipul> grapex: i do agree i think it should be a little lower priority
22:42:19 <hub_cap> and still using proby for fakemode/integration
22:42:33 <vipul> since we still need to get test coverage! :)
22:42:35 <hub_cap> its up to yall if u want to move forward w/ it...its not _necessary_ now
22:42:47 <steveleon> im just using testr on unittests.
22:42:55 <grapex> hub_cap: I know. My only concern is that the two methods will be in conflict.
22:43:05 <hub_cap> it might even be in oslo by the time we go that route
22:43:16 <grapex> IIRC we can use testtools, which is what testr uses, so we should be good.
22:43:18 <SlickNik> I thought we were only moving to it with _new_ tests; not necessarily with old ones we already have.
22:43:19 <hub_cap> hmm lets discuss offline grapex. id like to understand this conflict
22:43:22 <vipul> steveleon: are we replacing the proboscis backend to invoke testr?
22:43:34 <hub_cap> vipul: not a good idea
22:43:38 <steveleon> yes... im bypassing proboscis
22:43:45 <hub_cap> oh u mean for unit only
22:43:48 <steveleon> just for unit tests though
22:43:54 <hub_cap> for fakemode/integration tests its not a good idea to bypass proby
22:44:01 <hub_cap> ps proby == proboscis
22:44:12 <annashen> talking about speed, is this change meant to be merged in the near future? https://review.openstack.org/#/c/17561/
22:44:17 <grapex> Ok. I just wanted to know if the CI team was making it mandatory or stuff was going to stop working.
22:44:20 <annashen> how soon it will be merged?
22:44:43 <grapex> annashen: I need to review. :(
22:44:43 <hub_cap> annashen: there are a few small updates to make
22:44:43 <annashen> sorry if it is too far away from image update
22:44:49 <hub_cap> and id like to run thru integration tests
22:44:57 <hub_cap> and of course, grapex needs to review ;)
22:44:57 <SlickNik> @grapex Don't think it's mandatory yet. 'Twas just a recommendation…
22:44:59 <hub_cap> its fine annashen
22:45:10 <grapex> SlickNik: Ok.
22:45:23 <vipul> back to image?
22:45:27 <hub_cap> so vipul did hp make much progress w/ images in reddwarf
22:45:31 <vipul> juice?
22:45:42 <steveleon> i think we need to talk more about testr. Perhaps it is best to hold on it for now until we all decide what is best to do in this moment
22:45:44 <juice> yes - ready for integration with redstack
22:46:02 <juice> so the elements will be include in redstack
22:46:12 <annashen> i see, thanks y'all for update
22:46:13 <juice> redstack clones disk-image-builder
22:46:23 <dkehn> and devstack/lib/reddwarf?
22:46:28 <juice> copies the elements into disk-image-builder and then invokes the build
22:47:04 <juice> dkehn: was that a question for me?
22:47:15 <dkehn> in general
22:47:31 <SlickNik> Vipul and I were discussing whether diskimage-builder needs to be a part of devstack.
22:47:38 <SlickNik> And we thought it really didn't.
22:47:45 <dkehn> juice, if you can answer cool, else I believe SlickNik can
22:48:10 <SlickNik> Belonged more to the test setup _after_ devstack in our gate process.
22:48:25 <vipul> i think we need to extend that conversation with folks here.. the gist of it is.. Devstack with the Reddwarf flags will install all of reddwarf.. but should it also build the image
22:48:37 <hub_cap> juice: do u know how long it takes to run the new diskimage builder? just curious... i think the old one took 5<x<10 min
22:48:54 <hub_cap> vipul: how does it work for the other projects?
22:49:05 <juice> I can time it but your equation there is about right
22:49:07 <vipul> hub_cap: so i don't think any other project creates images
22:49:12 <grapex> Is Heat in devstack yet?
22:49:19 <vipul> they just grab the cirros ones
22:49:23 <juice> it is pretty consistent and depends on bandwidth for pulling down packages
22:49:34 <hub_cap> ahh... i thought monty did some image building stuffs w/ ci
22:50:00 <vipul> not ringing a bell, i'll have to ping him offline
22:50:01 <dkehn> Ci pretty much uses cirros as well
22:50:03 <juice> apparently having a apt cache can reduce the time tremendously
22:50:16 <SlickNik> Yeah, we planned to run this by the meeting today and get all your thoughts on it.
22:50:30 <hub_cap> juice: yup but thats not a simple process
22:50:55 <juice> hub_cap: just an option
22:50:59 <hub_cap> well it is... but its just as slow if ure running some sort of proxy in a teardown env
22:51:00 <SlickNik> @grapex, yes heat is already part of devstack
22:51:03 <juice> I'll get some timings for you
22:51:13 <hub_cap> juice: yup i looked into it when i was doing the qemu image builder
22:51:16 <hub_cap> its def slow....
22:51:19 <grapex> SlickNik: Just wondering if they made images, vipul said they just run cirros.
22:51:21 <dkehn> if the image is just a testing issue then lets build image as part of the test startup
22:51:28 <SlickNik> FWIW, None of the components in devstack today build any images.
22:51:39 <SlickNik> using diskimage-builder.
22:51:52 <vipul> i guess the question also is what does a "complete install of reddwarf" mean
22:51:52 <grapex> dkehn: is it really though? Nova gives you a working Nova install. Reddwarf really doesn't work without some kind of image with a guest.
22:52:03 <cp16net> there is some special sauce in the built image to have the ssh key
22:52:13 <grapex> Seems like even though building an image in devstack is different, its in the spirit of devstack.
22:52:20 <cp16net> so its magical being able to get into the instance from your host vm
22:52:29 <hub_cap> lol if its bash its in the spirit of devstack :P
22:52:35 <SlickNik> lolol
22:52:39 <vipul> grapex: yes, that's the thing -- we may want to consider it for devstack, if reddwarf is truly useless without that image
22:52:40 <grapex> Although I don't have strong feelings about it, we can keep it in RedStack.
22:52:45 <hub_cap> i wonder if there is a way to inject a guest into the cirros image post install
22:52:55 <hub_cap> we might want to look @ alternative ways to "boot" so to speak
22:53:10 <vipul> add another upstart job
22:53:14 <vipul> that does the sync
22:53:16 <dkehn> I vote for the image, talking of devstack
22:53:20 <grapex> hub_cap: Then we build a "fake" MySQL that the guest talks to... and we have yet another version of fake mode!
22:53:23 <dkehn> makes it easier to work with
22:53:25 <hub_cap> vipul: cloud init stuffs?
22:53:27 <grapex> j/k
22:53:36 <vipul> hub_cap: that could work, will slow boot time
22:53:50 <vipul> since mysql won't be baked in
22:53:54 * SlickNik is just glad grapex was kidding
22:54:03 <hub_cap> sure but it nullifies the need for special oneoff images
22:54:22 <hub_cap> if you have to build an image every test tho its gonna be slow before the tests even start :P
22:54:31 <hub_cap> (re stackforge tests ^ ^ )
22:54:57 <vipul> it wouldn't be for every test, just once during devstack install
22:55:24 <hub_cap> good point, we do spin up like 5 instances in the full suite of tests
22:55:38 <grapex> vipul: So pre-RDL rewrite we had some test groups that actually installed stuff, which redstack kicked off during the install phase.
22:55:44 <vipul> i think maybe we leave it in Redstack for now.. to get this whole thing going
22:55:51 <hub_cap> vipul: +1
22:56:04 <hub_cap> but lets keep it in our head to find a better way to inject into a image if possible
22:56:11 <SlickNik> We can always move it to devstack later, if we feel the need, so to speak.
22:56:16 <hub_cap> this is all ridiculously easy w/ ovz
22:56:26 <hub_cap> chroot && tar
22:56:29 <cp16net> yup....
22:56:49 <vipul> the bright and shiny future ;)
22:56:49 <cp16net> or vzctl exec blah...
22:56:59 <hub_cap> def, lets go to that
22:57:05 <imsplitbit> everything is ridiculously easy with ovz
22:57:11 <juice> hub_cap: we could remount a std-image and then customize it with chroot
22:57:12 <hub_cap> if no one has any more to add to images
22:57:36 <cp16net> not sure if this is image related...
22:57:39 <hub_cap> that could be another option juice, and im sure itll be faster than a full bore image build...
22:57:50 <cp16net> do others see the sudo apt-get update take forever in a new instance?
22:58:01 <grapex> cp16net: I do.
22:58:09 <juice> cp16net
22:58:10 <grapex> It's several minutes.
22:58:19 <vipul> in a guest?
22:58:21 <cp16net> yeah ~2-5 min
22:58:22 <cp16net> yes
22:58:28 <juice> that is another thing that can be reduced (removing redundant calls to apt-get)
22:58:33 <esp1> cp16net: yeah I think it takes a while
22:58:35 <hub_cap> yes its dumb slow
22:58:46 <SlickNik> yeah, takes at least a couple of minutes for me too.
22:58:54 <vipul> is that a qemu thing?
22:59:03 <juice> since the apt-get update is run on the guest as it is being built - it shouldn't run as it is being booted
22:59:27 <hub_cap> juice: im fine w/ that
22:59:31 <cp16net> juice: it should already be built by the time its running
22:59:40 <juice> the only risk there is if the image is left in glance too long apt could get out of date
22:59:51 <juice> however we can make that conditional if we want to get fancy
23:00:24 <hub_cap> juice: the installs make sure to call update first
23:00:27 <hub_cap> cuz that will fail
23:00:28 <juice> cp16net: I think I saw in the upstart conf for the guest that it is being executed in there (i.e. apt-get update)
23:00:31 <hub_cap> otherwise no biggie
23:00:44 <SlickNik> The time delta between building the image and running the tests shouldn't be that long… :P
23:00:45 <hub_cap> juice: its in the bootstrap, not the init
23:01:00 <hub_cap> /root/bootstrap.sh i think?
23:01:05 <cp16net> something like that
23:01:30 <hub_cap> we can nuke that once its built in the image
23:01:34 <juice> bootstrap-init-mysql
23:01:42 <juice> done
23:01:58 <hub_cap> cool
23:02:03 <vipul> juice: any ETA on the image patch to reddwarf-integration
23:02:30 <juice> i'll create a blueprint today and try to get it pushed today if not first thing in the morn
23:02:45 <vipul> awesome
23:02:59 <hub_cap> thats super
23:03:10 <hub_cap> plz run simple-tests from scratch too
23:03:35 <SlickNik> I think there might already be a bp for it.
23:03:36 <hub_cap> ok so ovz support now?
23:03:38 <hub_cap> we are running over
23:03:52 <vipul> sure, let's move on
23:03:52 <hub_cap> i just wanted to say real quick that imsplitbit is spinning up his nova env now to start work on ovz
23:04:15 <vipul> nice!
23:04:21 <SlickNik> sweetness...
23:04:41 <imsplitbit> yep
23:04:43 <hub_cap> def nice
23:04:50 <cp16net> w00t
23:04:54 <hub_cap> there is a LOT thats changed from where we are on ovz and new nova
23:05:01 <hub_cap> so he will be "spinning up" for a while i suspect
23:05:04 <vipul> are you guys running a older nova?
23:05:07 <imsplitbit> tonight and tomorrow morning I'll be merging in migrations into the public branch
23:05:10 <imsplitbit> then rebasing
23:05:20 <hub_cap> vipul: a bit yes :)
23:05:31 <imsplitbit> from nova trunk
23:05:44 <imsplitbit> I like to live dangerously
23:05:53 <hub_cap> hell yes imsplitbit
23:06:02 <cp16net> its the only way to be
23:06:06 <vipul> imsplitbit: that's great
23:06:18 <hub_cap> so bug imsplitbit if u want to know his progress :D
23:06:25 <hub_cap> dj hates us in irc
23:06:32 <cp16net> imsplitbit: whats the status?
23:06:38 <cp16net> :)
23:06:43 <cp16net> j/k
23:06:46 <SlickNik> heh
23:07:12 <hub_cap> hahh
23:07:28 <hub_cap> #info imsplitbit's the ovz man
23:07:38 <hub_cap> oops, lol i never moved topic
23:07:40 <cp16net> i'll be working on the public tests if i can make my vm more consistent
23:07:45 <hub_cap> oh well, do yall have anything else to add?
23:07:47 <hub_cap> cp16net: sweet
23:07:56 <cp16net> looks like i got some funky keystone cms error
23:08:04 <cp16net> and thats become consistent...
23:08:11 <vipul> cp16net, hub_cap: thanks, we can use all the help we can get on the real mode tests
23:08:39 <hub_cap> yes we need to make those happen SOON
23:08:50 <cp16net> yeah right now i attempt flavor list call and i get rejected by keystone
23:08:50 <vipul> oh another thing
23:08:52 <hub_cap> ill give yall info on where i land w/ the tests tomorrow
23:08:58 <hub_cap> cp16net: OH CRAP
23:09:01 <hub_cap> i know why
23:09:01 <cp16net> yup
23:09:06 <cp16net> YOU DO???
23:09:09 <vipul> dkehn is working on getting the reddwarf-vm-gate job in jenkins... so we're going to gate on that
23:09:15 <hub_cap> https://review.openstack.org/#/c/17561/10/etc/reddwarf/api-paste.ini
23:09:22 <vipul> this will not be a full tempest integrated thing to begin with
23:09:23 <esp1> thx!
23:09:23 <hub_cap> paste.filter_factory = keystoneclient.middleware.auth_token:filter_factory
23:09:34 <hub_cap> cp16net: can u bugfix that for us?
23:09:40 <vipul> it'll pull reddwarf-integration and run tests from that repo instead of tempest
23:09:41 <cp16net> thats the line
23:09:44 <hub_cap> look @ [filter:tokenauth] on that file cp16net
23:09:57 <hub_cap> there are 2 lines i think, one key file line at the end of it
23:09:57 <cp16net> yeah i've seen it
23:10:13 <hub_cap> i had hoped to get my merge in and i cant believe i didnt realize it when u mentioned it yest
23:10:15 <hub_cap> .......
23:10:21 <hub_cap> vipul: sweet dude
23:11:01 <cp16net> hub_cap: yeah i am not sure
23:11:08 <cp16net> we can talk about it later.
23:11:17 <cp16net> i gotta run to my class
23:11:19 <hub_cap> those lines will fix it cp16net, just put them in the reddwarf conf
23:11:24 <hub_cap> lets chat tomorrow cp16net
23:11:29 <cp16net> ok
23:11:31 <cp16net> i'm out
23:11:34 <hub_cap> ok so ive got nothign else to add
23:11:35 <SlickNik> later cp16net
23:11:44 <SlickNik> Oh, I just had another quick note.
23:12:23 <hub_cap> SlickNik: hit us
23:12:27 <SlickNik> I've submitted what I hope should be the final reddwarf patchset to devstack…
23:12:30 <SlickNik> https://review.openstack.org/#/c/17990/
23:12:51 <SlickNik> Hopefully I'll get a couple of reviews and we should be in soon.
23:13:01 <vipul> nice work!
23:13:09 <grapex> Awesome!
23:13:16 <vipul> we shoudl go in there and give some +1s
23:13:17 <hub_cap> did u remove all the apt repo stuff SlickNik?
23:13:37 <SlickNik> Yeah, this incarnation is pretty lean.
23:13:59 <SlickNik> no apt-repo  / no image-building
23:14:13 <hub_cap> very ncie
23:14:15 <hub_cap> *nice
23:14:15 <vipul> that may still have to live in redstack for now i suppose
23:14:30 <hub_cap> thats fine we should find a way to clean that up too
23:14:58 <SlickNik> Yes we should.
23:15:37 <vipul> we'll have to strip down redstack once this lands..
23:15:52 <hub_cap> yup vrey nice
23:15:55 <hub_cap> *very
23:16:14 <SlickNik> We might want to discuss what we want to do for versioning the guestagent my.cnf, but I'll start that convo in #reddwarf later.
23:16:22 <SlickNik> so that's all I had for now.
23:16:55 <vipul> cool i think we can wrap it up
23:17:26 <juice> hub_cap: it takes < 4 mins to configure and build the image and < 1.5 mins for qemu to pack up the image
23:17:37 <hub_cap> wow not bad juice
23:17:45 <hub_cap> #endmeeting