21:03:33 <hub_cap> #startmeeting reddwarf
21:03:34 <openstack> Meeting started Tue Apr 30 21:03:33 2013 UTC.  The chair is hub_cap. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:03:35 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:03:37 <openstack> The meeting name has been set to 'reddwarf'
21:03:45 <hub_cap> #link https://wiki.openstack.org/wiki/Meetings/RedDwarfMeeting#Agenda_for_the_next_meeting
21:03:55 <datsun180b> refraining from comment as the bot is listening now
21:04:28 <hub_cap> datsun180b: this is always logged, bot or not
21:04:31 <cp16net> the agenda has not been updated
21:04:39 <SlickNik> I, for one, welcome our bot overlords.
21:04:43 <hub_cap> well lets talk about the tc thing
21:04:57 <vipul> nice work on that btw
21:04:58 <datsun180b> further reason to not bring up old wounds
21:05:00 <hub_cap> #topic action items
21:05:09 <hub_cap> it was... interesting, vipul
21:05:18 <hub_cap> lets do last wks and then move on to the tc stuff
21:05:30 <hub_cap> vipul: backup api?
21:05:36 <vipul> juice volunteered to take that from me
21:05:37 <SlickNik> okay, sounds good.
21:05:43 <vipul> need to get that in though.. too long
21:06:08 <juice> I have most of it done - will be checked in by tomorrow before vacay
21:06:14 <grapex> So a question about the backup api pull requests
21:06:22 <hub_cap> yup grapex?
21:06:22 <grapex> it's split into two now, would it be easier if it was just one?
21:06:23 <hub_cap> brb
21:06:39 <vipul> i dunno.. that review is 1000 lines..
21:06:39 <juice> what do you mean grapex?
21:06:40 <grapex> As I understand it, it's in one place right now on GitHub as is, so it might be easier to just submit that
21:06:44 <vipul> the other one is over 2k
21:06:48 <SlickNik> for some definition of easier, perhaps….
21:06:49 <grapex> vipul: I don't mind
21:06:58 <grapex> In fact, I'm having a harder time reviewing them split up
21:07:06 <vipul> i think SlickNik has them dependent on one-another
21:07:16 <grapex> since the functionality is complete it would seem there's no harm in just making one big one
21:07:19 <juice> grapex: I feel ya
21:07:22 <vipul> which is probably the better approach as we try to limit the size of our patches
21:07:48 <imsplitbit> grapex: there is some usefulness in smaller patches tho
21:07:53 <imsplitbit> easier to follow for most
21:08:06 <hub_cap> this is a problem being argued right now on the ML fwiw
21:08:07 <vipul> if you pulled the 'leaf' patch, you'll get both
21:08:13 <grapex> One problem though is patch one contains a lot of additions without the context of how they're used
21:08:26 <juice> what's that hub_cap?
21:08:29 <imsplitbit> 3k lines of code is hard for most people to keep straight in their heads even when they wrote it.
21:08:29 <grapex> vipul: 'leaf' patch?
21:08:33 <juice> what problem?
21:08:55 <grapex> imsplitbit: I don't have to keep it all in my head, I just review a single file at a time and then make sure I understand the core functionality and how it's tested. ;)
21:09:03 <vipul> grapex: i meant the bottom-most one that depends on a parent patch
21:09:05 <esmute> we tried to use the api - taskmanager - guest as the separation when doing patches
21:09:15 <hub_cap> the multi patch set vs 1 big set juice
21:09:29 <juice> I'm confused about our review/patch "policy"
21:09:31 <esmute> hub_cap: what is ML?
21:09:35 <vipul> mailing list lol
21:09:36 <grapex> One problem with multiple ones too is if there's cruft that gets checked in on accident, the reviewer might assume its used in pull request #2
21:09:45 <imsplitbit> you can't commit a 1k patch to any other openstack project without getting a few harsh comments about needing to break things up into small consumable pieces
21:09:53 <juice> are patches that get approved and merged expected to be ready for showtime/production?
21:09:56 <vipul> yup agreed imsplitbit
21:10:11 <vipul> juice, i don't think they are for other projects
21:10:29 <hub_cap> juice: for other projects its wishy washy i think
21:10:32 <vipul> they should not break anything, and be disabled if necessary
21:10:33 <grapex> I know we want to be like OpenStack but let's think of if that makes sense in this situation.
21:10:43 <hub_cap> vipul: well they just freaked out about enabled/disabled on the ml
21:10:44 <imsplitbit> juice: what I was told with the openvz driver was to submit just enough code to start but do nothing else.  then submit a patch that creates a barebones container
21:10:44 <juice> if it's work in progress state but doesn't break anything then that's different
21:10:49 <imsplitbit> then one that sizes it.
21:10:51 <imsplitbit> etc...
21:10:56 <cp16net> well depends on the timing of the review if they are considered to work or not
21:10:56 <juice> gots it
21:11:04 <grapex> juice: I agree- if works in progress break nothing and the added functionality makes sense its cool
21:11:16 <cp16net> at the beginning of the release changes are dramatic and usually break things
21:11:23 <cp16net> later they are smoothed out.
21:11:33 <grapex> but in this case most of it seems geared to back ups. I agree in general splitting them up could be helpful but in this case it doesn't buy us much since the one big patch already is known to work
21:11:34 <juice> it seems like everything that has been submitted thus far has the expectation during review that it is ready for primetime
21:12:12 <hub_cap> i think we should try to table this for now too.. lets see how it pans out w/ the other openstack projects and try to adopt what they are doing
21:12:19 <grapex> Is being ready for primetime a bad thing? Let's think of what *we* would like to do.
21:12:26 <hub_cap> if we want to merge that into 1 patchset we can now
21:12:35 <hub_cap> well grapex we have the explicit problem of tracking trunk
21:12:37 <imsplitbit> grapex: that is a valid point and we're also small enough of a team to be able to sit down and iron out understanding it but for future pieces we should *probably* be in the habit of doing much smaller patches
21:12:57 <vipul> imsplitbit: +100
21:12:59 <hub_cap> from a non trunk tracking perspective, ie, i download and install stable releases, they dont care if 3 wks in to havana-2 u drop something that doesnt work
21:13:04 <vipul> i'd like to be able to get things in chunks
21:13:11 <vipul> rather than, does everything work and can it be deployed
21:13:18 <hub_cap> as long as by 2013.2 it works
21:13:28 <juice> i am all for small commits that is what I liked about the fork we did
21:13:38 <cp16net> it needs to be testable and passing gates though...
21:13:42 <grapex> vipul: So is the plan smaller chunks could get checked in so long as nothing breaks?
21:13:47 <juice> I would like that process to be in our gated trunk as well
21:13:51 <hub_cap> 2 things
21:13:51 <grapex> Or we'd break the integration tests for short stretches of time.
21:14:05 <SlickNik> I'm all for small commits too.
21:14:07 <hub_cap> 1) passes unit tests, and 2) passes integration tests, and has some of each if applicable
21:14:11 <vipul> grapex: Yea, obviously things should not be breaking, and the API should be disabled possibly if it's not fully ready
21:14:16 <vipul> but no reason for it to not land
21:14:16 <hub_cap> it should always have #1
21:14:21 <grapex> vipul: Ok, I don't disagree
21:14:31 <juice> having said all of that, the review process has to be quicker.
21:14:42 <juice> smaller patches equals less time to develop
21:14:43 <vipul> juice: i think the only way to do that is smaller patches
21:14:45 <juice> (usually)
21:14:48 <imsplitbit> I agree 100%
21:14:54 <hub_cap> ok so moving on?
21:15:00 <imsplitbit> +1
21:15:03 <cp16net> +1
21:15:03 <vipul> w00t
21:15:15 <juice> are we coming back to this at some point in the near future
21:15:16 <SlickNik> move on, revisit.
21:15:16 <esmute> juice: and less time to review
21:15:18 <hub_cap> cp16net have u learn't to speel?
21:15:24 <cp16net> sometimes
21:15:30 <hub_cap> and SlickNik, have u learn't to eat skeetles?
21:15:38 <cp16net> lol
21:15:39 <SlickNik> not yet.
21:15:48 <SlickNik> Work in Progress.
21:15:51 <hub_cap> well put them on the list for next wk ;) jk
21:15:52 <vipul> #action talk about smaller patches in the near future
21:16:02 <hub_cap> robertmyers: lets chat notifications
21:16:11 <hub_cap> have u updated them to get them inline w/ OpenStack?
21:16:23 <robertmyers> almost
21:16:35 <robertmyers> running into a problem getting availability zone
21:16:42 <robertmyers> and region
21:16:48 <vipul> robertmyers: i didn't see the exists event being emitted... we talked about putting it in contrib.. or did i just miss it
21:16:56 <robertmyers> they don't seem to be available outside of nova
21:17:17 <robertmyers> we handle exists in a separate process
21:17:19 <juice> robertmyers: are you having difficulty finding an api that provides that info?
21:17:27 <robertmyers> there is none
21:17:28 <grapex> vipul: Did we agree on an exists event? Maybe it was one of the days I was gone, to my thinking the way we do it and the way HP does it will probably be too different to be useful.
21:17:35 <hub_cap> we might not want to tho, especially if nova handles it in the same process
21:17:42 <hub_cap> grapex: nova emits an exist event
21:17:47 <hub_cap> _in_ its primary source
21:17:49 <grapex> hub_cap: Ok.
21:17:51 <vipul> grapex: we should have a reference impl
21:17:59 <vipul> maybe contrib is what we said
21:18:13 <hub_cap> id prefer to have something in reddwarf/ if thats the way OpenStack does it
21:18:21 <vipul> hub_cap: fine by me
21:18:43 <hub_cap> we can start w/ contrib tho if its a can of worms
21:19:20 <vipul> | OS-EXT-AZ:availability_zone         | nova                                                     |
21:19:25 <vipul> robertmyers: ^
21:19:31 <vipul> extension?
21:19:43 <robertmyers> vipul: it doesn't come back in my tests like that
21:19:48 <vipul> saw that via 'nova show'
21:21:00 <vipul> robertmyers: i'd be ok with keeping some things blank if we can't get from API.. we may need to consider adding that info to RD
21:21:40 <hub_cap> robertmyers: maybe the extension's not enabled?
21:21:51 <hub_cap> OS-EXT-AZ needs to be alive for that
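
(For context on the extension vipul and hub_cap mention: the era's python-novaclient copies extension attributes like OS-EXT-AZ onto the server object verbatim, so a caller can read them with getattr and fall back to blank when the extension is off, as vipul suggests above. A minimal sketch; the import path matches the 2013-era client, and the blank fallback is an assumption.)

    from novaclient.v1_1 import client

    def get_availability_zone(nova, instance_id):
        # The OS-EXT-AZ extension sets this attribute verbatim on the
        # server object; when the extension is disabled (or we're in
        # fake mode) it simply isn't there, so fall back to blank.
        server = nova.servers.get(instance_id)
        return getattr(server, 'OS-EXT-AZ:availability_zone', '')

    # nova = client.Client(user, password, tenant, auth_url)
    # az = get_availability_zone(nova, instance_id)
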
21:22:00 <robertmyers> hmm, maybe
21:22:11 <robertmyers> I'm running the tests in 'fake' mode
21:22:24 <robertmyers> or tox that is
21:22:31 <hub_cap> HA
21:22:33 <hub_cap> then there is no nova
21:22:34 <grapex> robertmyers: If we never used it before, we probably didn't emulate it in fake mode. ;)
21:22:42 <hub_cap> exactly grapex
21:22:53 <hub_cap> nova is a dictionary in fake mode, right grapex?
21:22:56 <hub_cap> :P
21:23:06 <cp16net> yea
21:23:11 <robertmyers> okay, well, I'll dig deeper
21:23:13 <grapex> hub_cap: It's a class, actually, but its an easy change
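
(A hypothetical sketch of grapex's "easy change": teach the fake-mode server class to emulate the extension attribute so tox/fake runs see an availability zone. Class name and attribute placement are illustrative, not reddwarf's actual fakes.)

    class FakeServer(object):
        """Stand-in for a nova server in fake mode (illustrative)."""

        def __init__(self, id, name):
            self.id = id
            self.name = name
            # Emulate the OS-EXT-AZ extension the way novaclient would
            # surface it, so notification code can read the AZ.
            setattr(self, 'OS-EXT-AZ:availability_zone', 'nova')
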
21:23:31 <hub_cap> sure grapex i meant that as a general thought more than an implementation
21:23:42 <SlickNik> I just checked on my devstack instance.
21:23:44 <hub_cap> ok so the last one was to get me back on reddwarf
21:23:45 <SlickNik> OS-EXT-AZ:availability_zone         | nova                                                          |
21:23:46 <hub_cap> and whelp
21:23:55 <SlickNik> So I see an AZ, but no region.
21:23:56 <hub_cap> ill likely be off again to work on the heat / postgres stuff for next wk
21:24:10 <SlickNik> So it might be a fake mode thing, robertmyers.
21:24:19 <hub_cap> so my actions work may fall by the wayside... again......
21:24:22 <robertmyers> my bad
21:24:45 <cp16net> hub_cap: sounds like a recurring theme
21:24:52 <hub_cap> um ya...
21:24:59 <hub_cap> i do what the community needs
21:25:00 <vipul> robertmyers: region should be a config driven thing, methinks
21:25:01 <hub_cap> not what i want
21:25:07 <SlickNik> well, at least that's a step closer. :)
21:25:09 <hub_cap> vipul: def
21:25:27 <hub_cap> okey. done w/ action items
21:25:30 <imsplitbit> hub_cap: which community? cause we want you to work on reddwarf
21:25:35 <imsplitbit> :-)
21:25:40 <robertmyers> vipul: ok
21:25:41 <hub_cap> lol the openstack community
21:25:48 <hub_cap> ps ive updated the agenda
21:25:53 <hub_cap> for those who dont compulsively refresh
21:26:00 <hub_cap> and for those who can spell
21:26:04 <imsplitbit> speel
21:26:10 <hub_cap> #topic TC/Incubation
21:26:12 <SlickNik> I think I'm getting an F5  complex...
21:26:20 <hub_cap> HA. theres a plugin for that
21:26:33 <hub_cap> im personally a cmd-r kinda guy
21:26:39 <cp16net> F5?
21:26:43 <hub_cap> its refresh
21:26:47 <imsplitbit> cmd???
21:26:57 <imsplitbit> Wth is that?
21:26:58 <cp16net> OH... on those other machines...
21:26:59 <hub_cap> imsplitbit: ya ya im till on a mac
21:27:04 <hub_cap> *still
21:27:11 <hub_cap> ok so the TC meeting went pretty well
21:27:11 <imsplitbit> hub_cap: do you hear it?
21:27:16 <hub_cap> baby jesus?
21:27:20 <imsplitbit> yes
21:27:24 <imsplitbit> you've made him cry
21:27:27 <hub_cap> HA
21:27:32 <imsplitbit> now tell us of the TC/incubation pls
21:27:35 <vipul> inside joke?
21:27:37 <vipul> lol
21:27:38 <hub_cap> so they want to know 2 things more than anything else
21:28:03 <hub_cap> if heat will require a major overhaul of the API/Impl, and a POC for a diff db in dbaas
21:28:12 <hub_cap> they are also unsure if nosql fits the bill for this
21:28:25 <hub_cap> god if thats the case grapex will be über vindicated
21:28:30 <grapex> lol!
21:28:32 <hub_cap> he will stand and point at all of us
21:28:33 <imsplitbit> haha!
21:28:41 <grapex> I was really glad no one quoted me from the talk about nosql...
21:28:51 <imsplitbit> I think it's all in how the api is structured
21:28:57 <vipul> you may get your wish
21:28:58 <hub_cap> imsplitbit: def
21:29:07 <hub_cap> the main diff w/ the rds stuff is the api _is_ the service
21:29:12 <vipul> there is really nothing that i can see that's relationdb specific yet
21:29:22 <hub_cap> our api is a "on behalf of" service that lets u run what u want
21:29:25 <imsplitbit> there's too much overlap for there *not* to be applicable
21:29:49 <hub_cap> ya... the rds nosql thing was brought up, but i think that we can squash that later
21:29:53 <imsplitbit> s/there/it/g
21:29:55 <grapex> Whats kind of funny, it seems like its sort of bad we're already in production before starting these talks, but its also a problem we haven't started on nosql.
21:30:02 <hub_cap> honestly, its probably much easier to do a pgsql impl than say redis currently
21:30:05 <juice> we should do postgres
21:30:23 <hub_cap> juice: im working on the poc already, have the image spun
21:30:31 <juice> nice!
21:30:48 <hub_cap> so im gonna spend a day or two on that, and then investigate heat
21:30:50 <SlickNik> hub_cap: did you hit any major hurdles?
21:30:59 <hub_cap> not yet SlickNik, its just like the other images :D
21:31:05 <hub_cap> apt-get install postgresql
21:31:07 <vipul> so wrt Heat, I primarily see it as a provisioning thing... and that's it
21:31:12 <cp16net> heat doesnt seem like a big deal
21:31:14 <imsplitbit> hub_cap: I will volunteer my time to help with postgres if needed
21:31:20 <hub_cap> vipul: thats all it is...
21:31:20 <vipul> once it's provisioned, it's managed by Reddwarf after
21:31:22 <hub_cap> imsplitbit: cool
21:31:35 <vipul> basically instead of calling Nova API directly, we call heat api
21:31:38 <cp16net> i see it maybe as a pluggable manager like heatmanager rather than taskmanager
21:31:40 <hub_cap> exactly vipul, so i might go crazy and try to POC heat if its not too much work
21:31:52 <hub_cap> we will still need TM tho
21:32:00 <hub_cap> for things like backups etc..
21:32:11 <hub_cap> its more like "dont do the prepare call"
21:32:11 <SlickNik> yeah, I don't think we can use heat to completely rid ourselves of TM.
21:32:15 <vipul> does Heat have an agent?
21:32:22 <hub_cap> i dont think so vipul
21:32:28 <hub_cap> im actually pretty durn sure it does not
21:32:30 <SlickNik> vipul: Nope, it's agentless.
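
(A rough sketch of the swap vipul and hub_cap describe: hand Heat a template instead of calling the Nova API directly, with reddwarf managing the instance afterwards. Assumes python-heatclient; the template body and names are illustrative only.)

    from heatclient.client import Client as HeatClient

    # Illustrative single-instance template; the taskmanager's create
    # flow would hand something like this to Heat instead of nova boot.
    TEMPLATE = '''
    HeatTemplateFormatVersion: '2012-12-12'
    Resources:
      DBInstance:
        Type: AWS::EC2::Instance
        Properties:
          ImageId: reddwarf-mysql-image
          InstanceType: m1.small
    '''

    def provision_instance(heat_endpoint, auth_token, name):
        heat = HeatClient('1', heat_endpoint, token=auth_token)
        return heat.stacks.create(stack_name=name, template=TEMPLATE)
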
21:32:42 <esmute> can that stuff be moved from the TM to the agent?
21:32:44 <cp16net> right... there are lots of things that do not over lap
21:32:46 <vipul> yea so i don't get the duplication of functionality suggestions i heard
21:32:56 <hub_cap> esmute: not sure..
21:32:59 <esmute> maybe we 'can' rid of the TM
21:33:00 <cp16net> wrt heat because otherwise we would be a copy of heat
21:33:04 <hub_cap> vipul: mainly around the initial instrumenting
21:33:06 <juice> hub_cap: i'd like to take a peek to see how manager/dbaas can be better suited to handle that.
21:33:11 <grapex> esmute: Are you saying move TM stuff to the agent?
21:33:13 <hub_cap> esmute: doubtful at present
21:33:22 <juice> seems like dbaas is really mysql-dbaas
21:33:26 <esmute> grapex: Not sure if i should answer that question
21:33:27 <SlickNik> as I see it, we don't have too much overlap today.
21:33:29 <juice> i.e. dbaas.py
21:33:30 <hub_cap> juice: it is now :)
21:33:39 <hub_cap> its about to be fixed
21:33:41 <juice> well done sir
21:33:45 <hub_cap> w/ the postgres impl
21:33:56 <vipul> yea probably easy to genericize
21:33:58 <SlickNik> But in the future, if we plan on implementing things like clustering, the scope for overlap would grow.
21:34:00 <vipul> just haven't had the need to
21:34:03 <hub_cap> SlickNik: def
21:34:09 <hub_cap> so anyhoo, unless someone else has any issues, lets move on
21:34:23 <imsplitbit> +1
21:34:29 <hub_cap> #topic OpenVZ
21:34:31 <hub_cap> imsplitbit: its go time
21:34:47 <imsplitbit> ok I've submitted everything for stackforge
21:34:53 <imsplitbit> waiting on another +2 from core
21:34:57 <imsplitbit> I'll bug them tomorrow
21:35:03 <vipul> a new stackforge proj?
21:35:03 <imsplitbit> I've packaged up the driver
21:35:06 <imsplitbit> yep
21:35:11 <vipul> cool!
21:35:13 <imsplitbit> for the openvz driver
21:35:24 <imsplitbit> the driver is now in a deb package
21:35:28 <cp16net> thats great news imsplitbit :)
21:35:34 <SlickNik> that's awesome!
21:35:41 <imsplitbit> and installs itself in such a way that you *should* be able to use it properly, I'm working on testing it now
21:35:46 <hub_cap> imsplitbit: link?
21:35:55 <imsplitbit> the package isn't in a repo yet
21:36:00 <hub_cap> to the review silly goose
21:36:03 <imsplitbit> I'll link the github for the sources
21:36:05 <imsplitbit> oh
21:36:05 <vipul> the source will be in the repo as well as package?
21:36:07 <imsplitbit> wait one
21:36:21 <imsplitbit> #link https://review.openstack.org/#/c/27421/
21:36:25 <imsplitbit> thats the review
21:36:30 <SlickNik> sweet, thanks
21:36:33 <imsplitbit> vipul: I'll have a ppa for the package
21:36:38 <imsplitbit> and the source is in github
21:36:49 <vipul> i see awesome
21:36:56 <imsplitbit> #link https://github.com/imsplitbit/openvz-nova-driver
21:37:08 <imsplitbit> I spoke with russell from the nova group
21:37:17 <cp16net> imsplitbit: you can add them to the list of reviewers
21:37:21 <cp16net> and they will get an email to review it
21:37:41 <cp16net> do you know who else you need to bother?
21:37:43 <imsplitbit> and they are working toward making everything use a different interface for drivers which will allow a more friendly way of implementing out of tree drivers to nova
21:37:51 * imsplitbit shugs
21:37:55 <imsplitbit> monty taylor
21:37:56 <hub_cap> "working toward" == no one is going to do it
21:38:09 <imsplitbit> well there is a blueprint that no one wants to tackle
21:38:11 <imsplitbit> :-)
21:38:17 <imsplitbit> I *may* try my hand at it
21:38:44 <vipul> good stuff..
21:38:45 <imsplitbit> it's not going to be super easy but it may be worth the contribution to get more influence to get the driver in nova proper
21:38:52 <hub_cap> def
21:38:53 <SlickNik> I need a lesson in OpenStack sp33k...
21:39:00 <vipul> you're going to set up a test that patches devstack install to run openvz?
21:39:32 <imsplitbit> vipul: that would be the next step yes.  I'd have to extend devstack with a script that adds the ppa and package as a dep
21:39:37 <imsplitbit> but it *should* be do-able
21:39:43 <hub_cap> imsplitbit: i added monty to the list of reviewers
21:39:53 <vipul> cool
21:40:02 <imsplitbit> right now I'm supposed to package it and make sure it's available for people to use and then I have to move onto something else for a bit
21:40:05 <imsplitbit> but I'll keep up with it
21:40:06 <SlickNik> imsplitbit: that would be neat..
21:40:06 <cp16net> nice
21:40:17 <vipul> ebay dudes were interested
21:40:35 <imsplitbit> I have a contact at disney that is also interested in it
21:40:46 <vipul> at least a few of us now are .. so maybe that'll be a forcing factor.. maybe..?
21:40:52 <hub_cap> hmm imsplitbit i see a extra newline in a file
21:40:56 <hub_cap> not sure if i should comment on it or not
21:41:18 <hub_cap> https://review.openstack.org/#/c/27421/4/modules/openstack_project/files/zuul/layout.yaml (2 newlines)
21:41:21 <imsplitbit> is there? crap I cleaned most of that out.  which file?
21:41:23 <imsplitbit> hmmm
21:41:26 <imsplitbit> ok let me fix that up
21:41:29 <imsplitbit> don't comment on it
21:41:39 <hub_cap> i wont ;)
21:41:44 <hub_cap> but push will still nuke it
21:41:52 <hub_cap> nuke everyones +2's
21:41:54 <imsplitbit> yeah I'm gonna leave it
21:41:58 <hub_cap> HA u should fix
21:41:59 <imsplitbit> I've got 2 +2s
21:42:04 <imsplitbit> :-)
21:42:07 <vipul> don't mess with a good thing
21:42:09 <cp16net> lol
21:42:12 <imsplitbit> I will, after
21:42:12 <hub_cap> they will re +2 it
21:42:14 <imsplitbit> :-)
21:42:14 <cp16net> i know that feeling
21:42:20 <hub_cap> imsplitbit: bad bad bad
21:42:21 <SlickNik> lolol
21:42:26 <cp16net> and so will jenkins
21:42:28 <cp16net> :-P
21:42:36 <hub_cap> speaking of jenkins
21:42:37 <imsplitbit> thats all I got on openvz
21:42:41 <hub_cap> #topic jenkins
21:42:46 <hub_cap> SlickNik: do tell
21:42:48 <vipul> thanks imsplitbit
21:42:56 <SlickNik> So, Matty worked on a fix.
21:43:11 <SlickNik> But the fix isn't working as expected.
21:43:44 <SlickNik> It's still clobbering the gerrit env during consecutive runs.
21:43:59 <vipul> is it consecutive or concurrent only
21:44:16 <vipul> we could dial down the concurrent runs if needed
21:44:17 <SlickNik> The fix that he has now is busted for concurrent.
21:44:26 <hub_cap> heh. ok so its being worked on? matty should join #reddwarf to keep us updated ;)
21:44:41 <vipul> he promised to pull an all-nighter if necessary :D
21:44:41 <SlickNik> So it should work if I reduce the number of executors back to 1.
21:44:50 <hub_cap> ok lets at least do that for now SlickNik
21:44:53 <SlickNik> But we don't want to have to do that.
21:44:54 <hub_cap> we need to push some code
21:44:54 <vipul> need to get him on irc
21:45:03 <hub_cap> SlickNik: 1 > 0
21:45:08 <SlickNik> yeah, I'm gonna switch back to 1 for now.
21:45:09 <hub_cap> and thats what we have currently
21:45:12 <SlickNik> agreed hub_cap.
21:45:18 <hub_cap> sweet. <3
21:45:21 <hub_cap> lol sweetheart
21:45:44 <SlickNik> err… don't forget the space… :)
21:46:14 <SlickNik> #action slicknik to switch back executors to 1 and then back up to 2 when the VM plugin is fixed.
21:46:23 <hub_cap> okey movin on
21:46:26 <hub_cap> #topic Backups
21:46:29 <kagan> hold on, 1>0 but sweet < 3 ??
21:46:38 * hub_cap mind is blown
21:46:51 <SlickNik> wow
21:46:51 <hub_cap> nice one kagan
21:47:00 <vipul> someone's paying attention
21:47:10 <hub_cap> so i guess all thats needed w/ backups is for it to pass some tests eh?
21:47:16 <hub_cap> well do we have said tests?
21:47:25 <hub_cap> integration i mean
21:47:26 <SlickNik> I'm working on said tests.
21:47:47 <vipul> SlickNik: the enable swift thing got merged, so that should be one less issue
21:47:50 <SlickNik> But I hit an issue where we don't have the root password on the restored instance to be able to connect to it.
21:48:16 <hub_cap> cuz it overwrites it eh?
21:48:27 <hub_cap> u mean for the osadmin user?
21:48:28 <kagan> yep, prepare()
21:48:44 <vipul> err maybe it didn't.. grapex only +2'd no approval: https://review.openstack.org/#/c/27291/
21:48:45 <SlickNik> either root or os_admin.
21:48:50 <hub_cap> see this will be magically solved w/ heat /rimshot
21:49:19 <juice> oh really?
21:49:38 <hub_cap> vipul: id like to see the jobs pass first
21:49:39 <grapex> vipul: hub_cap said there was an issue
21:49:41 <hub_cap> then ill approve it
21:49:42 <SlickNik> root@localhost gets a random password when the db is initialized with secure()
21:50:07 <vipul> let me see if our jenkins has cycles lol.. i'll re kick it
21:50:10 <SlickNik> and os_admin password is stored in the my.cnf which is not part of the restored db.
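
(One hedged way out of the problem SlickNik describes: root gets a random password from secure() and os_admin's lives in the overwritten my.cnf, so regenerate the os_admin credential after restore via mysqld's --init-file hook, which runs SQL with full privileges before any login is needed. The helper and surrounding agent plumbing are assumptions, not reddwarf's actual code.)

    import subprocess
    import tempfile

    def reset_os_admin_password(new_password):
        # One-shot SQL that mysqld executes at startup, before any
        # client can connect, so no existing credential is required.
        sql = ("UPDATE mysql.user SET Password=PASSWORD('%s') "
               "WHERE User='os_admin'; FLUSH PRIVILEGES;" % new_password)
        init_file = tempfile.NamedTemporaryFile('w', delete=False)
        init_file.write(sql)
        init_file.close()
        # Start mysqld with the init file; the agent would then write
        # the new password into a fresh my.cnf.
        subprocess.Popen(['mysqld_safe', '--init-file=%s' % init_file.name])
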
21:50:20 <hub_cap> do the integration tests run against stock mysql or percona on yalls jenkins? (kinda random)
21:50:35 <esp> think it just does mysql right now
21:50:44 <esp> need to do percona too soon
21:50:45 <vipul> stock
21:51:00 <hub_cap> cool. ya i bet once we go vm gate, we could do both a bit easier
21:51:04 <SlickNik> hub_cap / vipul: I think there might be an issue with the int tests that we need to look at soon. I don't think they're running clean right now.
21:51:16 <hub_cap> hmm really?
21:51:30 <hub_cap> can someone do a fresh pull on a fresh vm and check em out?
21:51:34 <hub_cap> any takers?
21:51:41 <esp> sure
21:51:52 <hub_cap> <3 esp
21:52:29 <hub_cap> /offtopic by the way, <3 is the phrase "less than or butt", kinda like less than or equal
21:52:31 <SlickNik> test_instance_created seems to be failing consistently, not sure why.
21:52:41 <hub_cap> SlickNik: ok we need to investigate for sure then
21:52:49 <hub_cap> if thats the case, raise it in #reddwarf
21:52:52 <hub_cap> esp: ^ ^
21:53:11 <esp> k
21:53:14 <vipul> kicked off another run for the swift patch
21:53:15 <hub_cap> so moving on?
21:53:15 <SlickNik> yes, we need to look into that asap.
21:53:20 <esp> I will know in a bit.
21:53:22 <imsplitbit> +1
21:53:23 <hub_cap> vipul: cool
21:53:27 <hub_cap> #topic Notifications
21:53:36 <hub_cap> so lets spend a sec talking about exists
21:53:39 <SlickNik> yup, will keep you guys apprised of the progress with backups.
21:54:04 <robertmyers> we need to add a periodic task
21:54:05 <hub_cap> for exist event, id really like to see us just copy what nova does
21:54:11 <hub_cap> wrt periodic tasks too
21:54:23 <hub_cap> there is an update to oslo that changes the timers for periodics
21:54:27 <robertmyers> i can dig into it
21:54:30 <hub_cap> it used to set it off the _end_ of the previous task
21:54:42 <hub_cap> but it will set it off the _begin_ of the previous task
21:54:51 <hub_cap> so it keeps the timing ~consistent
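
(A toy illustration of the oslo change hub_cap describes, not oslo's actual code: anchoring the interval to the start of the previous run keeps the schedule roughly consistent instead of drifting by the task's own runtime.)

    import time

    def run_periodically(task, interval):
        while True:
            started = time.time()
            task()
            # Old behaviour: time.sleep(interval) -- each cycle drifts
            # later by however long task() itself took to run.
            time.sleep(max(0, interval - (time.time() - started)))
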
21:54:54 <vipul> need to refresh oslo then
21:55:02 <hub_cap> vipul: i think its on the list, ehh juice?
21:55:13 <vipul> did juice get periodic_task with his change?
21:55:20 <SlickNik> I'm not sure if juice is pulling in periodic_task changes...
21:55:22 <SlickNik> juice?
21:55:26 <robertmyers> how should we handle running multiple nodes?
21:55:33 <juice> i'll have to see if it was one of the changes
21:55:41 <hub_cap> #link https://review.openstack.org/#/c/26448/
21:55:42 <juice> it may not have been pulled int
21:55:44 <juice> in
21:55:58 <hub_cap> robertmyers: lets look @ how nova does it, or if it does..
21:56:07 <hub_cap> they might just disable it on any _other_ nodes
21:56:08 <vipul> nothing obvious here: https://github.com/openstack/nova/tree/master/bin
21:56:22 <vipul> not sure what the thing that runs periodically is
21:56:37 <juice> I did not do a complete refresh of oslo just what was dependent on notify
21:56:46 <cp16net> the periodic for the computes is a bin process
21:56:57 <cp16net> you cron it to run when you want
21:57:09 <vipul> cp16net is it one of files in nova/bin?
21:57:14 <cp16net> should be
21:57:19 <vipul> know which one?
21:58:16 <cp16net> i dont see it
21:58:16 <juice> the looping call was not pulled in - should I update the patch to include it?
21:58:50 <vipul> juice: do it
21:59:15 <cp16net> well i remember seeing it a while back
21:59:17 <juice> k - I think robertmyers had a play list request for the patch as well
21:59:19 <SlickNik> juice: I think that's a good idea
21:59:29 <cp16net> maybe that changed a little
21:59:33 <juice> as soon as the gate is working again :)
21:59:41 <hub_cap> #link https://github.com/openstack/nova/blob/master/nova/compute/utils.py#L181
22:00:25 <vipul> hub_cap: nice that's a good starting point
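
(A hedged sketch of a reddwarf exists emitter modeled on nova's notify_usage_exists linked above. The module path assumes reddwarf's oslo-incubator notifier copy, and the payload fields and one-hour audit window are illustrative.)

    from datetime import datetime, timedelta

    from reddwarf.openstack.common.notifier import api as notifier

    def notify_exists(context, instance):
        end = datetime.utcnow()
        payload = {
            'instance_id': instance.id,
            'tenant_id': instance.tenant_id,
            'availability_zone': getattr(instance, 'availability_zone', ''),
            'audit_period_beginning': str(end - timedelta(hours=1)),
            'audit_period_ending': str(end),
        }
        notifier.notify(context, 'reddwarf.instance',
                        'reddwarf.instance.exists', notifier.INFO, payload)
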
22:01:25 <hub_cap> moving on?
22:01:38 <SlickNik> sounds good by me.
22:01:44 <imsplitbit> +1
22:01:46 <hub_cap> #topic RootWrap
22:02:26 <SlickNik> That's just been on the back burner.
22:02:30 <hub_cap> okey moving on?
22:02:35 <vipul> yep
22:02:39 <imsplitbit> +1
22:02:45 <SlickNik> Don't think anyone is actively looking at it?
22:02:52 <hub_cap> we arent @rax
22:03:01 <SlickNik> same here @hp.
22:03:01 <vipul> it's a nicety
22:03:10 <vipul> not critical
22:03:26 <hub_cap> #Quotas XML
22:03:29 <hub_cap> lol
22:03:33 <hub_cap> #topic Quotas XML
22:03:38 <hub_cap> i think this is from last wk
22:03:39 <grapex> Seems like that was fixed long ago. :)
22:03:40 <hub_cap> did we clear it up?
22:03:40 <SlickNik> I thought we had this working.
22:03:49 <grapex> Yep, the notes are just really old.
22:03:53 <SlickNik> Yeah, esmute / esp cleared this up.
22:04:01 <SlickNik> If I recall correctly.
22:04:09 <hub_cap> ok removed
22:04:10 <esmute> yup
22:04:10 <esp> I think I squashed the last xml quota bug
22:04:31 <SlickNik> good job guys. :)
22:04:31 <grapex> The whole hostname client thing is probably overcome by events as well.
22:04:43 <hub_cap> i nuked it too grapex
22:04:51 <hub_cap> #topic action/events
22:05:06 <SlickNik> That's all you, hub_cap
22:05:15 <hub_cap> ive got the code done honestly, and it works for all the complex instance based events
22:05:24 <hub_cap> its not working yet for databases/users
22:05:30 <hub_cap> but i figured that could come a bit later
22:05:41 <hub_cap> i need to write tests... ive verified it manually at present
22:05:47 <hub_cap> but im OBE
22:05:53 <SlickNik> OBE?
22:06:15 <hub_cap> overcome by events
22:06:17 <SlickNik> wan kenobi?
22:06:22 <SlickNik> ah, okay
22:06:23 <hub_cap> lol
22:06:26 <hub_cap> sure that too
22:06:36 <hub_cap> im sooooo gonna die
22:06:43 <hub_cap> oooops star wars spoiler
22:06:54 <SlickNik> Don't let the heat get to you… :)
22:06:59 <hub_cap> HA
22:07:12 <hub_cap> if someone wnats to pick it up to finish it, ill hand it off
22:07:15 <grapex> SlickNik: The heat is on.
22:07:19 <hub_cap> otherwise ill get to it prolly like in 2 wks
22:07:22 <hub_cap> err in 1 wk
22:07:48 <vipul> crickets
22:07:50 <SlickNik> lol@grapex.
22:07:52 <hub_cap> grapex: cue eddie murphy
22:08:02 <hub_cap> lol vipul
22:08:05 <hub_cap> ok so
22:08:16 <hub_cap> #topic open discussion
22:08:23 <grapex> XmlLint update! I got the code finished but found out there are some bugs.
22:08:27 <hub_cap> anyone have anything to add to this wonderfully off topic meeting
22:08:28 <cp16net> btw the periodic usage events are fired off by a periodic task when compute is started up
22:08:29 <cp16net> #link https://github.com/openstack/nova/blob/master/nova/utils.py#L74
22:08:34 <grapex> Just two though, when you request versions that aren't there.
22:08:55 <vipul> woah cp16net good find
22:08:57 <cp16net> #link https://wiki.openstack.org/wiki/NotificationEventExamples
22:09:11 <hub_cap> thx cp16net
22:09:30 <SlickNik> grapex: Do the int tests for test_request_bogus_version and test_no_slash_with_version need to be changed?
22:09:44 <SlickNik> Looks like they have been passing (when rdjenkins was still working well)…
22:09:58 <hub_cap> wow cp16net grapex they did a lot of work to make sure usage works https://github.com/openstack/nova/blob/master/nova/utils.py#L370
22:10:12 <grapex> SlickNik: maybe added to. The problem is we don't return an XML element or JSON table if the version isn't found.
22:10:20 <grapex> I have a feeling that bug is going to be a royal pain...
22:10:26 <hub_cap> lol i misspoke https://github.com/openstack/nova/blob/master/nova/utils.py#L421
22:10:46 <hub_cap> grapex: have u added bugs for those problemos?
22:10:58 <robertmyers> #link https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L3600
22:11:03 <grapex> SlickNik: Also, this may not be clear, but how the XML lint stuff works is it gets called every single time the client makes a request and gets a response, no matter what. So it runs for the versions tests even though we don't check the request / response body normally.
22:11:17 <grapex> hub_cap: I can... its super minor. No one likes the poor versions API honestly.
22:11:21 <hub_cap> sweet robertmyers our job is done :)
22:11:23 <grapex> I guess I will just for the heck of it
22:11:24 <vipul> robertmyers: line 3600!
22:11:30 <hub_cap> grapex: exactly
22:11:32 <robertmyers> diggin deep
22:11:36 <hub_cap> let someone who cares handle it
22:11:41 <cp16net> yup if CONF.instance_usage_audit
22:11:46 <hub_cap> robertmyers: more like OMG WHY IS THIS SO LARGE OF A FILE
22:12:00 <cp16net> lol
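
(A toy version of the guard cp16net points at in nova's periodic audit task: only emit the hourly exists events when the deployer turns the flag on. The option name matches nova's instance_usage_audit; the body reuses the illustrative notify_exists sketch above.)

    from oslo.config import cfg

    CONF = cfg.CONF
    CONF.register_opt(cfg.BoolOpt('instance_usage_audit', default=False))

    def _instance_usage_audit(context, instances):
        if not CONF.instance_usage_audit:
            return
        for instance in instances:
            notify_exists(context, instance)  # see the sketch above
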
22:12:07 <SlickNik> ah, grapex I see. Thanks for the clarifications.
22:12:13 <vipul> we need an extensions api... just remembered
22:12:14 <grapex> hub_cap: Because its pythonic to put multiple classes in one file and make them huge?
22:12:25 <vipul> i think other openstack proj list extensions available?
22:12:33 <cp16net> oh....
22:12:35 <cp16net> #link https://blueprints.launchpad.net/nova/+spec/libvirt-exists-support
22:12:37 <hub_cap> vipul: good call. we need to fix that
22:13:07 <hub_cap> heh cp16net... we almost didnt have to do any work ;)
22:13:22 <cp16net> heh
22:13:24 <hub_cap> is there a bp... i know we talked about it before vipul
22:13:30 <vipul> don't think so
22:14:24 <hub_cap> #link https://blueprints.launchpad.net/reddwarf/+spec/extensions-update
22:15:01 <robertmyers> stevedore
22:15:08 <robertmyers> done
22:15:08 <SlickNik> I think this involves another update from oslo…
22:15:30 <hub_cap> def robertmyers
22:15:34 <hub_cap> thats the plan
22:16:08 <SlickNik> +1 to stevedore
22:16:29 <hub_cap> SlickNik: i know we will have to remove the openstack common extensions stuff
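
(A rough sketch of the stevedore approach robertmyers and SlickNik are +1'ing: discover API extensions from a setuptools entry-point namespace instead of the openstack-common extensions loader. The namespace string is made up for illustration.)

    from stevedore import extension

    mgr = extension.ExtensionManager(namespace='reddwarf.api.extensions',
                                     invoke_on_load=True)
    for ext in mgr:
        # Each entry point loads into an Extension with name and object.
        print(ext.name, ext.obj)
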
22:16:49 <hub_cap> ok well weve got that under control now
22:16:52 <hub_cap> anything else?
22:16:56 <hub_cap> we are 15 over
22:17:00 <vipul> i'm good ... good meet
22:17:04 <hub_cap> word
22:17:17 <SlickNik> I'm good as well.
22:17:31 <SlickNik> Thanks all.
22:17:36 <imsplitbit> pees!
22:18:02 <SlickNik> hub_cap, we can have the extensions discussion offline in #reddwarf.
22:18:07 <hub_cap> imsplitbit: now thats just potty humor /rimshot
22:18:13 <hub_cap> def SlickNik
22:18:16 <hub_cap> #endmeeting