21:03:33 #startmeeting reddwarf
21:03:34 Meeting started Tue Apr 30 21:03:33 2013 UTC. The chair is hub_cap. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:03:35 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:03:37 The meeting name has been set to 'reddwarf'
21:03:45 #link https://wiki.openstack.org/wiki/Meetings/RedDwarfMeeting#Agenda_for_the_next_meeting
21:03:55 refraining from comment as the bot is listening now
21:04:28 datsun180b: this is always logged, bot or not
21:04:31 the agenda has not been updated
21:04:39 I, for one, welcome our bot overlords.
21:04:43 well lets talk about the tc thing
21:04:57 nice work on that btw
21:04:58 further reason to not bring up old wounds
21:05:00 #topic action items
21:05:09 it was, interesting vipul
21:05:18 lets do last wks and then move on to the tc stuff
21:05:30 vipul: backup api?
21:05:36 juice volunteered to take that from me
21:05:37 okay, sounds good.
21:05:43 need to get that in though.. too long
21:06:08 I have most of it done - will be checked in by tomorrow before vacay
21:06:14 So a question about the backup api pull requests
21:06:22 yup grapex?
21:06:22 it's split into two now, would it be easier if it was just one?
21:06:23 brb
21:06:39 i dunno.. that review is 1000 lines..
21:06:39 what do you mean grapex?
21:06:40 As I understand it's in one place right now on GitHub as is, so it might be easier to just submit that
21:06:44 the other one is over 2k
21:06:48 for some definition of easier, perhaps….
21:06:49 vipul: I don't mind
21:06:58 In fact, I'm having a harder time reviewing them split up
21:07:06 i think SlickNik has them dependent on one-another
21:07:16 since the functionality is complete it would seem there's no harm in just making one big one
21:07:19 grapex: I feel ya
21:07:22 which is probably the better approach as we try to limit the size of our patches
21:07:48 grapex: there is some usefulness in smaller patches tho
21:07:53 easier to follow for most
21:08:06 this is a problem being argued right now on the ML fwiw
21:08:07 if you pulled the 'leaf' patch, you'll get both
21:08:13 One problem though is patch one contains a lot of additions without the context of how they're used
21:08:26 what's that hub_cap?
21:08:29 3k lines of code is hard for most people to keep straight in their heads even when they wrote it.
21:08:29 vipul: 'leaf' patch?
21:08:33 what problem?
21:08:55 imsplitbit: I don't have to keep it all in my head, I just review a single file at a time and then make sure I understand the core functionality and how it's tested. ;)
21:09:03 grapex: i meant the bottom-most one that depends on a parent patch
21:09:05 we tried to use the api - taskmanager - guest as the separation when doing patches
21:09:15 the multi patch set vs 1 big set juice
21:09:29 I'm confused about our review/patch "policy"
21:09:31 hub_cap: what is ML?
21:09:35 mailing list lol
21:09:36 One problem with multiple ones too is if there's cruft that gets checked in on accident, the reviewer might assume its used in pull request #2
21:09:45 you can't commit 1k patch to any other openstack project without getting a few harsh comments about needing to break things up into small consumable pieces
21:09:53 are patches that get approved and merged expected to be ready for showtime/production?
21:09:56 yup agreed imsplitbit
21:10:11 juice, i don't think they are for other projects
21:10:29 juice: for other projects its wishy washy i think
21:10:32 they should not break anything, and disabled if necessary
21:10:33 I know we want to be like OpenStack but let's think of if that makes sense in this situation.
21:10:43 vipul: well they just freaked out about enabled/disabled on the ml
21:10:44 juice: what I was told with the openvz driver was to submit just enough code to start but do nothing else. then submit a patch that creates a barebones container
21:10:44 if it's work in progress state but doesn't break anything then that's different
21:10:49 then one that sizes it.
21:10:51 etc...
21:10:56 well depends on the timing of the review if they are considered to work or not
21:10:56 gots it
21:11:04 juice: I agree - if works in progress break nothing and the added functionality makes sense its cool
21:11:16 at the beginning of the release changes are dramatic and usually break things
21:11:23 later they are smoothed out.
21:11:33 but in this case most of it seems geared to backups. I agree in general splitting them up could be helpful but in this case it doesn't buy us much since the one big patch already is known to work
21:11:34 it seems like everything that has been submitted thus far has the expectation during review that it is ready for primetime
21:12:12 i think we should try to table this for now too.. lets see how it pans out w/ the other openstack projects and try to adopt what they are doing
21:12:19 Is being ready for primetime a bad thing? Let's think of what *we* would like to do.
21:12:26 if we want to merge that into 1 patchset we can now
21:12:35 well grapex we have the explicit problem of tracking trunk
21:12:37 grapex: that is a valid point and we're also small enough of a team to be able to sit down and iron out understanding it but for future pieces we should *probably* be in the habit of doing much smaller patches
21:12:57 imsplitbit: +100
21:12:59 from a non trunk tracking perspective, ie, i download and install stable releases, they dont care if 3 wks in to havana-2 u drop something that doesnt work
21:13:04 i'd like to be able to get things in chunks
21:13:11 rather than, does everything work and can it be deployed
21:13:18 as long as by 2013.2 it works
21:13:28 i am all for small commits that is what I liked about the fork we did
21:13:38 it needs to be testable and passing gates though...
21:13:42 vipul: So is the plan smaller chunks could get checked in so long as nothing breaks?
21:13:47 I would like that process to be in our gated trunk as well
21:13:51 2 things
21:13:51 Or we'd break the integration tests for short stretches of time.
21:14:05 I'm all for small commits too.
21:14:07 1) passes unit tests, and 2) passes integration tests, and has some of each if applicable
21:14:11 grapex: Yea, obviously things should not be breaking, and the API should be disabled possibly if it's not fully ready
21:14:16 but no reason for it to not land
21:14:16 it should always have #1
21:14:21 vipul: Ok, I don't disagree
21:14:31 having said all of that, the review process has to be quicker.
21:14:42 smaller patches equals less time to develop
21:14:43 juice: i think the only way to do that is smaller patches
21:14:45 (usually)
21:14:48 I agree 100%
21:14:54 ok so moving on?
21:15:00 +1
21:15:03 +1
21:15:03 w00t
21:15:15 are we coming back to this at some point in the near future
21:15:16 move on, revisit.
21:15:16 juice: and less time to review
21:15:18 cp16net have u learn't to speel?
21:15:24 sometimes
21:15:30 and SlickNik, have u learn't to eat skeetles?
21:15:38 lol
21:15:39 not yet.
21:15:48 Work in Progress.
21:15:51 well put them on the list for next wk ;) jk
21:15:52 #action talk about smaller patches in the near future
21:16:02 robertmyers: lets chat notifications
21:16:11 have u updated them to get them inline w/ OpenStack?
21:16:23 almost
21:16:35 running into a problem getting availability zone
21:16:42 and region
21:16:48 robertmyers: i didn't see the exists event being emitted... we talked about putting it in contrib.. or did i just miss it
21:16:56 they don't seem to be available outside of nova
21:17:17 we handle exists in a separate process
21:17:19 robertmyers: are you having difficulty finding an api that provides that info?
21:17:27 there is none
21:17:28 vipul: Did we agree on an exists event? Maybe it was one of the days I was gone, to my thinking the way we do it and the way HP does it will probably be too different to be useful.
21:17:35 we might not want to tho, especially if nova handles it in the same process
21:17:42 grapex: nova emits an exist event
21:17:47 _in_ its primary source
21:17:49 hub_cap: Ok.
21:17:51 grapex: we should have a reference impl
21:17:59 maybe contrib is what we said
21:18:13 id prefer to have something in reddwarf/ if that's the way OpenStack does it
21:18:21 hub_cap: fine by me
21:18:43 we can start w/ contrib tho if its a can of worms
21:19:20 | OS-EXT-AZ:availability_zone | nova |
21:19:25 robertmyers: ^
21:19:31 extension?
21:19:43 vipul: it doesn't come back in my tests like that
21:19:48 saw that via 'nova show'
21:21:00 robertmyers: i'd be ok with keeping some things blank if we can't get from API.. we may need to consider adding that info to RD
21:21:40 robertmyers: maybe the extensions not enabled?
21:21:51 OS-EXT-AZ needs to be alive for that
21:22:00 hmm, maybe
21:22:11 I'm running the tests in 'fake' mode
21:22:24 or tox that is
21:22:31 HA
21:22:33 then there is no nova
21:22:34 robertmyers: If we never used it before, we probably didn't emulate it in fake mode. ;)
21:22:42 exactly grapex
21:22:53 nova is a dictionary in fake mode, right grapex?
21:22:56 :P
21:23:06 yea
21:23:11 okay, well, I'll dig deeper
21:23:13 hub_cap: It's a class, actually, but its an easy change
21:23:31 sure grapex i meant that as a general thought more than an implementation
21:23:42 I just checked on my devstack instance.
21:23:44 ok so the last one was to get me back on reddwarf
21:23:45 OS-EXT-AZ:availability_zone | nova |
21:23:46 and whelp
21:23:55 So I see an AZ, but no region.
21:23:56 ill likely be off again to work on the heat / postgres stuff for next wk
21:24:10 So it might be a fake mode thing, robertmyers.
21:24:19 so my actions work may fall along the wayside... again......
21:24:22 my bad
21:24:45 hub_cap: sounds like a recurring theme
21:24:52 um ya...
21:24:59 i do what the community needs
21:25:00 robertmyers: region should be a config driven thing methinks
21:25:01 thanks
21:25:01 not what i want
21:25:07 well, at least that's a step closer. :)
21:25:09 vipul: def
21:25:27 okey. done w/ action items
21:25:30 hub_cap: which community? cause we want you to work on reddwarf
21:25:35 :-)
21:25:40 vipul: ok
21:25:41 lol the openstack community
21:25:48 ps ive updated the agenda
21:25:53 for those who dont compulsively refresh
21:26:00 and for those who can spell
21:26:04 speel
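
For context, a minimal sketch of the availability-zone lookup discussed above, assuming Grizzly-era python-novaclient; the credentials and instance id are placeholders, and the attribute only appears when the OS-EXT-AZ extension is enabled, which a 'fake' nova would have to emulate for the tests to see it:

    # Sketch only: read OS-EXT-AZ:availability_zone for a server.
    from novaclient.v1_1 import client

    nova = client.Client("user", "password", "tenant",
                         "http://keystone.example:5000/v2.0")  # placeholders
    server = nova.servers.get("instance-uuid")
    # Missing when the extension is off (or in fake mode); fall back to None,
    # and source region from config as suggested above.
    az = getattr(server, "OS-EXT-AZ:availability_zone", None)
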
21:26:10 #topic TC/Incubation
21:26:12 I think I'm getting an F5 complex...
21:26:20 HA. theres a plugin for that
21:26:33 im personally a cmd-r kinda guy
21:26:39 F5?
21:26:43 its refresh
21:26:47 cmd???
21:26:57 Wth is that?
21:26:58 OH... on those other machines...
21:26:59 imsplitbit: ya ya im till on a mac
21:27:04 *still
21:27:11 ok so the TC meeting went pretty well
21:27:11 hub_cap: do you hear it?
21:27:16 baby jesus?
21:27:20 yes
21:27:24 you've made him cry
21:27:27 HA
21:27:32 now tell us of the TC/incubation pls
21:27:35 inside joke?
21:27:37 lol
21:27:38 so they want to know 2 things more than anything else
21:28:03 if heat will require a major overhaul of the API/Impl, and a POC for a diff db in dbaas
21:28:12 they are also unsure if nosql fits the bill for this
21:28:25 god if thats the case grapex will be über vindicated
21:28:30 lol!
21:28:32 he will stand and point at all of us
21:28:33 haha!
21:28:41 I was really glad no one quoted me from the talk about nosql...
21:28:51 I think it's all in how the api is structured
21:28:57 you may get your wish
21:28:58 imsplitbit: def
21:29:07 the main diff w/ the rds stuff is the api _is_ the service
21:29:12 there is really nothing that i can see that's relational-db specific yet
21:29:22 our api is a "on behalf of" service that lets u run what u want
21:29:25 there's too much overlap for there *not* to be applicable
21:29:49 ya... the rds nosql thing was brought up, but i think that we can squash that later
21:29:53 s/there/it/g
21:29:55 Whats kind of funny, it seems like its sort of bad we're already in production before starting these talks, but its also a problem we haven't started on nosql.
21:30:02 honestly, its probably much easier to do a pgsql impl than say redis currently
21:30:05 we should do postgres
21:30:23 juice: im working on the poc already, have the image spun
21:30:31 nice!
21:30:48 so im gonna spend a day or two on that, and then investigate heat
21:30:50 hub_cap: did you hit any major hurdles?
21:30:59 not yet SlickNik, its just like the other images :D
21:31:05 apt-get install postgresql
21:31:07 so wrt Heat, I primarily see it as a provisioning thing... and that's it
21:31:12 heat doesnt seem like a big deal
21:31:14 hub_cap: I will volunteer my time to help with postgres if needed
21:31:20 vipul: thats all it is...
21:31:20 once it's provisioned, it's managed by Reddwarf after
21:31:22 imsplitbit: cool
21:31:35 basically instead of calling Nova API directly, we call heat api
21:31:38 i see it maybe as a pluggable manager like heatmanager rather than taskmanager
21:31:40 exactly vipul, so i might go crazy and try to POC heat if its not too much work
21:31:52 we will still need TM tho
21:32:00 for things like backups etc..
21:32:11 its more like "dont do the prepare call"
21:32:11 yeah, I don't think we can use heat to completely rid ourselves of TM.
21:32:15 does Heat have an agent?
21:32:22 i dont think so vipul
21:32:28 im actually pretty durn sure it does not
21:32:30 vipul: Nope, it's agentless.
21:32:42 can that stuff be moved from the TM to the agent?
21:32:44 right... there are lots of things that do not overlap
21:32:46 yea so i don't get the duplication of functionality suggestions i heard
21:32:56 esmute: not sure..
21:32:59 maybe we 'can' rid of the TM
21:33:00 wrt heat because otherwise we would be a copy of heat
21:33:04 vipul: mainly around the initial instrumenting
21:33:06 hub_cap: i'd like to take a peek to see how manager/dbaas can be better suited to handle that.
21:33:11 esmute: Are you saying move TM stuff to the agent?
21:33:13 esmute: doubtful at present
21:33:22 seems like dbaas is really mysql-dbaas
21:33:26 grapex: Not sure if i should answer that question
21:33:27 as I see it, we don't have too much overlap today.
21:33:29 i.e. dbaas.py
21:33:30 juice: it is now :)
21:33:39 its about to be fixed
21:33:41 well done sir
21:33:45 w/ the postgres impl
21:33:56 yea probably easy to genericize
21:33:58 But in the future, if we plan on implementing things like clustering, the scope for overlap would grow.
21:34:00 just haven't had the need to
21:34:03 SlickNik: def
21:34:09 so anyhoo, unless someone else has any issues, lets move on
21:34:23 +1
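
A rough sketch of the "pluggable manager" idea floated above: put provisioning behind a small interface so a heat-backed implementation can swap in for the direct nova calls. All class and method names here are hypothetical, not actual reddwarf code:

    # Sketch only: same provisioning interface, two interchangeable backends.
    class NovaProvisioner(object):
        def create_instance(self, context, name, flavor, image):
            # today: call the nova API directly; the taskmanager then
            # drives the guest prepare() call
            raise NotImplementedError

    class HeatProvisioner(object):
        def create_instance(self, context, name, flavor, image):
            # with heat: submit a template to the heat API and let it
            # orchestrate; reddwarf still manages the instance afterwards
            # (backups, resize, etc.), so the taskmanager stays
            raise NotImplementedError
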
21:34:29 #topic OpenVZ
21:34:31 imsplitbit: its go time
21:34:47 ok I've submitted everything for stackforge
21:34:53 waiting on another +2 from core
21:34:57 I'll bug them tomorrow
21:35:03 a new stackforge proj?
21:35:03 I've packaged up the driver
21:35:06 yep
21:35:11 cool!
21:35:13 for the openvz driver
21:35:24 the driver is now in a deb package
21:35:28 thats great news imsplitbit :)
21:35:34 that's awesome!
21:35:41 and installs itself in such a way that you *should* be able to use it properly, I'm working on testing it now
21:35:46 imsplitbit: link?
21:35:55 the package isn't in a repo yet
21:36:00 to the review silly goose
21:36:03 I'll link the github for the sources
21:36:05 oh
21:36:05 the source will be in the repo as well as package?
21:36:07 wait one
21:36:21 #link https://review.openstack.org/#/c/27421/
21:36:25 thats the review
21:36:30 sweet, thanks
21:36:33 vipul: I'll have a ppa for the package
21:36:38 and the source is in github
21:36:49 i see awesome
21:36:56 #link https://github.com/imsplitbit/openvz-nova-driver
21:37:08 I spoke with russell from the nova group
21:37:17 imsplitbit: you can add them to the list of reviewers
21:37:21 and they will get an email to review it
21:37:41 do you know who else you need to bother?
21:37:43 and they are working toward making everything use a different interface for drivers which will allow a more friendly way of implementing out of tree drivers to nova
21:37:51 * imsplitbit shrugs
21:37:55 monty taylor
21:37:56 "working toward" == no one is going to do it
21:38:09 well there is a blueprint that no one wants to tackle
21:38:11 :-)
21:38:17 I *may* try my hand at it
21:38:44 good stuff..
21:38:45 it's not going to be super easy but it may be worth the contribution to get more influence to get the driver in nova proper
21:38:52 def
21:38:53 I need a lesson in OpenStack sp33k...
21:39:00 you're going to set up a test that patches devstack install to run openvz?
21:39:32 vipul: that would be the next step yes. I'd have to extend devstack with a script that adds the ppa and package as a dep
21:39:37 but it *should* be do-able
21:39:43 imsplitbit: i added monty to the list of reviewers
21:39:53 cool
21:40:02 right now I'm supposed to package it and make sure it's available for people to use and then I have to move onto something else for a bit
21:40:05 but I'll keep up with it
21:40:06 imsplitbit: that would be neat..
21:40:06 nice
21:40:17 ebay dudes were interested
21:40:35 I have a contact at disney that is also interested in it
21:40:46 at least a few of us now are .. so maybe that'll be a forcing factor.. maybe..?
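
For readers unfamiliar with what an "out of tree" driver means here: roughly, a virt driver is just an importable class that nova's compute_driver option points at. A skeleton, loosely modeled on the Grizzly-era driver interface (the import path in the comment is hypothetical, the method set abbreviated; the real code lives in the openvz-nova-driver repo linked above):

    # Sketch only: the shape of an out-of-tree nova virt driver.
    # nova.conf would carry something like (path hypothetical):
    #   compute_driver = openvznovadriver.driver.OpenVzDriver
    from nova.virt import driver

    class OpenVzDriver(driver.ComputeDriver):
        def init_host(self, host):
            # one-time host setup (placeholder)
            pass

        def list_instances(self):
            # names of containers running on this host (placeholder)
            return []
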
21:40:52 hmm imsplitbit i see a extra newline in a file
21:40:56 not sure if i should comment on it or not
21:41:18 https://review.openstack.org/#/c/27421/4/modules/openstack_project/files/zuul/layout.yaml (2 newlines)
21:41:21 is there? crap I cleaned most of that out. which file?
21:41:23 hmmm
21:41:26 ok let me fix that up
21:41:29 don't comment on it
21:41:39 i wont ;)
21:41:44 but push will still nuke it
21:41:52 nuke everyones +2's
21:41:54 yeah I'm gonna leave it
21:41:58 HA u should fix
21:41:59 I've got 2 +2s
21:42:04 :-)
21:42:07 don't mess with a good thing
21:42:09 lol
21:42:12 I will, after
21:42:12 they will re +2 it
21:42:14 :-)
21:42:14 i know that feeling
21:42:20 imsplitbit: bad bad bad
21:42:21 lolol
21:42:26 and so will jenkins
21:42:28 :-P
21:42:36 speaking of jenkins
21:42:37 thats all I got on openvz
21:42:41 #topic jenkins
21:42:46 SlickNik: do tell
21:42:48 thanks imsplitbit
21:42:56 So, Matty worked on a fix.
21:43:11 But the fix isn't working as expected.
21:43:44 It's still clobbering the gerrit env during consecutive runs.
21:43:59 is it consecutive or concurrent only
21:44:16 we could dial down the concurrent runs if needed
21:44:17 The fix that he has now is busted for concurrent.
21:44:26 heh. ok so its being worked on? matty should join #reddwarf to keep us updated ;)
21:44:41 he promised to pull an all-nighter if necessary :D
21:44:41 So it should work if I reduce the number of executors back to 1.
21:44:50 ok lets at least do that for now SlickNik
21:44:53 But we don't want to have to do that.
21:44:54 we need to push some code
21:44:54 need to get him on irc
21:45:03 SlickNik: 1 > 0
21:45:08 yeah, I'm gonna switch back to 1 for now.
21:45:09 and thats what we have currently
21:45:12 agreed hub_cap.
21:45:18 sweet. <3
21:45:21 lol sweetheart
21:45:44 err… don't forget the space… :)
21:46:14 #action slicknik to switch back executors to 1 and then back up to 2 when the VM plugin is fixed.
21:46:23 okey movin on
21:46:26 #topic Backups
21:46:29 hold on, 1>0 but sweet < 3 ??
21:46:38 * hub_cap mind is blown
21:46:51 wow
21:46:51 nice one kagan
21:47:00 someone's paying attention
21:47:10 so i guess all thats needed w/ backups is it to pass some tests eh?
21:47:16 well do we have said tests?
21:47:25 integration i mean
21:47:26 I'm working on said tests.
21:47:47 SlickNik: the enable swift thing got merged, so that should be one less issue
21:47:50 But I hit an issue where we don't have the root password on the restored instance to be able to connect to it.
21:48:16 cuz it overwrites it eh?
21:48:27 u mean for the osadmin user?
21:48:28 yep, prepare()
21:48:44 err maybe it didn't.. grapex only +2'd no approval: https://review.openstack.org/#/c/27291/
21:48:45 either root or os_admin.
21:48:50 see this will be magically solved w/ heat /rimshot
21:49:19 oh really?
21:49:38 vipul: id like to see the jobs pass first
21:49:39 vipul: hub_cap said there was an issue
21:49:41 then ill approve it
21:49:42 root@localhost gets a random password when the db is initialized with secure()
21:50:07 let me see if our jenkins has cycles lol.. i'll re kick it
21:50:10 and os_admin password is stored in the my.cnf which is not part of the restored db.
21:50:20 do the integration tests run against stock mysql or percona on yalls jenkins? (kinda random)
21:50:35 think it just does mysql right now
21:50:44 need to do percona too soon
21:50:45 stock
21:51:00 cool. ya i bet once we go vm gate, we could do both a bit easier
21:51:04 hub_cap / vipul: I think there might be an issue with the int tests that we need to look at soon. I don't think they're running clean right now.
21:51:16 hmm really?
21:51:30 can someone do a fresh pull on a fresh vm and check em out?
21:51:34 any takers?
21:51:41 sure
21:51:52 <3 esp
21:52:29 /offtopic by the way, <3 is the phrase "less than or butt", kinda like less than or equal
21:52:31 test_instance_created seems to be failing consistently, not sure why.
21:52:41 SlickNik: ok we need to investigate for sure then
21:52:49 if thats the case, raise it in #reddwarf
21:52:52 esp: ^ ^
21:53:11 k
21:53:14 kicked off another run for the swift patch
21:53:15 so moving on?
21:53:15 yes, we need to look into that asap.
21:53:20 I will know in a bit.
21:53:22 +1
21:53:23 vipul: cool
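
One possible shape for the restore-password fix discussed above, sketched only; the function name and flow are guesses rather than the agreed solution. The idea is that the guest re-asserts its os_admin user from the password it already holds in my.cnf, instead of trusting whatever credentials came with the restored data:

    # Sketch only (names hypothetical): after a restore, recreate/re-grant
    # the agent's admin user using the password from the guest's my.cnf.
    # Assumes a cursor obtained e.g. while mysqld runs with
    # --skip-grant-tables, since the restored grant tables don't know
    # this user.
    def reset_admin_user(cursor, admin_password):
        # Re-enable grant handling first (needed under --skip-grant-tables),
        # then recreate/re-grant the user (MySQL 5.5-era syntax).
        cursor.execute("FLUSH PRIVILEGES")
        cursor.execute(
            "GRANT ALL PRIVILEGES ON *.* TO 'os_admin'@'localhost' "
            "IDENTIFIED BY %s WITH GRANT OPTION", (admin_password,))
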
21:53:27 #topic Notifications
21:53:36 so lets spend a sec talking about exists
21:53:39 yup, will keep you guys apprised of the progress with backups.
21:54:04 we need to add a periodic task
21:54:05 for exist event, id really like to see us just copy what nova does
21:54:11 wrt periodic tasks too
21:54:23 there is an update to oslo that changes the timers for periodics
21:54:27 i can dig into it
21:54:30 it used to set it off the _end_ of the previous task
21:54:42 but it will set it off the _begin_ of the previous task
21:54:51 so it keeps the timing ~consistent
21:54:54 need to refresh oslo then
21:55:02 vipul: i think its on the list, ehh juice?
21:55:13 did juice get periodic_task with his change?
21:55:20 I'm not sure if juice is pulling in periodic_task changes...
21:55:22 juice?
21:55:26 how should we handle running multiple nodes?
21:55:33 i'll have to see if it was one of the changes
21:55:41 #link https://review.openstack.org/#/c/26448/
21:55:42 it may not have been pulled int
21:55:44 in
21:55:58 robertmyers: lets look @ how nova does it, or if it does..
21:56:07 they might just disable it on any _other_ nodes
21:56:08 nothing obvious here: https://github.com/openstack/nova/tree/master/bin
21:56:22 not sure what the thing that runs periodically is
21:56:37 I did not do a complete refresh of oslo just what was dependent on notify
21:56:46 the periodic for the computes is a bin process
21:56:57 you cron it to run when you want
21:57:09 cp16net is it one of files in nova/bin?
21:57:14 should be
21:57:19 know which one?
21:58:16 i dont see it
21:58:16 the looping call was not pulled in - should I update the patch to include it?
21:58:50 juice: do it
21:59:15 well i remember seeing it a while back
21:59:17 k - I think robertmyers had a play list request for the patch as well
21:59:19 juice: I think that's a good idea
21:59:29 maybe that changed a little
21:59:33 as soon as the gate is working again :)
21:59:41 #link https://github.com/openstack/nova/blob/master/nova/compute/utils.py#L181
22:00:25 hub_cap: nice that's a good starting point
22:01:25 moving on?
22:01:38 sounds good by me.
22:01:44 +1
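
Since the plan above is to copy what nova does, a sketch of what a periodic exists-event task could look like, modeled on nova's notify_usage_exists (linked above); the reddwarf-side names are hypothetical and this assumes the oslo periodic_task module gets synced as discussed:

    # Sketch only: a periodic 'exists' audit in the nova style.
    from reddwarf.openstack.common import periodic_task  # assumes an oslo sync

    class TaskManager(periodic_task.PeriodicTasks):

        @periodic_task.periodic_task
        def publish_exists_events(self, context):
            # emit one 'exists' notification per active instance, analogous
            # to nova's instance_usage_audit periodic task; _active_instances
            # and self.notifier are hypothetical helpers.
            for instance in self._active_instances(context):
                self.notifier.notify(context, 'reddwarf.instance.exists',
                                     'INFO', {'instance_id': instance.id})
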
22:01:46 #topic RootWrap
22:02:26 That's just been on the back burner.
22:02:30 okey moving on?
22:02:35 yep
22:02:39 +1
22:02:45 Don't think anyone is actively looking at it?
22:02:52 we arent @rax
22:03:01 same here @hp.
22:03:01 it's a nicety
22:03:10 not critical
22:03:26 #Quotas XML
22:03:29 lol
22:03:33 #topic Quotas XML
22:03:38 i think this is from last wk
22:03:39 Seems like that was fixed long ago. :)
22:03:40 did we clear it up?
22:03:40 I thought we had this working.
22:03:49 Yep, the notes are just really old.
22:03:53 Yeah, esmute / esp cleared this up.
22:04:01 If I recall correctly.
22:04:09 ok removed
22:04:10 yup
22:04:10 I think I squashed the last xml quota bug
22:04:31 good job guys. :)
22:04:31 The whole hostname client thing is probably overcome by events as well.
22:04:43 i nuked it too grapex
22:04:51 #topic action/events
22:05:06 That's all you, hub_cap
22:05:15 ive got the code done honestly, and it works for all the complex instance based events
22:05:24 its not working yet for databases/users
22:05:30 but i figured that could come a bit later
22:05:41 i need to write tests... ive verified it manually at present
22:05:47 but im OBE
22:05:53 OBE?
22:06:15 overcome by events
22:06:17 wan kenobi?
22:06:22 ah, okay
22:06:23 lol
22:06:26 sure that too
22:06:36 im sooooo gonna die
22:06:43 oooops star wars spoiler
22:06:54 Don't let the heat get to you… :)
22:06:59 HA
22:07:12 if someone wants to pick it up to finish it, ill hand it off
22:07:15 SlickNik: The heat is on.
22:07:19 otherwise ill get to it prolly like in 2 wks
22:07:22 err in 1 wk
22:07:48 crickets
22:07:50 lol@grapex.
22:07:52 grapex: cue eddie murphy
22:08:02 lol vipul
22:08:05 ok so
22:08:16 #topic open discussion
22:08:23 XmlLint update! I got the code finished but found out there are some bugs.
22:08:27 anyone have anything to add to this wonderfully off topic meeting
22:08:28 btw the periodic usage events are fired off by a periodic task when compute is started up
22:08:29 #link https://github.com/openstack/nova/blob/master/nova/utils.py#L74
22:08:34 Just two though, when you request versions that aren't there.
22:08:55 woah cp16net good find
22:08:57 #link https://wiki.openstack.org/wiki/NotificationEventExamples
22:09:11 thx cp16net
22:09:30 grapex: Do the int tests for test_request_bogus_version and test_no_slash_with_version need to be changed?
22:09:44 Looks like they have been passing (when rdjenkins was still working well)…
22:09:58 wow cp16net grapex they did a lot of work to make sure usage works https://github.com/openstack/nova/blob/master/nova/utils.py#L370
22:10:12 SlickNik: maybe added to. The problem is we don't return an XML element or JSON table if the version isn't found.
22:10:20 I have a feeling that bug is going to be a royal pain...
22:10:26 lol i misspoke https://github.com/openstack/nova/blob/master/nova/utils.py#L421
22:10:46 grapex: have u added bugs for those problemos?
22:10:58 #link https://github.com/openstack/nova/blob/master/nova/compute/manager.py#L3600
22:11:03 SlickNik: Also, this may not be clear, but how the XML lint stuff works is it gets called every single time the client makes a request and gets a response, no matter what. So it runs for the versions tests even though we don't check the request / response body normally.
22:11:17 hub_cap: I can... its super minor. No one likes the poor versions API honestly.
22:11:21 sweet robertmyers our job is done :)
22:11:23 I guess I will just for the heck of it
22:11:24 robertmyers: line 3600!
22:11:30 grapex: exactly
22:11:32 diggin deep
22:11:36 let someone who cares handle it
22:11:41 yup if CONF.instance_usage_audit
22:11:46 robertmyers: more like OMG WHY IS THIS SO LARGE OF A FILE
22:12:00 lol
22:12:07 ah, grapex I see. Thanks for the clarifications.
22:12:13 we need an extensions api... just remembered
22:12:14 hub_cap: Because its pythonic to put multiple classes in one file and make them huge?
22:12:25 i think other openstack proj list extensions available?
22:12:33 oh....
22:12:35 #link https://blueprints.launchpad.net/nova/+spec/libvirt-exists-support
22:12:37 vipul: good call. we need to fix that
22:13:07 heh cp16net... we almost didnt have to do any work ;)
22:13:22 heh
22:13:24 is there a bp... i know we talked about it before vipul
22:13:30 don't think so
22:14:24 #link https://blueprints.launchpad.net/reddwarf/+spec/extensions-update
22:15:01 stevedore
22:15:08 done
22:15:08 I think this involves another update from oslo…
22:15:30 def robertmyers
22:15:34 thats the plan
22:16:08 +1 to stevedore
22:16:29 SlickNik: i know we will have to remove the openstack common extensions stuff
22:16:49 ok well weve got that under control now
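
On the stevedore suggestion above: the usual pattern is to publish extensions as setuptools entry points and enumerate them at runtime. A minimal sketch, with a made-up entry-point namespace for illustration:

    # Sketch only: enumerate API extensions registered under an
    # entry-point namespace using stevedore.
    from stevedore import extension

    mgr = extension.ExtensionManager(
        namespace='reddwarf.api.extensions',  # hypothetical namespace
        invoke_on_load=False)
    for ext in mgr:
        print(ext.name, ext.plugin)
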
22:16:52 anything else?
22:16:56 we are 15 over
22:17:00 i'm good ... good meet
22:17:04 word
22:17:17 I'm good as well.
22:17:31 Thanks all.
22:17:36 pees!
22:18:02 hub_cap, we can have the extensions discussion offline in #reddwarf.
22:18:07 imsplitbit: now thats just potty humor /rimshot
22:18:13 def SlickNik
22:18:16 #endmeeting