21:00:33 #startmeeting nova
21:00:33 Meeting started Thu Mar 12 21:00:33 2015 UTC and is due to finish in 60 minutes. The chair is mikal. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:00:34 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:00:36 The meeting name has been set to 'nova'
21:00:37 _o
21:00:40 \o
21:00:41 Hey everyone
21:00:41 \o
21:00:42 o/
21:00:53 o/
21:00:53 So, as usual the agenda is at https://wiki.openstack.org/wiki/Meetings/Nova
21:00:58 Let's get started
21:01:06 #topic Kilo release status
21:01:17 So, feature proposal freeze has now hit
21:01:18 o/
21:01:26 This shouldn't be a big surprise as it's the same process as Juno
21:01:37 We have about a week before we hit dependency freeze and string freeze
21:01:49 So we need to make sure things which change strings or deps land before then
21:02:14 The same applies for features in general, but especially for things which change deps and translatable strings
21:02:17 are we cutting a novaclient release before dep freeze?
21:02:31 mriedem: has there been change in the client to justify it?
21:02:36 idk
21:02:39 :)
21:02:41 mriedem: I'm not opposed
21:02:51 o/
21:02:52 melwitt: had a bug on the agenda for novaclient
21:02:54 Just not sure if anything worth the work has happened there in the last couple of weeks
21:02:58 there's a bug in the client that I put under Bugs on the agenda
21:03:06 we should cut a release, if nothing else there have been some docs changes that make our examples actually work
21:03:20 Ok, if there's a need we should do it
21:03:26 but yes, there have been bug fixes, doc fixes
21:03:41 nothing makes me more ragey than code examples that don't work :)
21:03:41 melwitt: should we delay that release until your bug fix lands?
21:04:35 So, the bug melwitt is referring to is 1431154
21:04:40 mikal: it's been broken for awhile, the volumes.* python apis are broken. it would be best if we fix that first I think
21:04:44 Sounds like a big deal to me, and is marked critical
21:04:58 Well, fixing a critical should trigger a release anyways
21:05:06 So that gives us two excuses to do a release next week
21:05:22 melwitt: the bug doesn't have a code review associated with it
21:05:30 melwitt: and no assignee
21:05:49 melwitt: they aren't exactly broken, they route differently. fixing that should be careful because it would be easy to regress the cli in the process
21:05:57 Do we also need to land the microversions support in novaclient?
21:06:17 o/
21:06:25 mikal: yeah, I opened it last night when I was reminded it was broken. and by broken I mean they 404 because they route to the wrong place
21:06:43 tonyb: on the api side, yes. it seems like the cli should use best available.
21:06:50 tonyb: I don't see any client reviews in the priority etherpad, is that an oversight?
21:07:15 and to fix it, I wasn't sure if I should just fix the urls to point at the nova volumes api proxy or to make them use the same magic the cli does to make it go direct to cinder
21:07:29 mikal: it's an oversight
21:07:39 melwitt: not all deployments expose cinder endpoints...
21:07:42 melwitt: honestly, I vote magic
21:08:00 mikal: then nova volume-list doesn't work for them
21:08:04 sdague: we should fix that
21:08:05 list of unreleased changes in python-novaclient - http://paste.openstack.org/show/191940/
21:08:06 and hasn't for 2 years
21:08:29 this behavior has existed like this for > 2 years
21:08:34 #action Do a python-novaclient release next week, but fix 1431154 first
21:08:38 changing that... is something that requires a lot of thought
21:08:56 Oh, I am ok with providing incentive to deployers to expose cinder
21:09:15 So, we've decided we should do a release next week
21:09:21 ++
21:09:29 I can ping openstack-dev and ask for a list of the microversion patches we need to land first
21:09:30 yeah, magic would at least make volumes.list() do the same thing that 'nova volume-list' does
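For readers following the volumes.* thread above, here is a minimal sketch of the breakage being discussed, assuming the Kilo-era python-novaclient API; the credentials and endpoint are placeholders, not from the meeting. `nova volume-list` resolves the volume service from the catalog and talks to cinder directly, while the Python bindings route to a Nova URL that 404s (bug 1431154).

```python
# Minimal sketch, assuming the Kilo-era python-novaclient API; credentials
# and endpoint below are placeholders.
from novaclient import client
from novaclient import exceptions

nova = client.Client('2', 'demo', 'secret', 'demo',
                     'http://keystone.example.com:5000/v2.0')

try:
    print(nova.volumes.list())  # the call melwitt reports as broken
except exceptions.NotFound:
    # The 404 under discussion: the request routed to the wrong place.
    # The CLI's "magic" avoids it by going straight to the cinder endpoint.
    print("volumes.list() 404'd against the Nova endpoint")
```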
21:09:54 #action Mikal to chase microversion changes for the client before release
21:10:13 Is there anything else we need to remember to do for novaclient before release?
21:10:18 I'll add the novaclient microversion review to the etherpad now, I put it under Open Discussion for today
21:10:32 melwitt: I just moved it up to priorities
21:10:34 melwitt: there's only one?
21:10:50 melwitt: because: microversions, hope that's okay
21:10:57 microversion review - https://review.openstack.org/#/c/152569/
21:11:06 mikal: I think so. I'm not a microversion expert though, so I didn't know if it does everything needed to support
21:11:20 Ok, sounds like I am off the hook for that email then
21:11:37 So, I feel like we've got novaclient under control
21:11:44 dansmith: cool, thanks!
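As background on the microversion review (152569): a hedged illustration, not taken from the meeting, of what Nova API microversions look like on the wire in Kilo; the token and endpoint are placeholders.

```python
# Hedged illustration of the Nova API microversion mechanism the client
# review (152569) builds on; token and endpoint are placeholders.
import requests

resp = requests.get(
    "http://nova.example.com:8774/v2.1/servers",
    headers={
        "X-Auth-Token": "PLACEHOLDER-TOKEN",
        # Request a specific microversion; omitting the header yields the
        # minimum supported version.
        "X-OpenStack-Nova-API-Version": "2.3",
    },
)
# Nova echoes the version it actually honoured, which is how a client can
# negotiate the "best available" behaviour tonyb mentions.
print(resp.status_code, resp.headers.get("X-OpenStack-Nova-API-Version"))
```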
21:14:32 +1 21:14:42 +1 21:14:42 yeh, lets delete 21:14:48 +1 21:14:54 sure 21:14:55 keep the strikethroughs for the bug list at the bottom though 21:14:56 Cool 21:15:02 although it makes it look like we've done a lot :) 21:15:03 that's good for metrics on merged stuff 21:15:04 so one things on priorities 21:15:08 Yeah, those are after the open ones so that's ok with me 21:15:18 dansmith: do tell 21:15:18 sdague: we switched it into a counter 21:15:22 dims: oh, ok 21:15:41 I'm halfway done with my context-ectomy set, which is all merged, but I have a couple more I need to put up yet 21:15:55 it's not really a feature, but also not a bug, it's cleanup 21:16:05 however, I want to get them all squared away in kilo, 21:16:22 because in lemming when we move to oslo.versionedobejcts, the syntax will change slightly 21:16:22 dansmith: will the remaining ones be invasive? 21:16:34 and if we clean up in kilo then backports will be easier, and kilo will be fully clean 21:16:40 nope, it's almost removing dead code 21:16:51 I think its ok to keep going with those then 21:16:56 I made it dead code, now it needs cleanup 21:16:59 yep, +1 21:17:00 I feels like a bug fix to me 21:17:02 okay, cool, just wanted to make sure we were okay with that 21:17:06 Even if its not a bug 21:17:08 mikal: it kinda is, but.. 21:17:16 anyway, sounds uncontroversial 21:17:29 mikal: maybe it would be worth mentioning a FeatureFreezeException process before FF ? 21:17:32 Yeah, in the same way that I am ok with continued cleanup of API unit tests 21:17:43 FFE before FF? 21:17:45 we've had 2 FFEs 21:18:02 bauzas: you mean for priority features? 21:18:05 mriedem: nah, explain the process, not acting on it: 21:18:09 FFEs at this point would only be for priorities, right? 21:18:10 well, I thought we were always good for test work until we go into deep freeze 21:18:12 mikal: I mean for anyone 21:18:21 bauzas: that's been and gone 21:18:26 there shouldn't be FFEs after 3/19 21:18:31 bauzas: the non-priority FFE process was weeks ago 21:18:38 oh ok 21:18:53 :) 21:19:06 OK, back to priorities 21:19:16 Do we have any that we think are at risk of not finishing their kilo work? 21:19:29 do we add any priorities based on the ops meetup feedback? 21:19:39 some have already been deferred thanks to john 21:19:41 dims: not for kilo I wouldn't think 21:19:48 dims: not unless they are bugs 21:19:57 cool mikal mriedem 21:19:59 that's business as usual 21:20:26 Ok, it sounds like we can move on again 21:20:32 #topic Gate status 21:20:37 a-ok 21:20:40 So, I've been travelling this week so a bit distracted 21:20:43 Tell me of the gates 21:20:46 ^ 21:20:50 I am disappointed nothing blew up 21:20:53 dims: I think the only kilo priority is the scheduler reporting bug, which I'll work on tomorrow 21:20:58 things blow up, just not our problem right now 21:21:04 Ok, good 21:21:05 seems to be pretty good aside from being jammed full of 300 things racing towards freeze 21:21:07 but stable 21:21:11 If everything was really ok I'd be confused 21:21:22 sdague: there was a quota one that jogo dug up too 21:21:24 sdague: which bug ? 21:21:26 So, on that gate wedging thing... 21:21:33 bauzas: the one we talked about this morning 21:21:35 oh infra is having issues with hosts, does that make you feel better? 21:21:35 sdague: that's a bug, not a new priority though right? 21:21:38 sdague: you mean the one we discussed ? 
21:17:29 mikal: maybe it would be worth mentioning a FeatureFreezeException process before FF ?
21:17:32 Yeah, in the same way that I am ok with continued cleanup of API unit tests
21:17:43 FFE before FF?
21:17:45 we've had 2 FFEs
21:18:02 bauzas: you mean for priority features?
21:18:05 mriedem: nah, explain the process, not acting on it:
21:18:09 FFEs at this point would only be for priorities, right?
21:18:10 well, I thought we were always good for test work until we go into deep freeze
21:18:12 mikal: I mean for anyone
21:18:21 bauzas: that's been and gone
21:18:26 there shouldn't be FFEs after 3/19
21:18:31 bauzas: the non-priority FFE process was weeks ago
21:18:38 oh ok
21:18:53 :)
21:19:06 OK, back to priorities
21:19:16 Do we have any that we think are at risk of not finishing their kilo work?
21:19:29 do we add any priorities based on the ops meetup feedback?
21:19:39 some have already been deferred thanks to john
21:19:41 dims: not for kilo I wouldn't think
21:19:48 dims: not unless they are bugs
21:19:57 cool mikal mriedem
21:19:59 that's business as usual
21:20:26 Ok, it sounds like we can move on again
21:20:32 #topic Gate status
21:20:37 a-ok
21:20:40 So, I've been travelling this week so a bit distracted
21:20:43 Tell me of the gates
21:20:46 ^
21:20:50 I am disappointed nothing blew up
21:20:53 dims: I think the only kilo priority is the scheduler reporting bug, which I'll work on tomorrow
21:20:58 things blow up, just not our problem right now
21:21:04 Ok, good
21:21:05 seems to be pretty good aside from being jammed full of 300 things racing towards freeze
21:21:07 but stable
21:21:11 If everything was really ok I'd be confused
21:21:22 sdague: there was a quota one that jogo dug up too
21:21:24 sdague: which bug ?
21:21:26 So, on that gate wedging thing...
21:21:33 bauzas: the one we talked about this morning
21:21:35 oh infra is having issues with hosts, does that make you feel better?
21:21:35 sdague: that's a bug, not a new priority though right?
21:21:38 sdague: you mean the one we discussed ?
21:21:38 ok
21:21:44 dansmith: correct
21:21:45 If I knew what patches to babysit through merge I'd be happy to sneak things in over the weekend while everyone sleeps
21:21:57 if it's wedging the gate, i think that's covered
21:21:59 So, if you're sitting on an approval that is in gate hell, ping me and I can keep an eye on it
21:22:01 and the quota thing is a can of worms
21:22:23 anteaya: it does, thanks
21:22:30 mikal: happy to help
21:22:38 yeh, the quotas thing is mostly figuring out a test strategy I think
21:22:47 and ++ for the nocturnal +A's they are the best
21:22:49 at least in the short term
21:22:53 So yeah, if you have an approved patch you deeply care about merging, email me and I'll keep an eye on it
21:23:03 Moving on again I suspect
21:23:04 I think jogo was thinking on that yesterday
21:23:19 #topic Bugs
21:23:28 Nova is finally bug free, yes?
21:23:38 dansmith: yup, right now trying to reproduce the issues
21:23:47 jogo: cool
21:24:12 so on bugs
21:24:14 this came up last week https://bugs.launchpad.net/nova/+bug/1323658
21:24:16 Launchpad bug 1323658 in OpenStack Compute (nova) "Nova resize/restart results in guest ending up in inconsistent state with Neutron" [Critical,Confirmed]
21:24:16 mikal: i am trying to keep New/Undecided around 30
21:24:17 This cpu time thing looks valid, but has a proposed fix
21:24:33 i put up several changes to help debug which are merged, and a revert of the skipped tests in tempest
21:24:38 after like 9 rechecks those haven't failed
21:24:42 so i'm not sure that's critical for k-3
21:24:54 mriedem: this is for 1323658, yes?
21:24:59 yes
21:25:08 sdague: which reminds me, wanna drop the -2 here? https://review.openstack.org/#/c/161768/
21:25:15 mriedem: so you're saying it might not be a critical bug any more?
21:25:24 i think that's exactly what i said :)
21:25:40 mriedem: done
21:25:43 okay, thanks. I put that bug there because I was concerned if it was a serious issue we needed to workaround asap
21:25:47 mriedem: can you uncritical it in LP then please?
21:25:50 sure
21:25:53 Ta
21:26:23 otherwise https://launchpad.net/nova/+milestone/kilo-3
21:26:25 mriedem: though you failed unit tests in tempest now
21:26:39 sdague: that's a thing
21:26:42 known issue
21:27:27 i'm not sure why https://bugs.launchpad.net/nova/+bug/1383465 is still critical for k-3 when it's been around since last october?
21:27:29 Launchpad bug 1383465 in OpenStack Compute (nova) "[pci-passthrough] nova-compute fails to start" [Critical,In progress] - Assigned to Yongli He (yongli-he)
21:27:38 (ignore my earlier comment, got mixed up about which bug we were talking about)
21:27:40 where is the patch for that?
21:27:45 it's a pretty important issue
21:27:48 https://review.openstack.org/#/c/131321/
21:28:07 mriedem: since it has the potential for nova-compute to bail out totally
21:28:20 yeah, i mean, it *sounds* bad
21:28:26 okay, we need to get eyes on that one I think
21:28:32 but yeah, maybe not critical
21:28:39 but we broke that in juno
21:28:47 and it would be real bad if we don't merge the fix in kilo :/
21:29:24 agree
21:29:29 I will add it to my review list for later today unless people beat me to it
21:29:45 this one I put https://bugs.launchpad.net/nova/+bug/1372670 libvirt bug fixed upstream in newer release, but the proposed fix involves a workaround config option and can't seem to get an agreement whether to have it be configurable or not
21:29:46 Launchpad bug 1372670 in OpenStack Compute (nova) "libvirtError: operation failed: cannot read cputime for domain" [High,In progress] - Assigned to Eduardo Costa (ecosta)
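For reference, a hedged sketch, not the actual patch under review, of the shape of the fix debated for bug 1372670: tolerate libvirt failing to report cputime for a domain that is racing with shutdown, rather than letting the whole periodic task die.

```python
# Hedged sketch (not the actual patch) of the fix shape debated for bug
# 1372670. Newer libvirt fixes the race upstream, which is why the review
# argued over whether tolerating the error should sit behind a config option.
import libvirt

def safe_cpu_time(dom):
    try:
        # virDomain.info() returns (state, maxMem, memory, nrVirtCpu, cpuTime)
        return dom.info()[4]
    except libvirt.libvirtError:
        # Older libvirt raises "operation failed: cannot read cputime for
        # domain" when the guest is shutting down; report nothing rather
        # than letting the exception propagate.
        return None
```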
21:30:17 this is the only other critical k-3 bug https://bugs.launchpad.net/nova/+bug/1431201
21:30:18 Launchpad bug 1431201 in OpenStack Compute (nova) "kilo controller can't conduct juno compute nodes" [Critical,In progress] - Assigned to Sylvain Bauza (sylvain-bauza)
21:30:32 mriedem: we're on that one
21:30:35 k
21:30:42 mriedem: it is a recent regression
21:30:51 I dunno why,
21:31:06 sdague: FYI for that one, if we can put 'old' nova in a venv we can gate on restarting old nova-compute with new nova-* running
21:31:10 mriedem: I'm on that bug, fairly good progress with the help of dansmith
21:31:11 but I always find it funny when people use "conduct" as the verbular description of what conductor does :)
21:31:15 Can I request targeting https://review.openstack.org/136931 / bug 1424462 to k-3 as well, the code's been up for a while it just needs another +2
21:31:17 bug 1424462 in OpenStack Compute (nova) "Nova/Neutron v3 authentication" [Medium,In progress] https://launchpad.net/bugs/1424462 - Assigned to Jamie Lennox (jamielennox)
21:31:20 jogo: or we could look at the logs
21:31:41 sdague: well, restarting compute will find a whole other class of issues
21:31:49 jamielennox: I will add that to my review list too
21:31:49 sdague: well we don't test the startup code for old nova-compute in partial grenade
21:31:49 sdague: but for this one, looking in the logs is good
21:32:03 dansmith: +1
21:32:09 but if this one was in the logs all along ... sigh
21:32:10 jogo: yeh, that's just a lot of infrastructure that we don't have ready yet
21:32:18 jogo: right, and it breaks too often
21:32:26 but it's non-trivial
21:32:32 I think with multinode it would be easier right?
21:32:34 sdague: yup, just pointing that out for when that infra is in place
21:32:46 there is multinode now
21:32:48 jogo: yeah it wasn't noticed because I was not aware that grenade was not covering n-cpu startups nor periodic tasks
21:32:51 in experimental
21:32:55 dansmith: actually yeah ... wouldn't be hard to throw something together for that but may be overkill honestly
21:33:06 mriedem: right, I mean trying to do a test that restarted it would be easier in multinode vs. adding a venv supported thing
21:33:13 bauzas: well it should cover periodic tasks, but tempest means they don't always fail
21:33:24 jogo: well, it'd be cool I think, but yeah
21:33:43 jogo: I mean that it was something unnoticeable by Tempest
21:33:46 yeh, lets take this offline. Honestly, for this case, log inspection will nail this regression (and others like it)
21:33:55 sdague: agreed
21:33:56 sdague: agreed
21:34:01 So moving on now?
21:34:09 and in L with some venv support, we can do other things
21:34:15 but that's all post release
21:34:15 you mean lemming
21:34:37 #topic Stuck reviews
21:34:45 * mdbooth has one
21:34:50 https://review.openstack.org/#/c/158269/
21:34:50 So, we have https://review.openstack.org/#/c/158269/ as a candidate for discussion
21:35:20 My argument on this one is the same as it was before.
21:35:32 It's a serious bug, and this is the only fix on the table for it.
21:35:41 well, there are other ways to fix this
21:35:45 like the one you proposed first
21:35:49 which I'd rather see than this
21:35:51 The proposed fix simply adds an assertion around existing code assumptions.
21:35:53 * jogo doesn't want to have the exact same discussion again
21:35:57 jogo: +1
21:35:59 So it's good.
21:36:25 The objection to it seems to be that it makes a change to an unpopular feature without removing it.
21:36:41 I can rebut the objections specifically if that would help anybody.
21:37:05 So, I think there are two issues here
21:37:17 We should delete all of the instances for unlucky deployers
21:37:22 That seems ... bad
21:37:41 I think we can discuss the underlying design problem, but that's probably better at the summit
21:37:46 so, the evacuate code is broken, and broken for other hypervisors that this doesn't address. I've got a spec up for lemming
21:37:51 and I'm going to commit to fixing it
21:37:51 And not something we're going to fix in kilo
21:38:55 spec: https://review.openstack.org/#/c/161444/
21:39:32 summit topic?
21:39:35 at least meetup day?
21:39:41 For the larger problem, yes
21:39:43 dansmith: I'm sure that's very good, but it's not good to go now, and it doesn't fix any of the other problems.
21:39:45 we need a summit topic around this and the larger problem for sure
21:40:02 Do we think we can land something to stop the instance loss problem in kilo that won't cause us heaps of pain later?
21:40:03 mdbooth: it will fix *ever* deleting instances when we shouldn't
21:40:03 Also, there is nothing in my fix which would prejudice fixing evacuate.
21:40:24 mikal: This fix is good to go.
21:40:41 mikal: we landed a workaround flag that lets you disable the destructive behavior if you want
21:40:44 mdbooth: well, except that three cores -1'ed it
21:41:02 dansmith: did that land? I thought that was blocked?
21:41:08 mikal: You have to read why, though.
21:41:19 mikal: which is a stopgap which is good enough for me in the short term, and for particularly susceptible hypervisors they can fix it in their own drivers
21:41:20 mikal: As I said, this is an unpopular feature.
21:41:22 sdague: nope, it landed
21:41:32 There seems to be a kneejerk reaction against anything relating to it.
21:41:34 sdague: t'was unblocked
21:42:04 2 conversations? :)
21:42:11 https://review.openstack.org/#/c/159890/
21:42:19 that's the config option added to disable
21:42:20 I feel like we're not going to resolve this before we run out of time
21:42:23 so we have now just had the exact same conversation as we did last week
21:42:24 mdbooth: I honestly think it deserves a discussion at the Summit for this
21:42:26 can we just move on
21:42:42 So...
21:42:47 mdbooth: is it related to this? https://blueprints.launchpad.net/nova/+spec/vmware-one-nova-compute-per-vc-cluster please add it there if so
21:42:48 bauzas: This is a bugfix. Summit is for design discussion.
21:42:55 dims: No.
21:43:03 The workaround flag stops the instance loss situation if the deployer uses it with vmware or ironic, yes?
21:43:09 mikal: yes
21:43:11 mikal: No
21:43:19 wtf? of course it does
21:43:32 Firstly, the user would have to manually enable it, which they never would.
21:43:37 that's what he just said
21:43:45 let's move on
21:43:47 mdbooth: IMHO that's more than a fix, but rather something directly hitting operators
21:43:49 Secondly, if anybody enables it, it breaks evacuate.
21:44:05 Ok, I thought I'd give it one more go.
21:44:09 I'll abandon it.
21:44:11 So, we have many flags where we expect the operator to know what to do
21:44:15 That's what documentation is for
21:44:16 operators will probably read the release notes before upgrading
21:44:18 Thanks for listening.
21:44:24 this should be a flag called out in the release notes
21:44:33 mriedem: I'll add it
21:44:37 It's got a docimpact on it
21:44:47 If you enable it, it breaks evacuate
21:45:00 My fix doesn't need to be enabled, and doesn't break evacuate.
21:45:03 evacuate is broken for those drivers
21:45:05 FWIW
21:45:07 config is always redone per release, docimpact doesn't mean much otherwise i don't think if you don't follow up yourself with release notes
21:45:22 My fix doesn't need to be enabled by anybody, and doesn't break evacuate for anybody.
21:45:23 MOVING ON
21:45:35 jogo: breathe deeply
21:45:36 +1 for moving on
21:45:42 Ok, let's move on
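For operators following along: a hedged sketch of how the stopgap flag from review 159890 would be set in nova.conf. The option name below is believed to match the Kilo-era workarounds group, but treat it as illustrative; the merged review and release notes are authoritative.

```ini
# Hedged sketch of the stopgap from review 159890 as it would appear in
# nova.conf; the option name is illustrative, check the merged review.
[workarounds]
# Setting this to False disables the destructive cleanup of evacuated
# instances when the source compute host comes back up, at the cost of
# breaking evacuate semantics for that host (the trade-off debated above).
destroy_after_evacuate = False
```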
21:45:52 #topic Open Discussion
21:46:00 Nothing on the agenda
21:46:00 unrelated to this - https://review.openstack.org/#/c/150929/ - ec2 deprecate
21:46:04 ooh
21:46:07 a much better topic
21:46:10 Heh
21:46:17 sdague: tempest seems pretty unhappy
21:46:17 I wanted to bring up https://review.openstack.org/#/c/157501/ at one of these meetings.
21:46:18 sdague: I tried to +2 that eight times, but gerrit wouldn't let me
21:46:20 though I have to fix code one more time because of oslo_log
21:46:21 sdague: so it seems like ops at the meetup were not freaked out?
21:46:21 something with that s3server change
21:46:34 they were less freaked out for sure
21:46:48 they want a story forward about ec2 being available in the ecosystem
21:46:49 this is targeting L now so I didn't want to take time in priorities but obondarev_ has a new patchset up for the nova-net to neutron migration proxy: https://review.openstack.org/#/c/150490/
21:46:59 sdague: I haven't gotten around to tweaking the review to meet ttx's requirements, have you?
21:47:06 I did not get yelled at when I talked about the path into the stackforge project
21:47:19 I talked with ttx at the meetup, he's ok with deprecation now
21:47:25 Pardon?
21:47:29 it was a word definition issue
21:47:32 Did you talk to him while holding a knife?
21:47:37 nope
21:47:37 lol
21:47:38 umm http://logs.openstack.org/29/150929/3/check/gate-nova-python27/062df96/console.html#_2015-03-12_19_07_42_323
21:47:47 jogo: yes, oslo_log
21:47:48 jogo: yeah, needs work a bit
21:47:50 fixing now
21:48:02 they were in philly, so "at gunpoint" could have been a thing
21:48:14 All I know about philly is the cheese
21:48:23 cheese steak
21:48:26 hmmm... cheese steak
21:48:28 cream cheese
21:48:32 cheeeeese
21:48:32 Yeah, that one
21:48:39 sdague: thanks!
21:48:47 So apart from discussing our love of cheese are we done here?
21:48:50 * bauzas heard the word 'cheese' ?
21:48:52 anyway, ttx is supportive of this being called deprecation, I asked him to add comments there
21:48:58 sdague: was that a trick to allow dansmith to +2 it again?
21:48:59 mikal: bknudson had a review to look at
21:49:03 wanted to bring up https://review.openstack.org/#/c/157501/ quickly...
21:49:05 jogo: :D
21:49:05 heh
21:49:05 sdague: that is the exact opposite of what he told me
21:49:10 sdague: so I will proceed to be confused
21:49:15 essentially the start of a long project, so wanted some quick feedback on it.
21:49:19 mikal: let's stack up some +2s on there, let ttx comment and you can +W, okay?
21:49:30 as in, if it's a non-starter.
21:49:31 dansmith: works for me
21:49:43 * dansmith drops the mic and walks out
21:50:00 bknudson: If it's part of a long project that feels like an L spec to me
21:50:19 yeh, this seems more spec-like to me
21:50:21 tonyb: yes, I'll put a spec up for L.
21:50:22 bknudson: yeah, I think a discussion of why it's needed would be good
21:50:29 bknudson: and a spec is the right framework for that
21:50:33 Cool
21:50:46 mikal: can haz early mark?
21:50:48 bknudson: FWIW I think the change is okay but there is no context on why and what it gets us?
21:50:58 tonyb: ++
21:50:58 For example, if we were going to do that, how does it map into perhaps using a rootwrap daemon in the future?
21:51:06 anyone able to review https://review.openstack.org/#/c/150490/ next week at all? or way too busy?
21:51:26 anteaya: no promises at this point in the release cycle unfortunately
21:51:31 also, unless we actually parameter filter in our rootwrap I think it's pointless for compute
21:51:39 mikal: figured as much, just need to tell obondarev_ that I tried
21:51:54 the goal of this is to get a start on doing rootwrap or whatever correctly.
21:52:00 Sounds like we have a plan for the priv separation thing though
21:52:09 bknudson: right, but you've looked at the compute policy right?
21:52:18 bknudson: I know neutron has been doing stuff around that, so it would be interesting to learn from what they've done as well
21:52:29 it's got at least half a dozen own-your-box escalations
21:52:34 But yeah, I think this meeting is done?
21:52:40 soo done
21:52:44 Heh
21:52:46 we looked at the policy over the OSSG meetup. There's going to be a lot of work to get it all cleaned up.
21:52:46 Going...
21:52:51 can i talk about evacuate some more with jogo?!
21:52:56 ...going...
21:53:03 haha
21:53:05 mriedem: sure, just not here
21:53:05 will spec it.
21:53:06 * dansmith smacks mriedem with a fish
21:53:10 ...gone
21:53:13 #endmeeting
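As a closing note on the rootwrap discussion above: a hedged illustration of what "parameter filtering" means in oslo.rootwrap filter files. The entries are illustrative, not Nova's actual compute.filters.

```ini
# Hedged illustration of the parameter-filtering point; entries are
# illustrative, not Nova's actual compute.filters. A bare CommandFilter
# lets the service run the command as root with ANY arguments, which is
# where "own-your-box" escalations come from; RegExpFilter pins each
# argument to a pattern.
[Filters]
# Permissive: any chown invocation escalates to root.
chown: CommandFilter, chown, root
# Stricter: only chown of a numeric uid on a pty device is allowed.
chown_pts: RegExpFilter, chown, root, chown, \d+, /dev/pts/\d+
```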