21:00:48 <russellb> #startmeeting nova
21:00:49 <openstack> Meeting started Thu Aug 15 21:00:48 2013 UTC and is due to finish in 60 minutes.  The chair is russellb. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:00:50 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:00:52 <openstack> The meeting name has been set to 'nova'
21:00:59 <russellb> Hello, everyone!
21:01:01 <hemna> hey
21:01:02 <mrodden> hi
21:01:03 <russellb> #link https://wiki.openstack.org/wiki/Meetings/Nova
21:01:06 <n0ano> o/
21:01:09 <NobodyCam> o/
21:01:13 <hartsocks> o/
21:01:13 <russellb> annegentle: around?
21:01:14 <alaski> hi
21:01:17 <cyeoh> Hi
21:01:17 <dansmith> o/
21:01:26 <annegentle> o/
21:01:28 <russellb> great
21:01:31 <llu-laptop> o/
21:01:34 <russellb> #topic compute admin guide
21:01:40 <annegentle> woo I'm up
21:01:42 <mriedem> hi
21:01:49 <russellb> annegentle: yep!  what's up?
21:02:15 <annegentle> so the guide mostly known as the Compute Administration Guide will likely be removed under that title, and distributed to the new Configuration Reference, User Guide, and Operations Guide
21:02:34 <annegentle> it got really massive, bloated, etc. and we're parting it out.
21:02:53 <russellb> so, restructuring based on target audience?
21:02:54 <annegentle> I want to make sure you're all good with that, and that I'm not missing some reason not to do that.
21:03:01 <russellb> mostly anyway
21:03:12 <annegentle> yes, the audience and their tasks. We find that "admin" and "operator" are mostly synonymous.
21:03:19 * russellb nods
21:03:45 <annegentle> it may mean some "where'd that page go?" at release time, but I'm hopeful we'll get that sorted out.
21:04:08 <russellb> could just be replaced with a page for a while explaining the split?
21:04:41 <annegentle> also, about 1/4th of docs visitors go to /trunk/, so we'll be working towards only about 2 guides going under the docs.openstack.org/havana umbrella
21:05:27 <annegentle> russellb: could be, yeah. I think the move towards only 2 documents under a release umbrella and many documents under /current/ or /trunk/ will help people see where the site's going
21:05:46 <annegentle> we'll certainly keep an eye out for patches as you near the feature freeze, and help you find the right placement
21:05:47 <russellb> ok, well sounds fine to me, and i appreciate the heads up
21:06:07 <russellb> any questions or concerns from anyone?
21:06:16 <russellb> or virtual high fives for the docs team?
21:06:21 <annegentle> heh
21:06:28 * dansmith high-fives annashen
21:06:30 <dansmith> dammit
21:06:33 * dansmith high-fives annegentle
21:06:34 <dansmith> tab fail
21:06:35 <annegentle> heh heh
21:06:50 * annegentle lobs an 'e' to dansmith
21:06:56 <dansmith> yeah, I suck
21:07:07 <russellb> i'm sure annashen appreciated the high five, too.
21:07:14 <dansmith> heh
21:07:18 <annegentle> I really think the amount of content we have is astonishing, but it also concerns me that it's outdated all the time :)
21:07:29 <annegentle> so be on the lookout for DocImpact bugs you can pick up because you know the code
21:07:43 <annegentle> that's all I got! We're ready for the onslaught :)
21:07:52 <russellb> yeah ... i wish we did a better job helping keep it up to date, beyond just tagging things with DocImpact
21:08:01 <russellb> but i haven't come up with any brilliant suggestions
21:08:19 <russellb> obviously, getting in there and writing stuff would help :)
21:08:41 <russellb> but how to generally encourage that kind of thing as features come in
21:08:46 <russellb> something to think about some more
21:08:53 <comstud> ohmtg
21:09:04 <russellb> comstud: way to announce how late you are :-p
21:09:13 <annegentle> yeah definitely just get it out of your brain onto text of any form
21:09:17 <annegentle> heh comstud
21:09:21 <timello> (late) o/
21:09:32 <harlowja> (late 2) \o
21:09:34 <annegentle> the docs team can take it the rest of the way
21:09:37 <russellb> annegentle: yeah, maybe we'd have better luck just ensuring there's a wiki page
21:10:06 <russellb> ensuring the blueprint includes or links to some form of user doc content before the feature patch is merged ... something like that
21:10:11 <annegentle> russellb: we read the blueprint pages as much as possible, those are good places to braindump
21:10:37 <russellb> yeah, i just don't think we (in nova anyway) really put as much effort into ensuring the content is there by the end of it
21:10:50 <russellb> sometimes the user details aren't finalized until the very end of development, and the blueprint content is usually written much earlier
21:11:01 <russellb> i know you know all of this well :)
21:11:10 <annegentle> russellb: we have really good tagging of our docimpact stuff and automated much of it
21:11:12 <russellb> just acknowledging that we could and should do better i guess
21:11:17 <annegentle> russellb: and, we automated a bunch of config info
21:11:22 <russellb> nice
21:11:31 <annegentle> sure, we keep eating the elephant a bite at a time
21:11:35 <russellb> ha
21:11:37 <russellb> fair enough
21:11:39 <russellb> anything else?
21:11:56 <annegentle> that's it
21:11:59 <russellb> cool, thanks!
21:12:04 <russellb> #topic havana-3 status
21:12:12 <russellb> #link https://launchpad.net/nova/+milestone/havana-3
21:12:31 <russellb> deadline to have patches *proposed* is EOD Wednesday, August 21
21:12:34 <russellb> less than 1 week away
21:12:53 <russellb> we're actually in pretty good shape, lots of stuff is already up for review
21:13:11 <russellb> then it'll be an insane review rush up to the feature freeze ...
21:13:20 <russellb> so with all that said ... let's dive into specific blueprints
21:13:28 <russellb> any specific ones folks here would like to cover?
21:13:58 <russellb> anything on this list that people know won't make it and should be deferred?
21:14:38 <russellb> vishy started a thread about his live snapshot work: http://lists.openstack.org/pipermail/openstack-dev/2013-August/013688.html
21:14:40 <russellb> #link http://lists.openstack.org/pipermail/openstack-dev/2013-August/013688.html
21:14:46 <russellb> we talked about it last week
21:15:00 <russellb> looks like he's trying to build some consensus around whether that will go in or not
21:15:11 <russellb> so if you have an interest in the feature, please provide input on that thread
21:15:49 <russellb> no blueprints anyone wants to cover?  everyone heads down in panic coding mode?  :)
21:15:58 <timello> hehe
21:16:11 <dansmith> russellb: yes
21:16:13 <russellb> timello: how's your cold migrations migration coming along?
21:16:22 <harlowja> heads down, trying not to panic, ha
21:16:41 <mriedem> russellb: this needs review: https://blueprints.launchpad.net/nova/+spec/powervm-configdrive
21:16:42 <yjiang5> russellb: can low priority ones give information?
21:16:53 <mriedem> russellb: does the blueprint owner just change status to 'needs review'?
21:17:04 <russellb> mriedem: yeah, or I (or anyone in nova-drivers) can
21:17:09 <mriedem> ok
21:17:17 <russellb> updated
21:17:25 <russellb> yjiang5: sure
21:17:36 <timello> russellb: we got some important changes merged, I'm working on the final step, which is moving things that are in compute.manager to conductor. Everything from the scheduler has been moved already
21:17:57 <timello> there is a WIP for it, hopefully today I'll submit the ready-for-review patch.
21:18:04 <russellb> timello: nice, sounds good
21:18:29 <yjiang5> timello: possibly we can have some changes to the resource tracker in the I release.
21:18:50 <timello> yjiang5: yes, plus that...
21:19:39 <russellb> alright, moving on for now then
21:19:42 <russellb> #topic subteam reports
21:19:46 <russellb> any subteams want to provide an update?
21:19:52 <harlowja> \0/
21:20:25 <hartsocks> o/
21:20:43 <russellb> hartsocks: go ahead
21:21:00 <hartsocks> folks are heads down the next two weeks… but...
21:21:04 <hartsocks> We're tagging bugs with vmware-co-preferred to mark bug fixes for distro people to pull or pay attention to.
21:21:17 <russellb> vmware-co-preferred?
21:21:22 <russellb> what does that mean?
21:21:27 <hartsocks> so...
21:21:28 <mrodden> i was curious about that as well...
21:21:35 <hartsocks> vmware-co <- vmware company
21:21:42 <hartsocks> preferred <- fine wine.
21:21:45 <russellb> ok, so, i don't like that.
21:21:56 <dansmith> yeah.
21:21:57 <russellb> launchpad doesn't prevent people from putting whatever tags you want, but, i don't like it
21:22:10 <russellb> we don't have redhat-preferred, ibm-preferred, rackspace-preferred, hp-preferred, etc etc
21:22:13 <jog0> russellb: there is precedent with the canonistack tag
21:22:18 <russellb> and i don't like that either
21:22:34 <hartsocks> Well… the idea is we wanted to find a way to mark publicly that these were patches that distro people should look to.
21:22:41 <jog0> russellb: fair enough, what do you propose instead?
21:22:43 <hartsocks> and… we got the idea from canonistack.
21:22:57 <russellb> that kind of thing is something for your company to figure out using your own system(s)
21:23:03 <russellb> for red hat, we have our own public tracker
21:23:12 <russellb> for canonical, they have other projects in launchpad
21:23:20 <russellb> but using the upstream bugs is not appropriate IMO
21:23:36 <hartsocks> I wanted to do this in a way that was public and upstream.
21:23:55 <russellb> public is good, but it belongs on a vmware tracker of some sort
21:23:56 <hartsocks> The idea is, I don't want to get into the business of maintaining these lists outside of the public arena.
21:24:07 <mrodden> would it be possible to create another launchpad project to track upstream project bugs...
21:25:00 <hartsocks> I'm open to suggestions.
21:25:14 <russellb> not using the upstream project bugs for noting your business priorities
21:25:16 <russellb> is my suggestion :)
21:25:26 <hartsocks> Hopefully we would only have to do this for a little while.
21:25:56 <russellb> i'm not sure how else to express it
21:25:57 <hartsocks> Well, the list comes from use-cases that don't flow through without the particular bug-fix.
21:26:13 <russellb> but i'm basically -2 on it
21:26:21 <clarkb> could you create a new LP org, add the bugs to that org then remove them from nova?
21:26:29 <clarkb> then you don't even need to copy pasta
21:26:40 <jog0> what to do about https://bugs.launchpad.net/nova/+bugs?field.tag=canonistack
21:26:42 <hartsocks> These are nova bugs.
21:27:17 <russellb> jog0: kill the tag preferably
21:27:23 <jog0> hartsocks: there may be a way to just use the launchpad API to make a list, a la rechecks
21:27:45 <jog0> (http://status.openstack.org/rechecks/)
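(For context, a rough sketch of the launchpadlib approach jog0 suggests; the consumer name, tag, and status filter below are illustrative, not an agreed convention:)

    from launchpadlib.launchpad import Launchpad

    # Anonymous read-only login; 'vmware-bug-list' is just a consumer name.
    lp = Launchpad.login_anonymously('vmware-bug-list', 'production')
    nova = lp.projects['nova']

    # Pull open bug tasks carrying the driver tag and print a simple list,
    # roughly what a rechecks-style status page would render.
    for task in nova.searchTasks(tags=['vmware'],
                                 status=['New', 'Confirmed', 'Triaged',
                                         'In Progress']):
        print('%s  %s' % (task.web_link, task.title))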
21:27:50 <russellb> https://bugs.launchpad.net/nova/+bugs?field.tag=vmware
21:27:58 <russellb> tagging something as affecting the driver is perfectly appropriate
21:28:11 <hartsocks> I'll take out the tags by next week's meeting. But we'll need to figure out what else to do.
21:28:12 <dansmith> hartsocks: presumably anything tagged as vmware-related and high priority == your special list anyway, right?
21:28:38 <jog0> russellb: this may be worth a ML post to notify canonistack folks as well
21:28:50 <russellb> sure, or just a more direct ping
21:28:54 <russellb> i wouldn't kill it without a heads up
21:28:58 <russellb> so they can make those notes elsewhere
21:29:02 <hartsocks> dansmith: yes, but… there are fixes we've identified as necessary to complete a "workflow"
21:30:07 <dansmith> hartsocks: so if I could offer some experience,
21:30:07 <dansmith> from an org that has done this for a very long time,
21:30:08 <hartsocks> dansmith: we don't want to have a separate repo or anything for that, just let folks like canonical or whomever know that's something for them to look at.
21:30:08 <mrodden> bugzilla
21:30:08 <clarkb> hartsocks: what about making the bug affect both orgs?
21:30:19 <clarkb> hartsocks: you can manage priority and stuff independently
21:30:43 <dansmith> this is why you build a tracker to manage your release and your relationship with your vendors, preferably with slick glue to keep it sync'd upstream
21:30:44 <russellb> #action get rid of all business-priority related tags in launchpad, to kill off this bad precedent
21:30:44 <russellb> #undo
21:30:45 <russellb> #action russellb to get rid of all business-priority related tags in launchpad, to kill off this bad precedent
21:30:45 <openstack> Removing item from minutes: <ircmeeting.items.Action object at 0x2644910>
21:30:45 <hartsocks> dansmith: okay, but I was looking for the simplest thing possible.
21:30:54 <russellb> hartsocks: but this is what every vendor has to do ...
21:31:02 <russellb> to keep your business stuff separate from the upstream stuff
21:31:04 <russellb> because they're not the same
21:31:05 <dansmith> hartsocks: I understand your intentions are good, but it's just not the right way
21:31:53 <hartsocks> So we'll have the vmware tags removed by next meeting. Just have to figure out how else to track this w/o hiding it.
21:32:09 <hartsocks> That's not even what I *really* wanted to talk about.
21:32:12 <mrodden> hartsocks: clarkb's suggestion is what i would do
21:32:38 <hartsocks> mrodden: I like it because it doesn't involve me figuring out how to build something new.
21:32:46 <mrodden> right
21:33:07 <hartsocks> Thanks. Glad I mentioned the topic in meeting.
21:33:21 <hartsocks> We are tracking these blueprints:
21:33:36 <hartsocks> Proposed, code posted, trying to get in good shape for core-reviewers
21:33:36 <hartsocks> #link https://blueprints.launchpad.net/nova/+spec/multiple-clusters-managed-by-one-service
21:33:36 <hartsocks> #link https://blueprints.launchpad.net/nova/+spec/vmware-nova-cinder-support
21:33:36 <hartsocks> #link https://blueprints.launchpad.net/cinder/+spec/vmware-vmdk-cinder-driver
21:33:36 <hartsocks> #link https://blueprints.launchpad.net/nova/+spec/improve-vmware-disk-usage
21:33:37 <hartsocks> #link https://blueprints.launchpad.net/nova/+spec/vmware-image-clone-strategy
21:33:43 <hartsocks> boom.
21:34:10 <hartsocks> That's what we hope to get into Havana in *priority* order. Note: priority for health of the driver.
21:34:33 <hartsocks> That's EOL for me then.
21:34:40 <russellb> cool, thanks
21:35:03 <russellb> harlowja: did you want to give an update?
21:35:13 <harlowja> def :)
21:35:27 <harlowja> so taskflow has gotten initial integration with cinder, yaaaaaa
21:35:43 <russellb> cool
21:35:46 <harlowja> otherwise mostly heads down, continuing persistence work there and hoping to get into nova soon :)
21:36:04 <harlowja> picking up steam i think, but not too much steam since everyone's busy in H3+
21:36:05 <russellb> so a design summit session giving an update on this library and proposed integration with nova may be a good discussion.
21:36:10 <harlowja> agreed
21:36:23 <russellb> anything else?
21:36:32 <harlowja> #link https://wiki.openstack.org/wiki/TaskFlow/HavanaSummitPresentationAbstract#Speakers
21:36:35 <harlowja> that might happen ;)
21:36:52 <harlowja> but a design summit session sounds great too
21:37:05 <russellb> ah ok.
21:37:05 <harlowja> that's it from me, thx russellb
21:37:09 <russellb> k, thx
21:37:12 <russellb> any other subteams?
21:37:14 <harlowja> np :)
21:37:17 <n0ano> scheduler
21:37:49 <russellb> n0ano: go for it
21:38:13 <n0ano> long discussion on Boris' suggested scalability changes to the scheduler, no consensus, definitely post-havana, we'll need some sessions at the next summit
21:38:26 <n0ano> that's about it.
21:38:38 <russellb> yep, sounds good to me
21:38:44 <russellb> thanks!
21:38:46 <russellb> #topic open discussion
21:39:00 <russellb> anything else from anyone?
21:39:02 <jog0> not sure where best to throw this out there but  https://bugs.launchpad.net/nova/+bug/1212418
21:39:03 <uvirtbot> Launchpad bug 1212418 in nova "SQLAlchemy performs poorly on large result sets" [Undecided,Confirmed]
21:39:16 <jog0> SQLA's ORM is a no go for scale
21:39:45 <jog0> 3 seconds for the raw DB query, 50 in sqla
21:40:06 <russellb> ouch
21:40:18 <melwitt> I have posted to ML about possible removal of the security_groups extension from v3 api, if anyone has any input.
21:40:19 <russellb> how many instances are we talking about for the 2 seconds vs 53 seconds thing?
21:40:20 <comstud> that particular case would be better if we didn't join
21:40:32 <jog0> so we need to sort that out at the summit
21:40:32 <comstud> or we change what we join
21:40:48 <comstud> but in general, the ORM mapping results does suck
21:41:19 <jog0> russellb: that was for 600k rows I think, which will happen at scale for several calls
21:41:41 <jog0> in general we see this overhead on small calls too, just the absolute times are not as bad
21:41:43 <russellb> so, sounds like we can make it somewhat better
21:41:52 <jog0> I hope so
21:41:54 <russellb> ... and then there's the plan for a native mysql driver for some critical bits
21:41:56 * russellb looks at comstud
21:42:02 <dansmith> hey!
21:42:07 <dansmith> don't distract him
21:42:09 <dansmith> he has objects work due
21:42:13 <jog0> russellb: but we can't just depend on that for everything .. postgres folks
21:42:22 <russellb> jog0: sure, agreed
21:42:38 <russellb> jog0: need to make sqla work as best we can
21:42:38 <jog0> anyway we need a good plan for this, just an FYI
21:42:41 <jog0> russellb: we can use it without the ORM for some benifits
21:42:42 <comstud> :)
21:42:48 <cyeoh> melwitt: it sounds like it can be removed, just need to preserve the instance create part
21:43:07 <russellb> security group management?  yeah makes sense to me
21:43:22 <yjiang5> jog0: comstud: should we do something in the H release for this performance issue? Or wait till the summit discussion?
21:43:39 <shanewang> jog0: will that go before havana? I mean without orm.
21:43:50 <jog0> yjiang5: no ORM wait I think
21:43:52 <cyeoh> russellb: this is associating/disassociating security groups with instances, originally we thought nova had to be in the loop, but it appears not
21:44:03 <jog0> but somethings can be done now
21:44:11 <jog0> like https://bugs.launchpad.net/nova/+bug/1212428
21:44:12 <uvirtbot> Launchpad bug 1212428 in nova "compute_node_get_all slow as molasses" [Undecided,Confirmed]
21:44:20 <russellb> cyeoh: ack, sounds good then
21:44:27 <shanewang> jog0: ack
21:44:46 <melwitt> cyeoh: why keep the instance create part? since that's also just a neutron port update?
21:45:17 <russellb> nova creates the port though, right?
21:45:50 <shanewang> comstud: is the orm issue a known issue in sqla?
21:45:51 <jog0> shanewang: also more analysis about solutions is needed for the summit
21:45:59 <jog0> shanewang: yes
21:46:10 <jog0> we chatted in the #sqlalchemy room for a bit about it
21:46:21 <cyeoh> russellb: I'm pretty sure you pass the port that nova should use on the instance creation
21:46:27 <comstud> there's some benefit in switching some things to use the core API vs the ORM API
21:46:36 <dansmith> cyeoh, russellb: it can go either way
21:46:39 <comstud> but ultimately in my testing, both suck compared to a simple native mysqldb implementation
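(A minimal sketch of the Core-vs-ORM contrast comstud describes, in modern SQLAlchemy syntax; the table and columns are illustrative, not Nova's actual schema:)

    import sqlalchemy as sa

    engine = sa.create_engine('sqlite://')  # stand-in for the real database
    metadata = sa.MetaData()
    instances = sa.Table('instances', metadata,
                         sa.Column('id', sa.Integer, primary_key=True),
                         sa.Column('uuid', sa.String(36)),
                         sa.Column('host', sa.String(255)))
    metadata.create_all(engine)

    # Core path: rows come back as plain tuples, skipping the per-row mapped
    # object construction and identity-map bookkeeping the ORM does, which is
    # where the large-result-set cost shows up.
    with engine.connect() as conn:
        rows = conn.execute(sa.select(instances)).fetchall()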
21:46:42 <shanewang> jog0: sad to hear that.
21:46:46 <dansmith> the port can be created ahead of time or by nova I think
21:46:58 * russellb sometimes laughs to himself that we use a language like Python and then get surprised with performance issues :-)
21:47:02 <melwitt> russellb: if a specific port isn't specified in the create request, one will be created by nova
21:47:13 <russellb> melwitt: cool
21:47:15 <comstud> russellb: IKR
21:47:21 <dansmith> wait
21:47:24 <dansmith> PYTHON IS SLOW?!
21:47:28 <russellb> IKR?
21:47:36 * dansmith heads to tweet the news
21:47:46 <jog0> :-)
21:47:57 <russellb> but if there's a clear usage issue we can fix to get a significant performance improvement in a case like this, we should do it
21:48:03 <russellb> sounds like that may be the case here
21:48:04 <dansmith> yeah
21:48:47 <alaski> I'd like to discuss https://bugs.launchpad.net/nova/+bug/1212798 if there's time
21:48:48 <uvirtbot> Launchpad bug 1212798 in nova "quota_usages not decremented properly after per user quota migration" [Undecided,New]
21:49:19 <llu-laptop> hello, looking forward to reviews of this bp https://blueprints.launchpad.net/nova/+spec/utilization-aware-scheduling, it's about a generic framework to have the compute manager report various metrics for scheduling purposes. Thanks.
21:49:20 <russellb> alaski: sure, can look at it now
21:49:22 <cyeoh> melwitt: so you're saying that if you want to set the security group on instance create then we expect the user to create their own port and set up the security group first?
21:49:35 <alaski> I'm pretty sure I'm reading the quota code correctly for my analysis of that issue
21:49:44 <alaski> But what I don't know is the best approach to fix it
21:50:30 <alaski> the quota_usages table could be incorrect after that migration
21:50:39 <russellb> didn't per user quotas break the world last release too?  :(
21:50:41 <melwitt> cyeoh: no, nova will create the port and associate it with the instance so it gets network. then to apply a security group, you query neutron to pull the port using device_id == instance uuid and then do an update port with the security group you want
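(A minimal sketch of the flow melwitt describes, assuming python-neutronclient; the credentials and IDs are placeholders:)

    from neutronclient.v2_0 import client as neutron_client

    neutron = neutron_client.Client(username='demo', password='secret',
                                    tenant_name='demo',
                                    auth_url='http://keystone:5000/v2.0')
    instance_uuid = 'INSTANCE-UUID'  # the nova instance's uuid (placeholder)
    secgroup_id = 'SECGROUP-UUID'    # the desired group's uuid (placeholder)

    # Find the instance's port via device_id, then update its security groups.
    port = neutron.list_ports(device_id=instance_uuid)['ports'][0]
    neutron.update_port(port['id'],
                        {'port': {'security_groups': [secgroup_id]}})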
21:50:50 <alaski> russellb: that's what I've heard :(
21:51:16 <alaski> I was thinking of a new migration to force a resync of quota_usages
21:51:24 <comstud> yes they did
21:51:24 <russellb> yeah, that was my first thought
21:51:35 <comstud> and this looks somewhat familiar
21:51:36 <comstud> this bug
21:51:45 <russellb> and if this can't get worked out, then i'm not against ripping it out (again)
21:52:01 <alaski> there's code in the sqlalchemy api to resync quotas, but I don't know if I can call that from a migration script yet
21:52:06 <cyeoh> melwitt: isn't there a problem with a race there where for a while the instance could have a security group you don't want?
21:52:14 <alaski> it may need to get rewritten to do the sync
21:53:08 <cyeoh> melwitt: or more precisely would not have a security group that you do want.
21:53:43 <russellb> alaski: well, sorry to see this is broken, sounds like you're on the right track for the fix IMO
21:54:02 <alaski> cool.  I'll try the migration approach
21:54:03 <russellb> but if you keep looking and feel like this is more fundamentally busted, please keep me updated
21:54:24 <alaski> kk.  data integrity is the only issue so far
21:54:44 <alaski> which should be fixable with a one time sync
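(Not the actual fix, just a sketch of the one-time resync alaski describes, in the sqlalchemy-migrate style nova migrations use; the resource and column names are illustrative:)

    from sqlalchemy import MetaData, Table, and_, func, select

    def upgrade(migrate_engine):
        meta = MetaData(bind=migrate_engine)
        instances = Table('instances', meta, autoload=True)
        quota_usages = Table('quota_usages', meta, autoload=True)

        # Recompute in_use for the 'instances' resource from live data,
        # per (project_id, user_id), and overwrite the stale counters.
        counts = migrate_engine.execute(
            select([instances.c.project_id, instances.c.user_id,
                    func.count(instances.c.id)])
            .where(instances.c.deleted == 0)
            .group_by(instances.c.project_id, instances.c.user_id))
        for project_id, user_id, count in counts:
            migrate_engine.execute(
                quota_usages.update()
                .where(and_(quota_usages.c.project_id == project_id,
                            quota_usages.c.user_id == user_id,
                            quota_usages.c.resource == 'instances'))
                .values(in_use=count))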
21:55:10 <russellb> and migrations are fun right now
21:55:19 <russellb> with 10+ patches competing for a migration number
21:55:34 <russellb> kinda messy.
21:55:36 <dansmith> -ETOOMANYMIGRATIONS
21:55:45 <russellb> yar
21:55:46 <alaski> yeah, mikal seems to have scripted his -1s for conflicts
21:55:57 <russellb> yeah, he's done some cool work with db CI
21:56:21 <dansmith> whatever, I'll be impressed when he can make a "wah wah waaaah" sound during git-review for those
21:56:24 <melwitt> cyeoh: I guess that's true, without the instance create part the instance would start in the default group, become available, and then afterward the desired group could be added.
21:56:36 <jog0> I won migration 208
21:56:48 <russellb> should we hand out migration trophies?
21:56:56 <yjiang5> PCI has pushed its migration from 197 to 209 now :)
21:57:05 <russellb> another code review system, reviewboard, gives out trophies for random things
21:57:12 <russellb> like, if your review # is a palindrome :-)
21:57:17 <melwitt> cyeoh: I wasn't thinking that would be a problem but I'm not sure of the use case
21:57:18 <russellb> it's silly, and awesome.
21:57:22 <bnemec> russellb: +1
21:57:23 <shanewang> jog0: sad to hear that, which means we need to rebase :)
21:57:28 <jog0> russellb: yes!
21:57:38 <shanewang> hi, I have one more question for bug https://bugs.launchpad.net/nova/+bug/1212428: besides removing the orm, is it decided to convert the stats table into an id and a stats JSON, or does it need more analysis?
21:57:39 <uvirtbot> Launchpad bug 1212428 in nova "compute_node_get_all slow as molasses" [Undecided,Confirmed]
21:58:32 <jog0> shanewang: the thinking now is to convert, but there was a similar attempt before that didn't work, so more thought may be needed
21:59:01 <shanewang> jog0: got you
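(A toy sketch of the conversion shanewang asks about: collapsing the per-key stat rows into one JSON blob per compute node, so compute_node_get_all returns one row per node instead of joining a wide stats table; the data is made up:)

    import json

    # Today: one row per (compute_node_id, key, value) in the stats table.
    stat_rows = [(1, 'num_instances', '12'), (1, 'num_vcpus_used', '48')]

    # Proposed: a single JSON document per compute node.
    stats = {}
    for node_id, key, value in stat_rows:
        stats.setdefault(node_id, {})[key] = value
    as_json = dict((node_id, json.dumps(kv)) for node_id, kv in stats.items())
    print(as_json)  # e.g. {1: '{"num_instances": "12", "num_vcpus_used": "48"}'}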
22:00:04 <russellb> alright, looks like we're out of time for the meeting
22:00:07 <russellb> thanks everyone!
22:00:08 <NobodyCam> Good meeting, Thank you
22:00:13 <shanewang> thank you
22:00:16 <jog0> thanks russellb
22:00:19 <russellb> don't stress too much over the incoming deadlines :-)
22:00:29 <russellb> #endmeeting