14:02:32 <jorgem> #startmeeting neutron-lbaas
14:02:33 <openstack> Meeting started Thu Sep  4 14:02:32 2014 UTC and is due to finish in 60 minutes.  The chair is jorgem. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:02:34 <rm_mobile> Lol
14:02:34 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:02:35 <jorgem> there we go
14:02:36 <openstack> The meeting name has been set to 'neutron_lbaas'
14:02:38 <sbalukoff> Yep!
14:02:46 <sballe> cool!
14:02:49 <dougwig> no hyphen x2
14:02:53 <jorgem> #chair dougwig
14:02:54 <openstack> Current chairs: dougwig jorgem
14:03:19 <sballe> What's the agenda? The agenda on the LBaaS wiki isn't updated
14:03:19 <jorgem> it's been a while since I've been on time!
14:03:26 <jorgem> I don't think much
14:03:28 <dougwig> i think our only listed topic is blogan's driver status thing.
14:03:40 <jorgem> I'm guessing incubator status
14:03:43 <sbalukoff> And, if Kyle or Mark are here, an update on incubator
14:03:47 <sballe> perfect
14:03:48 <rm_mobile> So this might be a quick one?
14:03:51 <sbalukoff> jinx
14:04:00 <jorgem> rm_mobile: most likely
14:04:05 <sbalukoff> If only it were later in the day. XD
14:04:05 <sballe> blogan, can you resend the link to your driver?
14:04:10 <rm_mobile> K
14:04:21 <sballe> rm_mobile, it never is ;-)
14:04:26 <blogan> hello all
14:04:28 <rm_mobile> lol
14:04:29 <dougwig> #topic entity status and drivers
14:04:49 <dougwig> blogan: take it away.
14:04:55 <jorgem> blogan just got here
14:05:12 <blogan> sorry just got here, talking about my email?
14:05:17 <dougwig> yes
14:05:23 <blogan> oh okay
14:06:00 <blogan> so I just wanted to gauge interest in whether people think that drivers being responsible for setting the status of the entities is an issue
14:06:16 <blogan> to me it is, because it leads to inconsistencies and really isn't something a driver should have to worry about
14:06:34 <sbalukoff> You said you had an idea on how to deal with asynchronous drivers
14:06:40 <sbalukoff> Did you want to share that?
14:07:01 <blogan> well it was an idea, but i haven't done much with it
14:07:25 <blogan> but basically it would make the neutron lbaas API always asynchronous, and there would be an async driver interface, and a sync driver interface
14:08:01 <sbalukoff> How would the asynchronous driver interface work differently than the one now?
14:08:05 <blogan> I haven't worked it out totally because I'm not sure if anyone thinks it is worth investigating
14:08:12 <sbalukoff> (That seems like the crux of the problem, to me.)
14:08:18 <xgerman> I am also wondering what happened to our plan of using exceptions?
14:08:28 <sbalukoff> xgerman: +1
14:08:35 <sballe> xgerman, +1
14:08:43 <blogan> sbalukoff: actually it wouldn't be an interface, it would be an abstract class that would handle the polling of the async driver methods to get the status
14:08:46 <a2hill> o/
14:08:54 <blogan> xgerman: what happened was having async drivers and sync drivers
14:09:07 <blogan> xgerman: since we have async drivers they can't throw exceptions
14:09:17 <blogan> xgerman: well they can it would just be uncaught
14:09:26 <sbalukoff> Hmmm...
14:09:29 <xgerman> if we are planning to poll the async driver will appear synchronous anyway
14:09:33 <jorgem> isn't that what the ERROR status is for?
14:09:45 <xgerman> unless you think we want to do eventing
14:10:05 <sbalukoff> blogan: I suspect the people who should weigh in on that would be the authors of the asynchronous driver interfaces-- whether they would prefer things to work that way, or as they do now.
14:10:06 <dougwig> xgerman: even with sync, there's a wrinkle in that plan w.r.t. fatal vs non-fatal exceptions.  since you can get an LB object that triggers creating a bunch of child objects, you could end up in a halfway state.
14:10:33 <blogan> sbalukoff: yes and their input is what I was hoping to get today too
14:10:53 <sbalukoff> I don't suppose any of them are present?
14:11:03 <xgerman> dougwig, throwing exceptions doesn't preclude a driver from cleaning up
14:11:04 <sbalukoff> (I'm only seeing Octavia crew here that are active.)
14:11:06 <dougwig> in really really short, if we want to support auto-magic, even with synchronous, we need three exit results from driver interfaces, not just "had an exception" or "did not have an exception".
14:11:25 <xgerman> well, we could have different exceptions
14:11:31 <blogan> dougwig: we can have custom exceptions that will tell the plugin what to do, even complex exceptions
14:11:37 <xgerman> +1
14:11:40 <blogan> that would require more investigation though
14:11:55 <sballe> blogan, +1
14:11:59 <xgerman> I think a good error/exception model would be worth it
14:12:07 <dougwig> well, with our current models, you'd actually need to be able to communicate multiple errors at once, possibly.
14:12:08 <sballe> +10000
14:12:11 <blogan> are the radware guys here today?
14:12:22 <xgerman> dougwig, you can chain exceptions
14:12:23 <sbalukoff> samuel?
14:12:27 <blogan> dougwig: custom exception can have many fields to communicate that
14:12:44 <sbalukoff> I don't see Samuel. Avishay?
14:12:50 <sballe> Based on my past experience, not putting enough time into getting a good error/exception model is an issue. It always comes back and bites you
14:12:54 <evgenyf> blogan:I'm here
14:13:02 <sbalukoff> Oh yay!
14:13:03 <blogan> evgenyf: do you have an opinion on this?
14:13:04 <dougwig> yep.  i'm just saying that it's not as simple an interface as it sounds, which i suspect is why the statuses are in there.
14:13:22 <evgenyf> Are we talking LBaaS v1 or v2?
14:13:27 <blogan> I think the reason the statuses are there is because of async drivers not going through an agent
14:13:38 <blogan> evgenyf: v2
14:14:16 <blogan> evgenyf: nothing should be added to v1
14:14:28 <evgenyf> I think it should remain as it is now, in v2 we have all these active/fail/defer functions in mixins
14:14:55 <blogan> evgenyf: true but the alternative is not having those at all
14:15:43 <evgenyf> blogan, can you elaborate on your idea with async plugin API please?
14:16:33 <evgenyf> sbalukoff: Sam and Avishay are not here
14:16:40 <blogan> evgenyf: basically the driver does not set the status of entities at all, it will just throw exceptions
14:17:06 <sbalukoff> Netsplit
14:17:08 <blogan> evgenyf: then the plugin sets those based on the exception thrown
14:17:09 <blogan> ah net split!
14:17:09 <sbalukoff> Awesome.
14:17:18 <jorgem> woah mass exodus
14:18:04 <blogan> evgenyf: how this is done to support both async and sync drivers I have an idea about but need more time to hash it out
14:18:10 <dougwig> we can prototype it, but given the multi-error issue, we're just going to replace one set of glue with another set that has to wrangle and chain the exceptions or set up the error fields before re-raising.  the bar here should be whether it results in simpler driver code.
14:18:29 <evgenyf> blogan: Radware's driver has separate thread dealing with operations success/fail
14:19:29 <blogan> dougwig: doesn't it make sense to have a separation of concerns though?  the driver only has to tell the plugin what entities are affected and then the plugin decides for all drivers what to do?
14:19:33 <blogan> that seems simpler to me
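[Editor's note: the exception-based separation of concerns that blogan, xgerman, and dougwig are debating could be sketched like this. All names are hypothetical; it assumes, per the discussion, that a custom exception carries the affected entities (and dougwig's fatal/non-fatal distinction) and that the plugin alone maps exceptions onto statuses.]

```python
# Sketch: the driver raises a rich exception instead of setting statuses;
# the plugin applies one status policy uniformly across all drivers.

class DriverError(Exception):
    """Raised by a driver; carries the entities the failure affects."""
    def __init__(self, message, affected_entities=None, fatal=True):
        super().__init__(message)
        self.affected_entities = affected_entities or []
        self.fatal = fatal

def plugin_set_statuses(exc):
    """Plugin-side policy: one place decides statuses for every driver."""
    status = "ERROR" if exc.fatal else "PENDING_UPDATE"
    return {entity: status for entity in exc.affected_entities}

def fake_driver_create_pool():
    # A create on the LB tree failed partway; the driver reports which
    # entities were touched rather than writing statuses itself.
    raise DriverError("backend rejected pool",
                      affected_entities=["pool-1", "lb-1"])

try:
    fake_driver_create_pool()
    statuses = {}
except DriverError as exc:
    statuses = plugin_set_statuses(exc)
```

This also illustrates dougwig's multi-error concern: the exception must be able to name several entities at once, which is where the glue code moves rather than disappears.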
14:19:52 <blogan> evgenyf: correct and a solution to this problem would handle that
14:20:22 <sbalukoff> blogan: Are you talking about entities shared between drivers?
14:20:31 <sbalukoff> Does v2 in its current form allow that?
14:20:34 <blogan> sbalukoff: no not at all
14:20:39 <dougwig> if the separation results in one line per method becoming 3, then we're just re-arranging deck chairs.
14:21:00 <sbalukoff> Heh.
14:21:10 <blogan> we'd be putting deck chairs in the proper location
14:21:32 <dougwig> or even 1.  if it's truly a useless thing for drivers to be doing, we should be able to get rid of it and reduce driver code.  if that doesn't happen, we're not really insulating anything.
14:21:32 <TrevorV> like on a deck
14:21:56 <xgerman> dougwig +1
14:22:02 <dougwig> i'm not sure that drivers updating status is "improper".
14:22:15 <blogan> maybe its a philosophical difference
14:22:22 <blogan> which is why i wanted people to give their opinions
14:22:24 <xgerman> it's not a good separation of concerns
14:22:26 <blogan> with a bit of a debate
14:22:42 <blogan> xgerman: i'm confused are you for or against it?
14:22:53 <xgerman> I am for exceptions
14:23:02 <sbalukoff> blogan: Is the concern that the current system leads to more inflexible spaghetti code?
14:23:35 <blogan> sbalukoff: i wouldn't say it's spaghetti code, the driver interface dougwig created does improve the status management in the driver
14:23:44 <blogan> over v1
14:23:47 <xgerman> well, if everybody updates statuses then it's hard to make changes/updates without everybody changing code
14:24:01 <sballe> xgerman, +1
14:24:09 <sbalukoff> xgerman: That was what I was getting at.
14:24:30 <blogan> the main problems I have with it are that it will lead to inconsistent statuses across drivers, and to me it doesn't seem like the drivers' responsibility
14:24:35 <xgerman> good, so we are on the same page :-)
14:24:47 <blogan> and what xgerman said
14:25:04 <dougwig> xgerman: err, it's even harder if you pull them out, because now the driver can't make the decision of how to do transactions/locking, and you can have drivers running in parallel.  you can't wrap those operations at the plugin level.
14:25:39 <dougwig> (the complex ones, not the status ones)
14:25:58 <blogan> to me it seems odd the driver has to worry about transactions/locking when it should be the db layer that does
14:26:20 <dougwig> it's part and parcel of having two sources of truth.
14:26:21 <blogan> unless you're talking about not the lbaas db
14:27:06 <blogan> two sources of truth being the lbaas db and the driver's own db?
14:27:33 <blogan> or whatever the vendor's storage mechanism is
14:27:39 <dougwig> correct.  we don't have things setup for atomic replacements of entire trees.
14:28:20 <dougwig> which means, hello hard cs problem #2.
14:28:28 <blogan> naming things?
14:28:30 <blogan> lol
14:28:35 <sbalukoff> Haha
14:28:47 <dougwig> i knew you would reply with that.  :)
14:29:13 <blogan> can you give a specific example dougwig? I'm not totally following.
14:30:41 <dougwig> can we talk later in channel?  it's way early, and this is going to take awhile.
14:30:46 <blogan> lol sure
14:30:55 <blogan> it's not meant to be solved today at all
14:31:11 <sballe> dougwig, please ping me if I am available. Would love to be part of that chat
14:31:19 <dougwig> sballe: ok
14:31:39 <blogan> okay so that will be sidelined until later today
14:31:47 <sbalukoff> Sounds good.
14:32:03 <blogan> anyone have anything else?
14:32:11 <sballe> Is mestery on the IRC?
14:32:13 <sbalukoff> Are markmcclain or mestery here?
14:32:16 <sballe> lol
14:32:24 <sbalukoff> :)
14:32:26 <mestery> sballe sbalukoff: o/
14:32:31 <evgenyf> Dougwig: could you please summarize your conversation results on the ML afterwards?
14:32:32 <dougwig> #topic incubator update
14:32:37 <dougwig> evgenyf: yes
14:32:40 <blogan> mestery: the obligatory ask for an update on the incubator
14:32:45 <mestery> blogan: Absolutely sir!
14:32:45 <sbalukoff> :D
14:32:51 <dougwig> i can take this one.
14:32:54 <dougwig> it should be done this week.
14:32:57 <dougwig> right?  :)
14:32:58 <blogan> lol
14:33:02 <mestery> So, the update is that now that we're past Juno FF, we will get infra to set the repository up by tomorrow.
14:33:10 <mestery> We were holding off given their focus on holding the gate together this week
14:33:22 <mestery> markmcclain has worked with the TC and infra on the plan and they are both on board.
14:33:31 <mestery> Any questions?
14:33:36 <mestery> :)
14:33:45 <blogan> mestery: by tomorrow do you mean two weeks from now? or actually tomorrow?
14:33:50 <sbalukoff> Heh!
14:33:51 <mestery> blogan: :P
14:33:51 <jorgem> Is there a document with info somewhere?
14:33:51 <sballe> I would like to understand a little more about the governance of the incubator project
14:34:02 <mestery> Tomorrow is the plan, markmcclain should have the review out today for repo creation
14:34:06 <jorgem> sballe: me too
14:34:12 <sbalukoff> mestery: Any changes in how incubator is going to be run, or is the wiki still the source of truth on this?
14:34:15 <sballe> Based on the ML discussion we had there is a lot of confusion
14:34:21 <mestery> I think the governance should be documented on the wiki, let me find the link.
14:34:32 <mestery> sbalukoff: The wiki is still the source of truth at this point
14:34:33 <blogan> mestery: can you make sure it gets put on the ML so we can all look at it?
14:35:03 <mestery> blogan: Ack, will do
14:35:15 <blogan> mestery: thanks a bunch
14:35:15 <markmcclain> the wiki has pending revisions that I have not posted to clarify feature-branch vs incubator criteria
14:35:36 <dougwig> #link https://wiki.openstack.org/wiki/Network/Incubator
14:35:39 <sballe> mestery, Do we have any timeline for the first project to enter the Neutron incubator project?
14:35:40 <xgerman> ok, can you send an e-mail once we are supposed to look?
14:36:15 <mestery> sballe: Once it's up, you can post blogan's patch series there right away.
14:36:41 <markmcclain> sballe, mestery: I'll ping blogan when it is ready and help get the patches in
14:36:50 <mestery> markmcclain: Awesome sir!
14:36:52 <xgerman> awesome, thanks!
14:37:27 <dougwig> is devstack getting modified at the same time?
14:37:56 <markmcclain> dougwig: not sure… I have to work w/ the QA team
14:38:04 <sballe> mestery, I am looking forward to seeing this whole Neutron Incubator project working and hopefully it will work well. We are kind of counting on this to be able to move forward
14:38:21 <markmcclain> the gate is still seriously backed up
14:38:22 <sballe> We == HP
14:38:36 <dougwig> the gate needs some fiber, for sure.
14:38:37 <mestery> sballe: ++, I agree, I expect this to really be super helpful as well!
14:38:54 * mestery feeds the gate some whole grain
14:39:06 <markmcclain> haha
14:39:07 <blogan> sballe: i think everyone wants to see it succeed
14:39:32 <sbalukoff> Yep
14:39:37 <sballe> mestery, you should give it some Red Bull instead
14:39:52 <xgerman> I think we just bring in Chuck Norris as a "gate opener"
14:40:09 <mestery> sballe: hahhahahahaha
14:40:10 <dougwig> it's only 72 deep today.
14:40:15 <dougwig> that's light and breezy
14:40:46 <dougwig> any other incubator questions or updates?
14:41:03 <sballe> I would like to have this topic on the agenda for next week again.
14:41:13 <rm_work> sballe: +1
14:41:13 <dougwig> ok
14:41:16 <sballe> We need a weekly status on how this is working for us
14:41:25 <sbalukoff> sballe: +1
14:41:32 <xgerman> sballe +1
14:41:35 <dougwig> mestery, markmcclain - thanks for the update
14:41:37 <mestery> sballe: Lets use the neutron meeting for that
14:41:45 <sballe> ok when is that?
14:41:47 <mestery> I'd like the broader team to hear the update on the incubator as well
14:41:58 <mestery> #link https://wiki.openstack.org/wiki/Meetings#Neutron_team_meeting
14:42:02 <rm_work> dougwig: I have a topic when we wrap up the incubator topic and whatever else was on the official agenda
14:42:30 <dougwig> #topic rm_work's topic
14:42:35 <rm_work> heh
14:43:02 <a2hill> wonderful topic
14:43:06 <sballe> mestery, when is the next Neutron meeting, Monday or Tuesday? Mondays at 2100 UTC and Tuesdays at 1400 UTC
14:43:08 <rm_work> So, for TLS, we are "registering" with Barbican to get the user's certificate data
14:43:20 <mestery> Tuesday
14:43:22 <sballe> thx
14:43:30 <rm_work> to do this, we need to auth with barbican using our own keystone user that is an "admin"
14:43:31 <mestery> sballe: I sent email on this, and if you go here (https://wiki.openstack.org/wiki/Network/Meetings) you can see it
14:43:52 <rm_work> I don't know if there is precedent for Neutron having its own "service user". Is there? how would this be handled?
14:44:15 <xgerman> that should potentially be the same as us needing a nova user for Octavia
14:44:17 <a2hill> or maybe itll be an 'operator' admin user?
14:44:45 <rm_work> xgerman: yeah, i think very similar
14:45:07 <xgerman> I know security looks down on passwords stored in config files
14:45:19 <dougwig> i'm thinking that there must be a standard "openstack" way of handling this.  we can't be the first to need a backdoor account into another project.
14:45:26 <sballe> xgerman, you can have an Octavia tenant with admin privs for the various services, or the advsvc role in the case of Neutron
14:45:31 <rm_work> Yeah... not sure how else to handle it -- we can't exactly store the password in Barbican <_<
14:45:58 <sballe> we do this all the time wirh our platform services
14:46:24 <rm_work> right, so we just need a tenant with a couple specific keystone roles, and marked as "admin" (whatever that means exactly to Keystone, i'm still not 100% clear)
14:46:44 <xgerman> well, name, pwd, etc. should all be configurable
14:46:45 <sballe> rm_work, yeah and we might not even need admin privs.
14:46:48 <rm_work> So, I think this needs to be owned by Neutron, not Neutron-Lbaas, since the code that uses it will be in /common/
14:46:54 <sballe> it will depend on what we need to do
14:47:36 <rm_work> well, I am currently very concerned with security on this, and I am going to need to discuss with Barbican about how we'll do this
14:47:45 <blogan> rm_work really just needs to know if neutron already has one set up for this or if one will have to be created
14:47:56 <dougwig> maybe an ML query on how other projects have done this?
14:48:00 <sballe> blogan, can you elaborate?
14:48:03 <blogan> markmcclain: does neutron use service accounts to communicate with other openstack projects?
14:48:05 <sbalukoff> rm_work: Could ask on the ML to see if anyone there knows the "OpenStack" way of doing this.
14:48:10 <xgerman> mestery?
14:48:15 <sbalukoff> dougwig: Jinx
14:48:51 <sballe> blogan, I am not sure what you are asking about.
14:49:11 <dougwig> blogan: it does, for nova.
14:49:25 <dougwig> i think you put endpoint/user/pass into neutron.conf
14:49:30 <rm_work> ok
14:49:31 <xgerman> ok, then use that one?
14:49:34 <blogan> sballe: if neutron already has a service account set up that can talk to barbican, really
14:49:34 <rm_work> so we could hook into that
14:49:41 <a2hill> Yea, we do store other 'secure' info in the configs
14:49:43 <sballe> we need to check this
14:49:45 <blogan> xgerman: maybe but not sure if it also can talk to barbican
14:49:48 <a2hill> not sure why this would be much different
14:49:54 <blogan> will require some testing for sure
14:49:54 <dougwig> https://www.irccloud.com/pastebin/bvLCzij1
14:49:56 <markmcclain> blogan: yes, a service account is the recommended way to deploy
14:50:02 <rm_work> I mean, if it's a keystone tenant, then it's just a role issue
14:50:25 <sballe> rm_work, That's what I am getting at... It all depend on wht we want to do
14:50:25 <blogan> markmcclain: so if we need to retrieve keys from barbican, is there already a service account we can use?
14:50:43 <blogan> markmcclain: or is that something a deployer does themselves and puts in the config?
14:51:13 <markmcclain> yes.. there should be credentials that we use for the Nova callback
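[Editor's note: the deployer-supplied-credentials pattern being referenced (endpoint/user/pass in neutron.conf, per dougwig above) follows the shape sketched below. The `[service_auth]` section and option names here are illustrative, not the real neutron.conf schema.]

```python
# Sketch: a deployer puts service-account credentials in the config file,
# and the plugin reads them to authenticate against another service
# (Barbican, Nova, ...). Section/option names are hypothetical.
import configparser

SAMPLE_CONF = """
[service_auth]
auth_url = http://keystone.example.com:5000/v3
admin_user = neutron
admin_password = s3cret
admin_tenant_name = service
"""

cfg = configparser.ConfigParser()
cfg.read_string(SAMPLE_CONF)

creds = {
    "auth_url": cfg.get("service_auth", "auth_url"),
    "username": cfg.get("service_auth", "admin_user"),
    "password": cfg.get("service_auth", "admin_password"),
    "tenant": cfg.get("service_auth", "admin_tenant_name"),
}
```

As xgerman notes just below, storing a password in a config file is exactly what security teams dislike about this pattern, which is what motivates the trust discussion that follows.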
14:51:20 <rm_work> though I am still thinking maybe we want to hijack the user's token to set up the original trust -- just because maybe allowing the service-account access to literally anyone's data in Barbican is a little scary
14:51:48 <rm_work> I may be getting into the weeds here, but I would like people to have some idea what's going to be happening in the background with regard to this
14:52:03 <rm_work> so if there are concerns with the security of the whole thing, they can be voiced
14:52:24 <sbalukoff> It sounds like storing credentials in a config file is no worse than current practice.
14:52:24 <rm_work> and people with more security experience than myself can chime in :)
14:52:24 <sballe> rm_work, there is something in keystone called Trusts, maybe that would be useful. It allows a service to do something as a user
14:52:25 <markmcclain> rm_work: I agree with wanting to limit trust
14:52:35 <markmcclain> so you could reuse the current context
14:52:48 <rm_work> sballe: right, the plan is that we set up a trust between our service account to the user
14:52:59 <rm_work> but allowing the service-account to actually set up a trust with any user is a bit risky
14:53:13 <markmcclain> but we'll need to be careful that something else in the callstack has not elevated the privileges
14:53:17 <rm_work> rather, maybe would be good to as markmcclain is saying, re-use the user's original context to initiate the trust the first time
14:53:35 <rm_work> then rely on the trust from then on
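[Editor's note: the flow rm_work and markmcclain converge on here can be modeled as below. This is plain Python standing in for Keystone trusts and Barbican secrets; nothing in it is a real API call, and all names are hypothetical.]

```python
# Model of the flow: (1) the user's own request context is used exactly once
# to create a trust delegating access to the service account; (2) later
# operations (e.g. re-fetching a cert after the user context is gone, per
# dougwig's question below) authenticate via the trust, so no user
# credentials are ever stored and the service account cannot reach
# arbitrary users' data.

class TrustRegistry:
    def __init__(self):
        self._trusts = {}  # trust_id -> (trustor user, trustee service)

    def create_trust(self, user_context, trustee):
        # Only the trustor's own context may delegate their access.
        trust_id = "trust-%s-%s" % (user_context["user_id"], trustee)
        self._trusts[trust_id] = (user_context["user_id"], trustee)
        return trust_id

    def fetch_secret(self, trust_id, trustee, secret_store):
        trustor, allowed = self._trusts.get(trust_id, (None, None))
        if allowed != trustee:
            raise PermissionError("no trust for this service account")
        return secret_store[trustor]

registry = TrustRegistry()
barbican_like_store = {"alice": "-----BEGIN CERTIFICATE-----..."}

# At LB-create time: reuse the user's request context to set up the trust.
trust_id = registry.create_trust({"user_id": "alice"}, "lbaas-svc")

# Later, user context long gone: the service re-fetches via the trust.
cert = registry.fetch_secret(trust_id, "lbaas-svc", barbican_like_store)
```

The design point is the one made above: the trust is scoped to one trustor/trustee pair, so compromising the service account does not grant access to every tenant's secrets.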
14:53:41 <blogan> doesn't this have implications for octavia too?
14:53:43 <sballe> agreed
14:53:46 <rm_work> blogan: yes
14:53:50 <sbalukoff> blogan: Yes.
14:53:54 <sbalukoff> Same problem, really.
14:53:56 <dougwig> what about later, when you need to do maintenance on the LB, and need to re-fetch the cert.  user context will be gone.
14:54:07 <rm_work> dougwig: but at that point the Trust will be set
14:54:15 <rm_work> so the service account has access to the user's Barbican data
14:54:22 <dougwig> ahh, i see.  do trusts exist in keystone yet?
14:54:54 <dougwig> (or are they at the barbican level?)
14:54:59 <sbalukoff> Well, again, there is also the problem of the ticking-time-bomb if the user nukes his barbican data and Octavia / LBaaS needs to access it afterward.
14:55:02 <rm_work> dougwig: yes
14:55:07 <rm_work> dougwig: they are in keystone now
14:55:19 <rm_work> sbalukoff: at some point we can't protect them :/
14:55:20 <sballe> https://wiki.openstack.org/wiki/Keystone/Trusts
14:55:20 <xgerman> sbalukoff, let's not open that can of worms
14:55:27 <rm_work> sbalukoff: if they go in and remove our trust... tough :/
14:55:34 <dougwig> #link https://wiki.openstack.org/wiki/Keystone/Trusts
14:55:49 <sbalukoff> We already discussed this, and I think the solution was to have Octavia store its own copy of the secrets.
14:55:50 <sbalukoff> :/
14:55:59 <sbalukoff> That's where I recall the discussion went last time.
14:56:03 <dougwig> not octavia, just haproxy.
14:56:14 <dougwig> but heck, i don't remember the final resolution.
14:56:14 <rm_work> sbalukoff: right, for Octavia that's the case
14:56:19 <rm_work> not for Neutron-Lbaas
14:56:23 <sbalukoff> dougwig: Right. Because vendors keep their own copy of the secrets anyway.
14:56:34 <rm_work> *Octavia* would copy the keys into its own Barbican store
14:56:35 <sbalukoff> Yep.
14:56:41 <sballe> +1
14:56:47 <rm_work> but for Neutron-Lbaas, we're relying on the original
14:56:55 <sbalukoff> rm_work: True.
14:56:57 <rm_work> thus Consumer Registration
14:57:23 <rm_work> anyway, I'll work up an email with a better overview and maybe some pictures / diagrams :P
14:57:28 <rm_work> since we are just about out of time
14:57:32 <sbalukoff> Fancy!
14:57:34 <xgerman> rm_work +1
14:57:47 <sballe> +1
14:58:02 <dougwig> #action rm_work send writeup on barbican/trust stuff to ML
14:58:07 <dougwig> #topic Open discussion
14:58:29 <rm_work> open discussion for 2 minutes GO
14:58:39 <sbalukoff> Ok, going back to bed for a half hour. See you at 10:30, sballe!
14:58:45 <rm_work> hah sbalukoff
14:58:50 <blogan> enjoy
14:58:51 <sballe> sbalukoff, Perfect.
14:58:53 <rm_work> that's a great plan, too bad i have sprint planning <_<
14:58:53 <dougwig> rm_work: in your writeup, remember that we need to call out to barbican for every SSL negotiation.  people loved that.
14:58:57 <jorgem> see you guys
14:59:07 <sballe> bye
14:59:09 <jorgem> got a meeting to attend
14:59:13 <xgerman> bye
14:59:14 <dougwig> jk
14:59:15 <dougwig> later
14:59:20 <rm_work> dougwig: hahahaha no.
14:59:21 <rm_work> T_T
14:59:31 <rm_work> I remember that though >_<
14:59:38 <rm_work> \o
14:59:42 <dougwig> #endmeeting