22:00:21 <armax> #startmeeting neutron_drivers
22:00:22 <openstack> Meeting started Thu Jul 21 22:00:21 2016 UTC and is due to finish in 60 minutes.  The chair is armax. Information about MeetBot at http://wiki.debian.org/MeetBot.
22:00:23 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
22:00:25 <openstack> The meeting name has been set to 'neutron_drivers'
22:00:35 <kevinbenton> hi
22:00:44 <ihrachys> o/
22:00:59 <armax> hello everyone, thanks for joining
22:01:20 <ajo> :)
22:01:25 <armax> I don’t have a special reminder for drivers folks so let’s dive in
22:01:27 <johnsom> o/
22:01:30 <armax> #link https://bugs.launchpad.net/neutron/+bugs?field.status%3Alist=Triaged&field.tag=rfe&orderby=datecreated&start=0
22:01:55 <armax> bug #1575146
22:01:55 <openstack> bug 1575146 in neutron "[RFE] ovs port status should the same as physnet." [Wishlist,Triaged] https://launchpad.net/bugs/1575146
22:02:02 <armax> anyone had a chance to navigate it?
22:02:35 <njohnston> o/
22:02:38 <dougwig> is this going to affect the scalability of the agents?
22:03:08 <kevinbenton> i don't think scalability should be a big issue
22:03:15 <kevinbenton> since it's local to agents
22:03:23 <kevinbenton> and it's just watching the status of one interface
22:03:23 <armax> it depends how it’s implemented  :)
22:03:31 <ajo> is it talking about setting ports as down when a network is configured as down by the admin?
22:03:34 <ajo> or about monitoring the network?
22:03:36 <dougwig> because we're just scanning physicals?  hmm.  i'd hate to make a claim that vxlan was UP, when it has way more moving parts.
22:03:43 <kevinbenton> monitoring status of physicals
22:03:52 <kevinbenton> ajo: not logical model update from what i can see
22:04:02 <armax> this is about monitoring link status on the host
22:04:09 <dougwig> but better than the random DOWN nonsense we have today
22:04:35 <kevinbenton> dougwig: well that field is about what is wired
22:04:40 <ajo> I guess it's doable, tunnel networks being the complicated part
22:04:49 <armax> and if the affected interface ends up being used by the agent, reflect the state on the affected logical ports
22:04:53 <kevinbenton> dougwig: config state vs dataplane
22:05:16 <kevinbenton> armax: well i don't think this is even updating the logical port on the server
22:05:18 <ihrachys> is current neutron api enough to build a monitoring tool? we expose phys net relationship per network, so that should be enough for an external tool to indicate which ports are affected by a failure on infra.
22:05:26 <kevinbenton> armax: it's just setting the tap device to down
22:05:28 <kevinbenton> isn't it?
22:05:39 <kevinbenton> ip link set tap238947ac down
22:05:55 <kevinbenton> no interaction with neutron server from what i can tell
22:06:02 <armax> kevinbenton: and what then?
22:06:13 <ihrachys> kevinbenton: when you do it from hypervisor, how does it affect guest?
22:06:14 <kevinbenton> armax: the VM sees its interface state change
22:06:34 <ajo> hm, that'd be nice,
22:06:34 <kevinbenton> armax: so the VM can have a failover internally to another interface
22:06:49 <kevinbenton> I haven't tested this, but it's what I understand the request is for
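(A minimal host-local sketch of the behaviour kevinbenton describes: watch the carrier state of a physnet interface and mirror it onto the tap devices wired to it. The interface and tap names are illustrative, not taken from the RFE, and a real agent would derive them from its own bindings.)

    import subprocess
    import time

    PHYSICAL_IFACE = 'eth1'          # assumed physnet uplink, illustrative
    TAP_DEVICES = ['tap238947ac']    # taps assumed to be wired to that physnet

    def carrier_up(iface):
        # /sys/class/net/<iface>/carrier reads '1' when the link is up
        try:
            with open('/sys/class/net/%s/carrier' % iface) as f:
                return f.read().strip() == '1'
        except (IOError, OSError):
            # reading carrier fails if the interface is administratively down
            return False

    def set_link_state(dev, state):
        # equivalent to: ip link set <dev> up|down
        subprocess.check_call(['ip', 'link', 'set', dev, state])

    while True:
        state = 'up' if carrier_up(PHYSICAL_IFACE) else 'down'
        for tap in TAP_DEVICES:
            set_link_state(tap, state)
        time.sleep(2)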
22:07:00 <armax> kevinbenton: right, but wouldn’t we want to reflect that all the way to the neutron logical port?
22:07:03 <ajo> but shouldn't we, in that case, report back to neutron-server to set the ports as down ?
22:07:17 <ajo> exactly what armax said :P
22:08:00 <kevinbenton> that's possible
22:08:08 <kevinbenton> would make sense for API visibility
22:08:16 <kevinbenton> but I don't think it's the core component of the RFE
22:08:18 <armax> kevinbenton: otherwise we’d still have physical != logical
22:08:26 <kevinbenton> armax: right
22:08:27 <armax> as far as state goes
22:08:41 <armax> if we limit this to a host local thing
22:09:10 <armax> then I could see it being a neat enhancement
22:09:18 <armax> the trick is in how to reliably detect the link failure
22:09:28 <ajo> do we have up/down indication on the port model? I guess we're not talking about admin-state-up
22:09:35 <ihrachys> btw, let's say you have physnet broken; but you still have connectivity between instances on the same node; is it fair to shut down the port completely?
22:10:07 <kevinbenton> ihrachys: that's a good concern
22:10:11 <dougwig> it'll be funny if the taps reflect the physical, but ovs always has the bridges marked down (which started happening in liberty for some reason)
22:10:30 <kevinbenton> another issue is that the physnet bridge may have multiple physical interfaces plugged into it
22:10:39 <kevinbenton> what do we do if one fails?
22:10:43 <kevinbenton> but the other is active
22:10:55 <ajo> good concerns
22:11:03 <amotoki> in the physical server case, we don't get a port down when an uplink of a switch is down.
22:11:13 <armax> kevinbenton: right, this may become too deployment dependent
22:11:18 <kevinbenton> amotoki: ++
22:11:24 <kevinbenton> this wouldn't detect other topology failures
22:11:24 <ajo> amotoki++
22:11:46 <ihrachys> I honestly think it's a job for some external tool that would be able to 1) monitor physnet state; 2) talk to guests to do orchestration
22:12:05 <kevinbenton> yeah, i'm starting to agree
22:12:08 <armax> ihrachys: actually, now that you brought this up
22:12:16 <kevinbenton> because there are too many different things that an operator might want to watch for
22:12:21 <armax> there’s also this other bug which is relevant to this discussion
22:12:26 <ajo> ihrachys, or related to the debugging scenarios hynek was proposing
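(A rough sketch of the external-tool idea ihrachys raises: the existing API already lets an admin map a failed physnet to the logical ports it backs. This assumes a recent openstacksdk; the cloud name and physnet name are placeholders.)

    import openstack

    conn = openstack.connect(cloud='mycloud')
    failed_physnet = 'physnet1'

    # provider attributes are admin-only; match networks on the broken
    # physnet, then collect the ports attached to those networks
    affected_ports = []
    for net in conn.network.networks():
        if net.provider_physical_network == failed_physnet:
            affected_ports.extend(conn.network.ports(network_id=net.id))

    for port in affected_ports:
        print(port.id, port.status)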
22:12:31 <cgoncalves> armax: :-)
22:12:35 <armax> bug #1598081
22:12:35 <openstack> bug 1598081 in neutron "[RFE] Port status update" [Wishlist,Triaged] https://launchpad.net/bugs/1598081
22:13:03 <ajo> ahaa
22:13:12 <armax> if we assume that out of band tools can indeed cooperate with Neutron to manage a port state
22:13:33 <armax> perhaps we do need to relax the existing API and allow third parties to set the status of a logical port
22:14:06 <armax> either that or, as one alternative being proposed, introduce a new state for this specific need
22:14:10 <kevinbenton> yeah, i'm more inclined to approve this one
22:14:19 <armax> 1598081?
22:14:20 <kevinbenton> because it would allow tooling to do this
22:14:22 <kevinbenton> yeah
22:14:25 <ihrachys> +
22:14:31 <armax> kevinbenton: you mean flipping the allowed_put on port status?
22:14:32 <ajo> maybe even extend the port status beyond a single constant (providing more details of the status issue)
22:14:47 <kevinbenton> armax: yeah, or some API mechanism
22:14:56 <kevinbenton> maybe a new field
22:15:01 <kevinbenton> for dataplane status
22:15:10 <ihrachys> well, I thought it wouldn't be REST exposed; only from inside ml2 drivers?
22:15:20 <cgoncalves> kevinbenton: like force-down as nova now has for hosts?
22:15:36 <kevinbenton> there is a lot of logic tied to that status field now, i'm not sure allowing arbitrary changes of STATUS will play well with ML2
22:15:42 <armax> kevinbenton: a new field might be better in case some plugins would not tolerate the change in allow_put to True?
22:15:50 <kevinbenton> armax: yes
22:16:05 <kevinbenton> cgoncalves: what does force-down do?
22:16:24 <ajo> dataplane-status ?
22:16:33 <kevinbenton> ajo: yeah, i'm thinking something like that
22:16:42 <cgoncalves> kevinbenton: overwrites 'status'
22:17:05 <kevinbenton> cgoncalves: ah, yeah i'm not sure forcing status changes will work well with ML2
22:17:07 <dougwig> i'd think the ml2 driver or core plugin should get to decide whether a 3rd party gets to muck with your port state.
22:17:21 <kevinbenton> cgoncalves: it's likely to come along and undo the status on an agent sync
22:17:30 <armax> ok let’s report back on 1575146 based on this discussion and see if that would solve their need
22:17:41 <armax> in the meantime we can figure out 1598081 on a spec
22:17:45 <cgoncalves> kevinbenton: I mean, not the 'status' db value but REST API
22:18:17 <armax> cgoncalves: it’s probably safer to start putting something into a spec format
22:18:20 <kevinbenton> cgoncalves: even that is tied to nova notifications
22:18:33 <kevinbenton> cgoncalves: would a new field not work for your use case?
22:18:42 <armax> any other opinion on bug 1598081?
22:18:42 <openstack> bug 1598081 in neutron "[RFE] Port status update" [Wishlist,Triaged] https://launchpad.net/bugs/1598081
22:19:08 <kevinbenton> armax: yes. if the second goes in, then we can partially address this by having tap status reflect the dataplane status
22:19:20 <kevinbenton> armax: so then it's just up to a tool to set that status
22:19:27 <armax> kevinbenton: right, let’s circle back on the former RFE and take it from there
22:19:30 <cgoncalves> kevinbenton: it would address half of the issue, yes
22:19:46 <kevinbenton> cgoncalves: what's the other half that it leaves out?
22:20:02 <cgoncalves> armax: spec sounds good. the question is which approach we should propose first
22:20:40 <armax> cgoncalves: I think the approach that explores a new status field is probably the one with the best chances
22:20:42 <cgoncalves> kevinbenton: SDN controllers reporting through their existing APIs up to the mech driver
22:20:48 <cgoncalves> armax: ok
22:20:56 <kevinbenton> cgoncalves: oh, that's not a big deal if we have a new field
22:21:07 <kevinbenton> cgoncalves: they could just use the regular update_port core plugin api at that point
22:21:24 <cgoncalves> kevinbenton: sure
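(A sketch of the in-band path being discussed: a mechanism driver that learns dataplane state from its SDN controller could push it through the core plugin rather than REST. The 'dataplane_status' field, its values, and the helper below are assumptions ahead of the spec; the plugin lookup helper also differs between releases.)

    from neutron_lib.plugins import directory   # neutron.manager on older releases

    def report_dataplane_state(context, port_id, is_up):
        # called by a mech driver when its controller reports a change
        plugin = directory.get_plugin()
        plugin.update_port(
            context, port_id,
            {'port': {'dataplane_status': 'ACTIVE' if is_up else 'DOWN'}})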
22:21:25 <ihrachys> armax: no PUT for the start?
22:21:33 <kevinbenton> +1 to no PUT
22:21:47 <kevinbenton> and we can even have this new status force the old status to DOWN as well
22:21:48 <ihrachys> yeah, + for no PUT. we can reiterate later.
22:21:49 <armax> ihrachys: changing the semantics of PUT may not go down well for all plugins
22:22:00 <cgoncalves> so we are saying a new field for out-of-band, while in-band would not need additional work from neutron core, right?
22:22:12 <armax> but during the spec review we can find out potentially
22:22:31 <armax> cgoncalves: yes
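(For the spec, the new-field option could look roughly like a standard extension of the port resource. The attribute name, defaults and the no-PUT choice below are a sketch of the discussion above, not an agreed API.)

    DATAPLANE_STATUS = 'dataplane_status'

    EXTENDED_ATTRIBUTES_2_0 = {
        'ports': {
            DATAPLANE_STATUS: {
                'allow_post': False,   # not settable on create
                'allow_put': False,    # no REST PUT for the start
                'default': None,       # unknown until something reports it
                'is_visible': True,    # readable on GET /v2.0/ports
            },
        },
    }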
22:22:36 <ihrachys> it makes me think... isn't it in a way duplicating the /health api that Hynek is working on?
22:22:46 <cgoncalves> armax: sounds good
22:22:58 <kevinbenton> ihrachys: well this is setting a state from the API
22:23:05 <kevinbenton> ihrachys: the /health would look at this i assume as well
22:23:08 <armax> ihrachys: that’s addressing a different use case
22:23:13 <cgoncalves> ihrachys: do you have a pointer at hand?
22:23:21 <armax> I see /health as resource status on steroids
22:23:33 <ihrachys> cgoncalves: https://review.openstack.org/308973
22:23:41 <cgoncalves> ihrachys: thanks
22:23:56 <kevinbenton> i see /health as reading the status of everything and this new field as a way for plugins to say something is broken
22:24:02 <kevinbenton> at the dataplane somewhere
22:24:06 <armax> ihrachys: now the diagnostics framework could potentially include link status checking
22:24:31 <armax> ihrachys: but we’d never go for that in-tree
22:24:46 <ajo> yeah, they can be complementary, or build on each other
22:25:05 <armax> ihrachys: now as for the level of pluggability of the diagnostics framework, that is still TBD
22:25:26 <armax> questions? notes? shall we move on?
22:25:40 <ihrachys> not sure I understand why we would not go with at least a check model for link status, but ok. we can probably move on.
22:27:00 <armax> ihrachys: please do capture your thoughts on the relevant bug reports
22:27:16 <ihrachys> armax: will do
22:27:18 <armax> ihrachys: I suppose anything is possible
22:27:57 <ajo> (time check 30min, 7 RFEs left)
22:28:00 <ajo> :-)
22:28:07 <armax> bug 1580880
22:28:07 <openstack> bug 1580880 in neutron "[RFE] Distributed Portbinding for all port types" [Wishlist,Triaged] https://launchpad.net/bugs/1580880 - Assigned to Andreas Scheuring (andreas-scheuring)
22:28:18 <armax> carl_baldwin: ping
22:28:24 <carl_baldwin> o/
22:28:48 <carl_baldwin> We talked about this at the Nova mid-cycle.  johnthetubaguy is taking some interest from the Nova side.
22:28:48 <armax> carl_baldwin: anything worth sharing about this?
22:29:20 <carl_baldwin> I personally think that this ought to be driven from the Nova side for live migration.  We should prioritize it to match theirs.
22:29:36 <armax> carl_baldwin: so we need to figure out shape and scope, but it’s something that’s in Nova’s hands?
22:29:58 <armax> carl_baldwin: any other Nova developer willing to sponsor?
22:30:38 <carl_baldwin> No one spoke up willing to sponsor but there was general interest like it was something that they'd like to fix.
22:30:56 <carl_baldwin> They have a similar issue with Cinder and they'd like to see what similarities there are.
22:31:04 <armax> at this point we have the option of marking this postponed and tackling it as best effort
22:31:12 <armax> the Newton window is shut for them anyway
22:31:23 <carl_baldwin> The current goal is for John, Paul Murray, Andreas, and me to get a plan ready for summit.
22:31:36 <armax> so we can take the time to iterate on the spec and revisit as soon as Ocata opens up?
22:31:42 <carl_baldwin> Yes.
22:31:44 <armax> carl_baldwin: ok
22:31:51 <armax> I did look at the spec already
22:31:59 <armax> let’s continue the tango
22:32:06 <carl_baldwin> I read through it to.  I think it is getting better.
22:32:09 <armax> moving on?
22:32:10 <carl_baldwin> *too
22:32:15 <carl_baldwin> Yes, move on.
22:32:18 <armax> bug 1583694
22:32:18 <openstack> bug 1583694 in neutron "[RFE] DVR support for Allowed_address_pair port that are bound to multiple ACTIVE VM ports" [Wishlist,Triaged] https://launchpad.net/bugs/1583694 - Assigned to Swaminathan Vasudevan (swaminathan-vasudevan)
22:33:22 <armax> As for this one, last week we agreed we wanted to explore more formal ways to describe the particular nature of the Floating IP for the use case in which multiple ports are involved
22:33:40 <carl_baldwin> I thought about this a little bit too.  So far, I can't convince myself that a new top-level resource is needed but I don't feel strongly.
22:33:47 <armax> we’ll keep this on the back burner until we have a new proposal to look at; at this point I feel this probably has to come in the form of a spec?
22:33:58 <carl_baldwin> ++
22:33:59 <armax> carl_baldwin: right, I tend to agree to
22:34:11 <armax> too*
22:34:34 * carl_baldwin 's and armando's double-o keys don't seem to be working today.
22:34:35 <armax> but the existing model/API experience can be streamlined
22:35:21 <armax> carl_baldwin: would you still agree with this last statement?
22:35:26 <carl_baldwin> yes
22:35:47 <armax> ok
22:36:00 <armax> bug 1586056
22:36:00 <openstack> bug 1586056 in neutron "[RFE] Improved validation mechanism for QoS rules with port types" [Wishlist,Triaged] https://launchpad.net/bugs/1586056 - Assigned to Slawek Kaplonski (slaweq)
22:36:25 <ajo> \o/
22:36:35 <armax> ajo, ihrachys are you saying that this turns out to be a simple bug fix?
22:36:48 <armax> ‘simple’?
22:37:17 <ajo> well, not a simple bugfix, the current behaviour could have been considered a bug, maybe
22:37:35 <armax> ajo: is there a patch in the works?
22:37:37 <ajo> I prefer we actually track it by RFE, we even have a short spec describing the work to be done
22:37:43 <ajo> armax, yes, 1 sec
22:38:01 <ihrachys> I think it's a rather straightforward fix, though building it cleanly will require some thought. it changes behaviour of the supported rule types API, so it's an RFE.
22:38:03 <armax> https://review.openstack.org/#/c/328655/?
22:38:14 <ajo> https://review.openstack.org/#/c/319694/
22:38:33 <armax> oh boy you got a -1 from garyk
22:38:39 <ajo> :P :)
22:38:45 <ihrachys> :D
22:39:00 <armax> ok +794,-35
22:39:17 <ajo> it's contained in a way that it's only activated via callbacks if qos is enabled
22:39:18 <kevinbenton> whooo, it deletes 35 lines! :)
22:39:23 <ajo> cleanup! ;)
22:39:49 <armax> ajo: and you still want a spec?
22:40:20 <ajo> the spec seems to be fine, we used it to agree on the high level details of the implementation
22:40:32 <armax> ajo: is there a pending spec too?
22:40:38 <ajo> 1 sec
22:40:38 <ihrachys> https://review.openstack.org/#/c/323474/
22:40:43 <ihrachys> the spec ^
22:40:50 <ajo> correct
22:41:02 <ihrachys> it's fine. I don't insist on having one, but since it's already there...
22:41:25 <armax> ihrachys: ok, it seems most of the legwork is done
22:41:28 <ajo> basically we're trying to reconcile the heterogeneity of a deployment (different vnic types, different port bindings... with different capabilities)
22:41:41 <armax> ajo: but you did it with no api changes?
22:41:44 <ajo> and tell the admin when it's going to do something that does not work
22:41:54 <ajo> armax, correct, we will only forbid things that don't work
22:41:56 * armax must read it to learn how they pulled that off
22:42:06 <armax> ok
22:42:17 <ajo> like trying to set a policy not compatible with an SR-IOV port
22:42:35 <ihrachys> no api changes, that was the original concern; now it's properly isolated in scope.
22:42:44 <ajo> or trying to change a policy in a way that it becomes incompatible to a bound port
22:42:55 <amotoki> i think error conditions in the API will be changed. correct?
22:43:13 <ajo> amotoki, we will provide more error conditions (conflict probably)
22:43:18 <ajo> and document them
22:43:30 <ajo> but no parameters changed, or REST methods added
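(A toy sketch of the kind of check ajo describes: refuse to associate a QoS policy with a port whose vnic type cannot honour one of its rule types, surfacing a conflict instead of silently doing nothing. The capability table, exception and helper names are illustrative; the real patches wire this through callbacks.)

    # hypothetical capability table keyed by vnic_type; illustrative only
    SUPPORTED_RULE_TYPES = {
        'normal': {'bandwidth_limit', 'dscp_marking'},
        'direct': {'bandwidth_limit'},   # e.g. SR-IOV ports
    }

    class QosRuleNotSupported(Exception):
        """Would surface as an HTTP 409 Conflict at the API layer."""

    def validate_policy_for_port(policy_rules, vnic_type):
        allowed = SUPPORTED_RULE_TYPES.get(vnic_type, set())
        for rule in policy_rules:
            if rule.rule_type not in allowed:
                raise QosRuleNotSupported(
                    'rule type %s not supported for vnic_type %s'
                    % (rule.rule_type, vnic_type))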
22:44:02 <armax> ok, I can’t see why we can’t proceed with this one, I’ll look at the outstanding patches
22:44:18 <armax> I assume that both of you can take care of this in time for Newton?
22:44:46 <ihrachys> I will trade reviews for that for some of my pending patches... :)
22:44:57 <ajo> thanks, yes, I hope we can get it done for newton
22:45:03 <armax> ihrachys: it doesn’t work like that, but nice try
22:45:03 <ajo> hehe, that will be welcomed :P :)
22:45:04 <armax> :)
22:45:19 <ihrachys> damn!
22:45:22 <ajo> I thought the trade offer was to me :P
22:45:23 <ihrachys> let's move on
22:45:26 <armax> ok
22:45:27 <ihrachys> ajo: it was
22:45:29 <armax> bug 1592000
22:45:29 <openstack> bug 1592000 in neutron "[RFE] Admin customized default security-group" [Wishlist,Triaged] https://launchpad.net/bugs/1592000 - Assigned to Roey Chen (roeyc)
22:46:08 <ihrachys> I think it inherently makes openstack less compatible
22:46:19 <ihrachys> however we implement or expose the feature.
22:46:20 <armax> I suppose ihrachys’ comment was the nail in the coffin
22:46:39 <ihrachys> since the contents of the group have been a contract for a long time.
22:46:41 <armax> I have been pushing back on this myself
22:47:00 <kevinbenton> nova allowed this right?
22:47:17 <armax> kevinbenton: allegedly
22:47:30 <armax> kevinbenton: though I am not sure if they ever removed the mechanism after juno
22:48:00 <armax> so right now we are 2 -2
22:48:16 <ihrachys> well, in a way, one -2 and one -1
22:48:25 <armax> is anyone willing to argue against the unfavorable votes?
22:48:30 <armax> ihrachys: same difference
22:48:43 <ajo> well, the incompatibility is a matter of people getting used to properly set up the default SG
22:48:48 <armax> what do other folks reckon?
22:49:06 <ajo> I believe there's a good use case when admins want to setup a higher level of security
22:49:29 <ajo> it's not the first time I heard that from an operator, but they didn't insist too much, they had more pressing things
22:49:31 <armax> ajo: bear in mind that this is somewhat already possible
22:49:41 <amuller> armax: It is?
22:49:44 <ajo> armax, what do you mean?
22:49:46 <dougwig> the current nonsense is bullshit that leads to literally every new tenant sending a request, "i can't ping my instances!", so everyone scripts a tenant create with their own default anyway.
22:49:51 <armax> tenant onboarding
22:49:59 <armax> you create a default security group with your junk in it
22:50:10 <ajo> that's true
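(What that onboarding script tends to look like, as a minimal sketch with a recent openstacksdk; the cloud name, project id and the chosen rules are placeholders, and the default group is created lazily by neutron, so it may not exist yet for a brand-new project.)

    import openstack

    conn = openstack.connect(cloud='mycloud')   # admin credentials assumed
    project_id = 'NEW_TENANT_ID'                # placeholder

    sg = conn.network.find_security_group('default', project_id=project_id)

    # open ICMP ingress so new tenants can ping their instances
    conn.network.create_security_group_rule(
        security_group_id=sg.id, direction='ingress',
        ethertype='IPv4', protocol='icmp')
    # open SSH ingress
    conn.network.create_security_group_rule(
        security_group_id=sg.id, direction='ingress', ethertype='IPv4',
        protocol='tcp', port_range_min=22, port_range_max=22)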
22:50:26 <armax> but do we want to give the admin the rope?
22:50:29 <armax> I’d rather not
22:50:46 <armax> I’d rather send the opposite signal
22:51:00 <armax> dougwig: those tenants should RTFM
22:51:15 <dougwig> armax: unrealistic for end users.
22:51:20 <armax> dougwig: nonsense
22:51:26 <armax> end users who?
22:51:33 <armax> my grandpa?
22:51:37 <armax> come on!
22:51:38 <armax> :)
22:51:44 <dougwig> and this is why openstack sucks for public clouds.
22:51:58 <amotoki> i am not 100% sure this leads to incompatibility. this sounds like a possible use case and an API consumer can know what rules are provisioned.
22:52:00 <kevinbenton> dougwig: aws is default closed, no?
22:52:01 <armax> that’s EC2 behavior too
22:52:08 <ihrachys> no, it sucks because we change its behaviour every second cycle. oh wait.
22:52:38 <dougwig> nah, ec2 walks you through the SG as part of launch, so it hits you in the face.
22:52:41 <amuller> amotoki: it would be discoverable but every cloud could have a different default
22:52:46 <kevinbenton> dougwig: so that has nothing to do with neutron
22:52:49 <kevinbenton> we've talked about this before
22:52:51 <armax> dougwig: and so is horizon
22:52:51 <amotoki> a question is whether we need to ensure the default rules or we can tell consumers to check the default rules through the API.
22:52:52 <ihrachys> amotoki: existing apps could not retroactively know that neutron will decide to screw them.
22:52:54 <kevinbenton> it sounds like you want a horizon feature
22:53:08 <dougwig> kevinbenton: i want unicorns and cake, too.
22:53:26 <armax> ok, let’s assume that this is not going anywhere anytime soon
22:53:27 <njohnston> If people really want this, I think the FWaaS v2 spec covers this use case.
22:53:33 <armax> perhaps we can involve the nova folks just to stir the pot
22:53:40 <armax> let’s move on
22:53:50 <armax> bug 1596611
22:53:50 <openstack> bug 1596611 in neutron "[RFE] Create floating-ips with qos" [Wishlist,Triaged] https://launchpad.net/bugs/1596611 - Assigned to LiuYong (liu-yong8)
22:53:50 <kevinbenton> dougwig: but this would suck more for public clouds if we left it default open
22:54:07 <dougwig> kevinbenton: it's really hurt digital ocean a lot. not.
22:54:20 <ihrachys> that one, I don't believe it's achievable with the current state of traffic classification in neutron (which is non-existent)
22:54:32 <armax> ihrachys: way ahead of you
22:54:48 <armax> ihrachys: we do have a mechanism to postpone
22:54:56 <kevinbenton> dougwig: digital ocean beating amazon? :)
22:55:20 <ajo> I believe we can't tackle that yet
22:55:24 <ihrachys> then let's do it. I would mark TC effort as a dep for that RFE, but that's probably something not supported for bugs but for bps only. good riddance.
22:55:24 <dougwig> kevinbenton: bah, we pick and choose using AWS as our PRD, depending on our biases.
22:55:25 <armax> ihrachys: your comment sums it up pretty well
22:55:25 <ajo> eventually we will be able
22:55:32 <armax> bug 1603833
22:55:32 <openstack> bug 1603833 in neutron "If we need a host filter in neutron ?" [Wishlist,Triaged] https://launchpad.net/bugs/1603833
22:55:42 <armax> this one I think can be tackled by nova’s scheduler filter mechanism
22:55:49 <armax> anyone can comment?
22:55:50 <amuller> dougwig: it's a useful data point is all
22:55:54 <amuller> dougwig: non-binding
22:56:05 <dougwig> armax: does the nova scheduler have the neutron net when it runs?
22:56:17 <armax> dougwig: I suppose they must
22:56:35 <ajo> that's related to the nova generic resource pool integration
22:56:35 <ihrachys> armax: I am not 100% sure that's the only goal, but if it's about bandwidth oversubscribing, then I think it's a dup for another bug I mentioned there.
22:56:37 <armax> the godaddy guys developed the IP availability API for a similar use case
22:56:48 <ajo> and, also related to strict min bandwidth limit (when they talk about bandwidth)
22:57:08 <ajo> we have a QoS RFE for that, but we need to wait on nova to be ready before jumping in
22:57:12 <armax> ok, let’s continue the chat on the bug to further scope it
22:57:28 <armax> I suppose this is a first, have we ever managed to finish the entire list in one meeting?
22:57:53 <kevinbenton> yes
22:57:54 <carl_baldwin> I don't think we have
22:58:27 <armax> ok
22:58:34 <armax> let’s get 2 mins back
22:58:39 <armax> #endmeeting