16:59:32 <Kiall> #startmeeting Designate
16:59:33 <openstack> Meeting started Wed Dec 10 16:59:32 2014 UTC and is due to finish in 60 minutes.  The chair is Kiall. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:59:35 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:59:37 <openstack> The meeting name has been set to 'designate'
16:59:43 <Kiall> Heya - Who's about?
16:59:57 <timsim> o/
17:00:03 <mugsie> o/
17:00:21 <Kiall> Lots of agenda items, so we'll likely want to be quick today :)
17:00:22 <Kiall> #link https://wiki.openstack.org/wiki/Meetings/Designate
17:00:24 <rjrjr_> o/
17:00:41 <betsy> o/
17:01:08 <Kiall> #topic Action Items from last week
17:01:17 <Kiall> all review new-agent spec - we'll discuss in the topic for this, so moving on :)
17:01:29 <Kiall> #topic Kilo Release Status (kiall - recurring)
17:01:32 <Kiall> #link https://launchpad.net/designate/+milestone/kilo-1
17:02:10 <Kiall> Very quickly, good progress, I think we'll want to file bugs+BPs for some of the outstanding issues (including the ones rjrjr_'s latest patchset fixes) so they're tracked here..
17:02:26 <vinod1> reload config files and tsigs - are we still targeting those?
17:02:33 <Kiall> rjrjr_: I'm hoping I can ask you to file those?
17:02:53 <betsy> I’d like to get the pool storage changes in
17:02:58 <Kiall> tsig yes - I think we need it for more than 1 pool, it's a simple enough thing to do though.
17:03:02 <rjrjr_> sure.
17:03:21 <Kiall> reload config - if it happens to land - great - it's "Low" priority so can be bumped if need be
17:03:23 <Kiall> thanks rjrjr_ :)
17:03:53 <Kiall> Anyway, I don't think there's much of note to discuss on the status this week. So moving on again
17:04:14 <Kiall> #topic Pools - Where are we? (kiall - recurring)
17:04:34 <Kiall> So, betsy has some changes, as does rjrjr_, and we have the various backends to get updated.
17:04:50 <Kiall> I'll talk to powerdns and the "Pools config definition issue (kiall)" in a sec - update from anyone else?
17:05:06 <timsim> Anything vinod1?
17:05:10 <rjrjr_> should have some unit tests for pool manager to push this evening.
17:05:22 <mugsie> cool
17:05:38 <vinod1> no - no updates from me
17:06:04 <Kiall> Okay... So, while transitioning the PowerDNS backend, I ran into an issue around how pools config is handled
17:06:30 <Kiall> Some backends (e.g. PowerDNS ;)) need to use the configuration outside of the pool manager service - for example - in database migrations
17:07:16 <Kiall> Right now, the pool-manager service and pool backend are fairly linked together - and, I believe we'll need to split them apart a bit.
17:07:17 <rjrjr_> want me to move that functionality to central?
17:07:45 <Kiall> While working through trying to split them, I've run into 2 other issues with the way it works...
17:08:10 <Kiall> 1) Because the config is dynamic, we're blocking ourselves from being able to use sample config generation, and config reloading
17:08:45 <Kiall> 2) The structure seems like it might need to change, some backends will need to fan out changes (bind), while others need to keep in central (powerdns)
17:09:38 <mugsie> Kiall: not in central
17:09:56 <Kiall> ah - not the service
17:10:01 <Kiall> central as in centralized
17:10:07 <mugsie> ;)
17:10:09 <Kiall> 1 DB, multiple servers.
17:10:20 <Kiall> vs bind - DB is files on disk and local to each server
17:10:54 <Kiall> So - As part of trying to split the config from the pool manager service, I've been trying to make sure we don't block ourselves from doing the 2 items in (1), and only just realized issue (2) before my last meeting.
17:11:11 <Kiall> (Everyone following? Not 100% sure I'm explaining this properly!)
17:11:21 <rjrjr_> i'm not following.
17:11:49 <Kiall> Okay - let me take 1 part at a time :)
17:11:50 <rjrjr_> what configuration data is needed for powerdns, for example, that won't be used in pool manager, but used elsewhere?
17:12:16 <rjrjr_> do you have an example of the problem?
17:12:27 <Kiall> So - for PowerDNS, we have to use the same configuration options from both the pool manager service, and the designate-manage CLI while performing DB migrations
17:12:58 <rjrjr_> okay, that makes sense.
17:13:00 <Kiall> The way the config is instantiated today would require the CLI to spin up pool manager in order to get the config
17:13:10 <Kiall> Okay - Next piece, is:
17:13:52 <Kiall> In order to not build this in a way that blocks sample config generation in the future, we need to be able to provide a method like this to oslo.config for option discovery:
17:14:14 <Kiall> https://github.com/openstack/designate/blob/master/designate/openstack/common/sslutils.py#L41
17:14:55 <Kiall> the list_opts() method needs to be aware of all the options, which is going to be difficult with the current config structure
17:15:24 <Kiall> Not a major thing, we just don't want to lock ourselves out of doing that since we're changing it anyway
17:15:28 <timsim> If it needs to be used outside of the PM, right?
17:15:55 <Kiall> timsim: yes - an example from Heat on where they get used is here: https://github.com/openstack/heat/blob/master/setup.cfg#L35
17:16:23 <Kiall> oslo.config's generator will find+load those modules, run the method, and expect to get a full set of opts back after calling each of them
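For context, a minimal sketch of the sort of discovery hook being described here - the group and option names are illustrative, not Designate's actual ones:

    # Sketch of an oslo.config sample-generator hook (illustrative names,
    # modern oslo_config namespace assumed). The generator imports the
    # module named in setup.cfg and calls list_opts() to discover every
    # option without starting any service.
    from oslo_config import cfg

    _bind9_opts = [
        cfg.ListOpt('servers', default=[],
                    help='DNS servers to poll for change status.'),
    ]

    def list_opts():
        # Return (group name, options) pairs covering the full option set.
        return [('backend:bind9', _bind9_opts)]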
17:17:03 <timsim> Ok, I guess there couldn't be designate-manage-specific config options, for example? That doesn't really solve the bigger issue, but could you just duplicate config if you had to?
17:17:05 <Kiall> All clear so far?
17:17:17 <rjrjr_> yes.
17:17:22 <timsim> yep
17:17:26 <Kiall> timsim: yea, it's just something to keep in mind while we make the other necessary changes anyway
17:17:30 <timsim> For sure
17:17:47 <Kiall> Next piece was the potential need to restructure the layout of the config.. Specifically, we have different sections in the pools config today:
17:18:02 <Kiall> [backend:bind9:*] <-- global defaults for all bind9 servers in the pool
17:18:13 <Kiall> [backend:bind9:<server-id>] <-- server specific overrides
17:18:39 <Kiall> That works great for bind9 - where each bind9 server needs to be contacted for each change (add/remove zone etc)
17:19:13 <Kiall> vs with powerDNS - where you want to submit the add/remove once as it gets pushed into a shared DB
17:19:45 <Kiall> And - The [backend:bind9:<server-id>]  sections are used to determine which DNS servers to poll for change status
17:19:53 <Kiall> (e.g. 90% of servers have a change, so mark it as ACTIVE)
17:20:15 <Kiall> End result is we have a conflict there between wanting to check all the servers, while pushing changes to a single place..
17:20:41 <timsim> That makes sense, it seems like ideally there would be a single place to do *something* before reaching out to those servers
17:21:08 <Kiall> Not 100% sure if that's the extent of that issue, if it's exaggerated, or if there's more to it.. Only realized it shortly before my last meeting - which was back to back with this
17:21:20 <timsim> (I actually thought of an issue in our use case that suffers from a similar issue)
17:21:29 <rjrjr_> we can add an option to the global defaults section to handle the single versus multiple work
17:21:33 <Kiall> it also plays into timsim's agent proposal - where you might push to 2x bind masters, and poll the real servers for publication status
17:22:08 <Kiall> rjrjr_:  yea - My thinking is the set of servers to poll is a simple list of IP:PORT pairs in the main config
17:22:28 <rjrjr_> main config?
17:22:32 <Kiall> Followed by [backend:bind9:<server-id>] changing to [backend:bind9:<target-index>]
17:22:46 <Kiall> where the "target" is where changes get pushed to (probably needs a better name)
17:23:10 <timsim> So for PowerDNS a database, for a Master-Slave bind9, the Masters?
17:23:27 <Kiall> rjrjr_: a "servers = 1.2.3.4:53, 4.3.2.1:53" option inside the [backend:bind9:*]  section
17:23:43 <Kiall> (Also - Apologies about this being an unfiltered braindump ;))
17:23:56 <Kiall> timsim: yea, exactly.
17:24:11 <betsy> kiall: no worries :) We appreciate your thoughts
17:24:49 <Kiall> for each target you would create a backend instance, for bind9, that's 1 for every server you want to `rndc addzone` against, for powerdns it's every DB you want to write changes to (which could be 1 DB cluster, or a standalone DB on every powerDNS server)
17:25:04 <timsim> That makes sense to me. In most cases I would think that it isn't as simple as "apply changes to a and b, poll a and b"
17:25:58 <Kiall> timsim: Yes - Exactly, bind9 is "apply to serverA+serverB, poll serverA+serverB" - While PowerDNS is "apply to serverA, poll serverB+serverC"
17:26:04 <timsim> And then for each target, do you specify a list of servers to poll?
17:26:24 <timsim> Or is that global for the backend?
17:26:33 <mugsie> timsim: i would think the list of polling servers would be per backend
17:26:40 <mugsie> / pool manager
17:26:40 <Kiall> I *think* they're unrelated to each other .. the "servers to poll" is per pool
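To make the braindump concrete, one possible shape of the layout being proposed - section names follow the discussion above, the option names are illustrative:

    [backend:bind9:*]
    # per-pool defaults, including the servers to poll for change status
    servers = 1.2.3.4:53, 4.3.2.1:53

    [backend:bind9:0]
    # target 0 - where changes get pushed: for bind9 a server to
    # `rndc addzone` against, for powerdns a DB to write changes to
    host = 1.2.3.4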
17:26:56 <Kiall> rjrjr_: am I making sense? :)
17:26:57 <timsim> Fair enough.
17:27:21 <rjrjr_> yes, but i'm going to have to think about this.
17:27:39 <rjrjr_> i understand the problem.  i think i understand what you are proposing as a solution.
17:27:48 <timsim> There should probably be a writeup with some examples.
17:27:56 <Kiall> Yea, as I said, I discovered it about 1.5 hrs ago, and have been in meetings the last 1.2 hours ;)
17:28:08 <rjrjr_> but i'd like to give it some thought as there might be other more elegant solutions.
17:28:27 <timsim> All good. Maybe we circle around next week when folks have time to think about it and hopefully review a doc?
17:28:27 <Kiall> But - As it's semi-related to the config issues I've been trying to solve for PowerDNS, I figured I'd raise it now
17:28:50 <rjrjr_> we need to solve this in the next couple of days, IMHO.
17:29:00 <betsy> rjrjr_ +1
17:29:08 <rjrjr_> 18th is approaching fast.
17:29:12 <Kiall> rjrjr_:  this specific part doesn't actually block porting PowerDNS... It's the config tied to PM piece that is
17:29:28 <Kiall> That piece I'll have a review up for in the next few hours
17:29:57 <Kiall> then PowerDNS can be ported, just "shared DB" won't work
17:30:30 <Kiall> Okay - shall we move on? 2 topics down, 3 to go, and halfway through our time
17:30:45 <rjrjr_> let's move on.
17:30:47 <timsim> Sure
17:30:56 <Kiall> KK
17:31:01 <Kiall> #topic Icehouse Support Life (kiall)
17:31:37 <Kiall> So - zigo (from debian) has been asking if we can commit to supporting icehouse designate (security issues mainly) for Debian Jessie release + ~3yr
17:31:49 <Kiall> If the answer is no, he has to pull it from debian before the Jessie release..
17:32:10 <Kiall> Otherwise, he leaves it in and we commit to providing fixes for any security issues etc found
17:32:20 <Kiall> Thoughts?
17:32:53 * Kiall loves the silence
17:32:54 <rjrjr_> 3 years is a long time.
17:33:03 <timsim> It seems like something we could do without too much effort to me? Unless something awful comes up.
17:33:06 <mugsie> I would say provisionally yes. but - people may need buy in from management
17:33:10 <betsy> And it’s v1 api not v2
17:33:11 <Kiall> that's my thinking too ;) But - Bearing in mind it's just security issues...
17:33:30 <mugsie> but - security issues that could be .... interesting to fix ;)
17:33:37 <Kiall> (any bugs are running in production in HP Cloud.. So.. It works ..)
17:33:37 <mugsie> especially if libs change etc
17:33:40 <rjrjr_> we'll be burdening future Designate developers with this. 8^)
17:33:47 <Kiall> ;)
17:34:04 <Kiall> The other projects have been posed the same Q BTW ;)
17:34:06 <mugsie> that said, the benefit of being in debian by default is huge
17:34:20 <mugsie> (for people using the software)
17:34:25 <rjrjr_> did all the other teams say yes?
17:34:28 <timsim> If other projects are doing it, I feel like we should too.
17:34:32 <mugsie> rjrjr_: afaik yes
17:34:46 <betsy> Seems like it would be a good thing to do
17:34:46 <mugsie> unless Kiall has other info
17:35:00 <rjrjr_> or is this like an interrogation of several witnesses?  so and so said yes, so what do you say?
17:35:02 <mugsie> yeah. that is why I am edging towards yes
17:35:04 <rjrjr_> LOL
17:35:06 <Kiall> kinda..
17:35:18 <Kiall> <zigo> mugsie: A lot of organizations have agreed to provide security support for Icehouse for at least 3 years: Red Hat, Canonical, IBM, Mirantis, etc.
17:35:27 <vinod1> 2014-12-10T11:27:49  <mugsie> zigo: what are nova / neutron / cinder etc doing for support? Our releases are managed by the release team, who will tag Icehouse as EOL in May
17:35:27 <vinod1> 2014-12-10T11:28:20  <mugsie> how are other openstack projects dealing with it? (or are they?)
17:35:27 <vinod1> 2014-12-10T11:28:48  <zigo> mugsie: A lot of organizations have agreed to provide security support for Icehouse for at least 3 years: Red Hat, Canonical, IBM, Mirantis, etc.
17:35:27 <vinod1> 2014-12-10T11:29:03  <zigo> mugsie: So if it doesn't happen upstream, it will happen on downstream distributions.
17:35:27 <vinod1> 2014-12-10T11:29:12  <zigo> mugsie: Though this doesn't include Designate, which is why I have a problem.
17:35:45 <mugsie> yeah, but it was never called out which projects ;)
17:35:55 <Kiall> i.e. projects get asked, and the answers have come in as "Yes" - Just it's not the project itself directly saying "yes"
17:35:56 <betsy> Any other incubated projects?
17:36:01 <rjrjr_> organizations doesn't = projects
17:36:07 <mugsie> yup
17:36:09 <Kiall> I think we're the only incubated project packaged ;)
17:36:24 <vinod1> if we miss this release of debian - when is the next release that we can get into?
17:36:24 <Kiall> rjrjr_: yea, that's my "kinda" and why I pasted that particular quote :)
17:36:30 <mugsie> now, most of these orgs have their own distros, so will be providing support anyway
17:36:36 <mugsie> for the main projects
17:37:07 <timsim> Maybe one of those orgs could come in and help us if work were required of us?
17:37:33 <Kiall> Anyway - Here's my thinking, I'm tempted to say "Yes" as an individual...
17:37:33 <rjrjr_> i don't have a strong opinion.  so i'll go with whatever the team thinks.  seems like it can get messy but we have no way of knowing.
17:37:55 <mugsie> Kiall: yeah - that is my thinking
17:38:19 <mugsie> does anyone have any strong objections?
17:38:29 <timsim> No strong objection.
17:38:35 <betsy> nope
17:38:39 <Kiall> Okay - So if others want to throw their name in, or not, that's OK..
17:39:01 <rjrjr_> would it be the Designate team doing this versus individuals?
17:39:02 <Kiall> Let me know after the meet or whenever etc etc
17:39:31 <vinod1> when an issue is reported - how much time do we have to fix the issue?
17:39:32 <Kiall> rjrjr_: a commit needs to be made for it to stay, I don't believe it *needs* to be the project, or an org ..
17:39:47 <clarkb> keep in mind upstream infra will be killing py26 testing when juno is EOL'd upstream
17:39:55 <clarkb> not sure if that matters, but it may be useful info :)
17:39:56 <rjrjr_> might as well commit the team and our future children. 8^)
17:40:11 <Kiall> clarkb: yep, I know :)
17:40:21 <mugsie> clarkb: after Icehouse is EOL'd, upstream infra is lost to us anyway, isn't it?
17:40:38 <Kiall> vinod1: I don't think there's hard deadlines etc.. Security fixes get reported, embargoed, and fixed, except for when someone reports it on a blog or pastebin ;)
17:40:52 <timsim> It seems logical that the team (whoever that may be) should fix those issues should they crop up.
17:41:03 <clarkb> mugsie: I think that if there was a group of people saying we are going to support this, we wouldn't EOL it for that group. But I may be mistaken
17:41:21 <clarkb> mugsie: typical response to longer support periods has been "please show up and we will help you"
17:41:30 <mugsie> clarkb: ok... that might be something for us to look at as well Kiall
17:41:32 <Kiall> timsim: yep, and to a certain extent I think that will happen regardless of who commits, bearing in mind OpenStack as a group no longer supports it
17:41:36 <mugsie> clarkb: thanks
17:41:39 <Kiall> (so CI is gone etc)
17:41:53 <mugsie> Kiall: read scrollback ;)
17:42:08 <clarkb> mugsie: the reason we have strongly EOLd things is no one works on them and we want to avoid the appearance that things are still supported
17:42:10 <Kiall> ;)
17:42:11 <Kiall> Heh
17:42:19 <timsim> So I think we're agreed then?
17:42:45 <Kiall> clarkb: right, and this wouldn't be "actively maintained" so much as providing distros with the patches they need to slip into their packaging
17:43:07 <Kiall> i.e. I still think infra would EOL py26 testing, and eol the branch
17:43:25 <mugsie> We can ask, and double check after this meeting though
17:43:29 <Kiall> and we'd work outside of infra when/if an issue comes in
17:43:38 <clarkb> definitely EOL py26 testing (it needs to go away; python EOL'd that version forever ago)
17:43:47 <clarkb> Kiall: gotcha
17:43:57 <Kiall> Anyway - Time is ticking on, 15 mins left and I'd like to move on to Tim's topic.
17:44:02 <Kiall> thanks for the info clarkb :)
17:44:33 <Kiall> Any other Q's etc on this, let's sync up outside the meet. For now - I'll tell zigo that at a min, myself and mugsie can commit…
17:44:46 <Kiall> Going to skip the next topic and move to tims..
17:44:50 <Kiall> #topic Agent Spec/Code Discussion (Tim)
17:44:56 <Kiall> timsim: You're up
17:44:57 <timsim> It seems like few people had time to review the spec (https://review.openstack.org/#/c/131530/3) so it seems like a bit of a push to get it in K1. If anyone can take a look at it soon, that'd be great :)
17:45:01 <timsim> I do have some WIP code for it, and if people aren't opposed to me putting it up I will (5 patchsets, 1k lines total :/). It is a bit premature with so little feedback on the spec. If there are issues with the spec, having the code out there seems semi-pointless. Thoughts?
17:45:30 <mugsie> I think we can safely say K-1 will not ship with support for the old backends
17:45:52 <mugsie> and we can target the agent for early in K-2
17:46:04 <Kiall> timsim: I've read it, and believe it's fine, the proposal we made last week about support for IPA style backends would be an add-on to this  - just needing some minor allowances for it
17:46:19 <mugsie> not opposed to the code going up either timsim :)
17:46:21 <betsy> timsim: I don’t see any problem with putting out the code even without the spec approved
17:46:38 <rjrjr_> agreed
17:46:43 <Kiall> mugsie: I think so, given time, and the other bugs blocked on switching central 100% to pools, I agree we need to break in k1 and fix in k2
17:47:05 <Kiall> betsy / timsim: I think it's even better if you do, since it can give a clearer understanding in some ways :)
17:47:23 <timsim> Alright, it's quite young.
17:47:32 <timsim> And it might be called "slappy" everywhere :x
17:47:33 <Kiall> (As a general comment - Bearing in mind people may totally disagree and make you or someone else start over ;))
17:47:52 <timsim> Yeah, fair enough.
17:48:13 <Kiall> So - Making this decision impacts IPA support, and InfoBlox support in K1 (they have a driver up for review)
17:48:33 <Kiall> I saw johnbelamaric join towards the beginning of the meet
17:48:38 <johnbelamaric> yep, i am here
17:48:46 <johnbelamaric> got here a bit late
17:49:21 <timsim> I'm going to have to re-read the IPA stuff, I'm a little fuzzy on that.
17:49:22 <Kiall> Okay - So, I'm guessing you guys are against breaking in K1 and fixing in K2? :)
17:49:26 <johnbelamaric> Kiall: i pushed an update addressing your comments
17:49:42 <Kiall> #link https://wiki.openstack.org/wiki/Kilo_Release_Schedule
17:49:46 <Kiall> ^ release schedule for K1 and K2
17:49:51 <johnbelamaric> well, yes, but it's not urgent for us at this time, as long as it gets fixed in K2
17:50:03 <johnbelamaric> because no customers are going to pick it up on the active branch
17:50:30 <Kiall> ++ The commit would need to be a fix in K2
17:50:44 <Kiall> As a team, how do we feel about being able to make that commit?
17:50:54 <mugsie> I think we can (and need to) do it
17:51:06 <timsim> By Feb 5, I think that's reasonable.
17:51:08 <rjrjr_> agreed
17:51:23 <vinod1> +1
17:51:44 <johnbelamaric> let me know if our team can help - we don't have a lot of bandwidth but could help out some if needed
17:51:58 <Kiall> It's also worth noting for any future cases like this - If we were an integrated, rather than incubated, project.. This decision wouldn't be up to us.. It would be a hard "no, you can't do that"
17:52:47 <timsim> Maybe at that point, we would be allowed a feature-branch :P
17:52:50 <Kiall> johnbelamaric: excellent, I'm betting we'll need a little extra - especially around reliability fixes/testing the fix mid-to-late K2
17:53:05 <Kiall> timsim: check out the Neutron feature branches, and ask yourself if you want that rebase ;)
17:53:16 <johnbelamaric> ok
17:53:46 <Kiall> timsim: https://github.com/openstack/neutron/compare/feature/lbaasv2...master
17:53:50 <Kiall> "This page is taking way too long to load."
17:53:57 <Kiall> That's how fun it's gonna be
17:54:05 <mugsie> Kiall: not necessarily
17:54:08 <timsim> :P Hopefully ours wouldn't be that bad.
17:54:13 <rjrjr_> nah, we are better than that. 8^)
17:54:18 <mugsie> that's just bad management by the lbaas teams ;)
17:54:19 <Kiall> ;) Anyway
17:54:24 <timsim> Anyway, do we want to circle back to your other item Kiall, or do open discussion?
17:54:39 <Kiall> So - Any objections to the break fix before we move on?
17:54:52 <timsim> Nope.
17:54:52 <mugsie> nope
17:55:00 <vinod1> nope
17:55:09 <johnbelamaric> no
17:55:10 <betsy> No
17:55:14 <rjrjr_> no
17:55:33 <Kiall> Okay - That's a full house of "no"'s for attendees.
17:55:51 <Kiall> The skipped topic was:
17:55:53 <Kiall> #topic Periodic Sprints on Docs/Bugs/etc/etc (kiall)
17:56:41 <Kiall> timsim made a comment the other day that gave me an idea, we should organize a monthly or so half day (so we fit TZs in) sprint on things like docs/bug fixing/bug triage/planning/various other topics etc etc
17:56:53 <timsim> Personally I think this would be super cool.
17:56:58 <mugsie> yup - +2
17:57:07 <timsim> And a good precedent to set as the project grows.
17:57:25 <vinod1> +2
17:57:29 <betsy> +1
17:57:36 <rjrjr_> what is the venue?  chat?
17:57:48 <Kiall> Yep, I think it may even help newcomers join the project too, if any turn up, they have the whole group on hand for that half day and some clear + set goals for the day :)
17:57:56 <betsy> Google hangout?
17:57:59 <Kiall> rjrjr_: I was thinking we could hold in Paris
17:58:04 <Kiall> (kidding ;))
17:58:06 <rjrjr_> i'm in.
17:58:07 <timsim> Kiall +10000
17:58:10 <Kiall> Anyway - Other teams use IRC for it
17:58:12 <betsy> kiall: +1 :)
17:58:21 <Kiall> 2 mins -
17:58:44 <Kiall> Think about it, get mgmt buy-in if you like the idea, and come up with ideas for the sprints. Moving on .. 2 mins left ;)
17:58:47 <Kiall> #topic Open Discussion
17:58:52 <Kiall> Any other topics?
17:59:09 <mugsie> i am good
17:59:13 <timsim> I'm good
17:59:23 <vinod1> i'm good too
17:59:28 <rjrjr_> just a statement, we demo'd the Horizon plugin on Monday.  i'll have followup discussions about it later.
17:59:36 <rjrjr_> i'm good.
17:59:37 <Kiall> rjrjr_: cool :)
17:59:39 <betsy> Nothing from me
17:59:46 <Kiall> Thanks all :)
17:59:46 <johnbelamaric> question - will you have a chance to review the infoblox backend, or do we wait for k2?
17:59:47 <mugsie> rjrjr_: cool - just ping us with anything you guys have
18:00:04 <mugsie> johnbelamaric: we should review it anyway
18:00:08 <johnbelamaric> thanks!
18:00:10 <mugsie> but might not merge it
18:00:13 <johnbelamaric> ok
18:00:14 <Kiall> johnbelamaric: Good Q - Review Yes, merge, probably not straight away - concentrating on K1 right now
18:00:24 <johnbelamaric> thanks, works for me.
18:00:39 <Kiall> Okay, thanks all.. Trove will start beating me if we don't get out ;')
18:00:43 <Kiall> #endmeeting