16:59:32 #startmeeting Designate
16:59:33 Meeting started Wed Dec 10 16:59:32 2014 UTC and is due to finish in 60 minutes. The chair is Kiall. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:59:35 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:59:37 The meeting name has been set to 'designate'
16:59:43 Heya - Who's about?
16:59:57 o/
17:00:03 o/
17:00:21 Lots of agenda items today, so we'll likely want to try to be quick today :)
17:00:22 #link https://wiki.openstack.org/wiki/Meetings/Designate
17:00:24 o/
17:00:41 o/
17:01:08 #topic Action Items from last week
17:01:17 all review new-agent spec - we'll discuss in the topic for this, so moving on :)
17:01:29 #topic Kilo Release Status (kiall - recurring)
17:01:32 #link https://launchpad.net/designate/+milestone/kilo-1
17:02:10 Very quickly, good progress, I think we'll want to file bugs+BPs for some of the outstanding issues (including the ones rjrjr_'s latest patchset fixes) so they're tracked here..
17:02:26 reload config files and TSIGs - are we still targeting those?
17:02:33 rjrjr_: I'm hoping I can ask you to file those?
17:02:53 I’d like to get the pool storage changes in
17:02:58 TSIG yes - I think we need it for more than 1 pool, it's a simple enough thing to do though.
17:03:02 sure.
17:03:21 reload config - if it happens to land - great - it's "Low" priority so can be bumped if needs be
17:03:23 thanks rjrjr_ :)
17:03:53 Anyway, I don't think there's much of note to discuss on the status this week. So moving on again
17:04:14 #topic Pools - Where are we? (kiall - recurring)
17:04:34 So, betsy has some changes, as does rjrjr_, and we have the various backends to get updated.
17:04:50 I'll talk to powerdns and the "Pools config definition issue (kiall)" in a sec - update from anyone else?
17:05:06 Anything vinod1?
17:05:10 should have some unit tests for pool manager to push this evening.
17:05:22 cool
17:05:38 no - no updates from me
17:06:04 Okay... So, while transitioning the PowerDNS backend, I ran into an issue around how pools config is handled
17:06:30 Some backends (e.g. PowerDNS ;)) make use of the configuration outside of the pool manager service - for example - in database migrations
17:07:16 Right now, the pool-manager service and pool backend are fairly linked together - and, I believe we'll need to split them apart a bit.
17:07:17 want me to move that functionality to central?
17:07:45 While working through trying to split them, I've run into 2 other issues with the way it works...
17:08:10 1) Because the config is dynamic, we're blocking ourselves from being able to use sample config generation, and config reloading
17:08:45 2) The structure seems like it might need to change, some backends will need to fan out changes (bind), while others need to keep in central (powerdns)
17:09:38 Kiall: not in central
17:09:56 ah - not the service
17:10:01 central as in centralized
17:10:07 ;)
17:10:09 1 DB, multiple servers.
17:10:20 vs bind - DB is files on disk and local to each server
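To make the shared-DB case concrete, here is a minimal sketch - assuming an illustrative module layout, group name and option names rather than Designate's actual code - of registering a PowerDNS-style backend's options at import time, so that both the pool manager service and the designate-manage CLI can read them without spinning up a pool manager:

    # Hypothetical sketch - module path, group name and options are illustrative.
    # (Kilo-era code imported this as "from oslo.config import cfg".)
    from oslo_config import cfg

    CONF = cfg.CONF
    GROUP = 'backend:powerdns'  # illustrative group name

    CONF.register_group(cfg.OptGroup(name=GROUP, title='PowerDNS backend'))
    CONF.register_opts([
        cfg.StrOpt('connection',
                   help='DB connection string needed by both the running '
                        'pool manager and designate-manage DB migrations'),
    ], group=GROUP)


    def get_connection():
        # Both the service and the CLI can call this without constructing
        # a pool manager instance first.
        return CONF[GROUP].connection

The point is only that the options live somewhere importable, rather than being built up inside the pool manager at service start-up.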
17:10:54 So - As part of trying to split the config from the pool manager service, I've been trying to make sure we don't block ourselves from doing the 2 items in #1, and only just realized the #2 issue before my last meeting.
17:11:11 (Is everyone following? Not 100% sure I'm explaining this properly!)
17:11:21 i'm not following.
17:11:49 Okay - let me take it 1 part at a time :)
17:11:50 what configuration data is needed for powerdns, for example, that won't be used in pool manager, but used elsewhere?
17:12:16 do you have an example of the problem?
17:12:27 So - for PowerDNS, we have to use the same configuration options from both the pool manager service, and the designate-manage CLI while performing DB migrations
17:12:58 okay, that makes sense.
17:13:00 The way the config is instantiated today would require the CLI to spin up pool manager in order to get the config
17:13:10 Okay - Next piece, is:
17:13:52 In order to not build this in a way that blocks sample config generation in the future, we need to be able to provide a method like this to oslo.config for option discovery:
17:14:14 https://github.com/openstack/designate/blob/master/designate/openstack/common/sslutils.py#L41
17:14:55 the list_opts() method needs to be aware of all the options, which is going to be difficult with the current config structure
17:15:24 Not a major thing, we just don't want to lock ourselves out of doing that since we're changing it anyway
17:15:28 If it needs to be used outside of the PM, right?
17:15:55 timsim: yes - an example from Heat on where they get used is here: https://github.com/openstack/heat/blob/master/setup.cfg#L35
17:16:23 oslo.config's generator will find+load those modules, run the method, and expect to get a full set of opts back after calling each of them
17:17:03 Ok, I guess there couldn't be designate-manage specific config options, for example? That doesn't really solve the bigger issue, but could you just duplicate config if you had to?
17:17:05 All clear so far?
17:17:17 yes.
17:17:22 yep
17:17:26 timsim: yea, it's just something to keep in mind while we make the other necessary changes anyway
17:17:30 For sure
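For anyone who hasn't followed the sslutils.py and Heat setup.cfg links above: oslo.config's sample generator loads the modules declared under an oslo.config.opts entry point and calls a list_opts() hook in each, expecting the full set of options back. A rough sketch of such a hook for pool backend options - the module name, entry point and options are illustrative, not existing Designate code:

    # Hypothetical designate/backend/opts.py
    # (Kilo-era code imported this as "from oslo.config import cfg".)
    from oslo_config import cfg

    _pool_backend_opts = [
        cfg.ListOpt('masters', default=[],
                    help='Illustrative option for a pool backend'),
        cfg.IntOpt('threshold_percentage', default=100,
                   help='Illustrative option: percentage of servers that must '
                        'report a change before it is marked ACTIVE'),
    ]


    def list_opts():
        # The sample config generator calls this and expects (group, opts)
        # pairs covering every option the code may consume - hard to provide
        # when the groups are created dynamically per pool server at runtime.
        return [('backend:bind9:*', _pool_backend_opts)]

    # Declared via setup.cfg, much like the Heat example linked above:
    #
    #   [entry_points]
    #   oslo.config.opts =
    #       designate.backend = designate.backend.opts:list_opts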
17:17:47 Next piece was, the potential need to restructure the layout of the config.. Specifically, we have different sections in the pools config today:
17:18:02 [backend:bind9:*] <-- global defaults for all bind9 servers in the pool
17:18:13 [backend:bind9:<server_id>] <-- server specific overrides
17:18:39 That works great for bind9 - where each bind9 server needs to be contacted for each change (add/remove zone etc)
17:19:13 vs with powerDNS - where you want to submit the add/remove once as it gets pushed into a shared DB
17:19:45 And - The [backend:bind9:<server_id>] sections are used to determine which DNS servers to poll for change status
17:19:53 (e.g. 90% of servers have a change, so mark it as ACTIVE)
17:20:15 End result is we have a conflict there between wanting to check all the servers, while pushing changes to a single place..
17:20:41 That makes sense, it seems like ideally there would be a single place to do *something* before reaching out to those servers
17:21:08 Not 100% sure if that's the extent of that issue, if it's exaggerated, or if there's more to it.. Only realized it shortly before my last meeting - which was back to back with this
17:21:20 (I actually thought of an issue in our use case that suffers from a similar problem)
17:21:29 we can add an option to the global defaults section to handle the single versus multiple work
17:21:33 it also plays into timsim's agent proposal - where you might push to 2x bind masters, and poll the real servers for publication status
17:22:08 rjrjr_: yea - My thinking is the set of servers to poll is a simple list of IP:PORT pairs in the main config
17:22:28 main config?
17:22:32 Followed by [backend:bind9:<server_id>] changing to [backend:bind9:<target_id>]
17:22:46 where the "target" is where changes get pushed to (probably needs a better name)
17:23:10 So for PowerDNS a database, for a Master-Slave bind9, the Masters?
17:23:27 rjrjr_: a "servers = 1.2.3.4:53, 4.3.2.1:53" option inside the [backend:bind9:*] section
17:23:43 (Also - Apologies about this being an unfiltered braindump ;))
17:23:56 timsim: yea, exactly.
17:24:11 kiall: no worries :) We appreciate your thoughts
17:24:49 for each target you would create a backend instance - for bind9, that's 1 for every server you want to `rndc addzone` against; for powerdns it's every DB you want to write changes to (which could be 1 DB cluster, or a standalone DB on every powerDNS server)
17:25:04 That makes sense to me. In most cases I would think that it isn't as simple as "apply changes to a and b, poll a and b"
17:25:58 timsim: Yes - Exactly, bind9 is "apply to serverA+serverB, poll serverA+serverB" - While PowerDNS is "apply to serverA, poll serverB+serverC"
17:26:04 And then for each target, do you specify a list of servers to poll?
17:26:24 Or is that global for the backend?
17:26:33 timsim: i would think the list of polling servers would be per backend
17:26:40 / pool manager
17:26:40 I *think* they're unrelated to each other .. the "servers to poll" is per pool
17:26:56 rjrjr_: am I making sense? :)
17:26:57 Fair enough.
17:27:21 yes, but i'm going to have to think about this.
17:27:39 i understand the problem. i think i understand what you are proposing as a solution.
17:27:48 There should probably be a writeup with some examples.
17:27:56 Yea, as I said, I discovered it about 1.5 hrs ago, and have been in meetings the last 1.2 hours ;)
17:28:08 but i'd like to give it some thought as there might be other more elegant solutions.
17:28:27 All good. Maybe we circle around next week when folks have time to think about it and hopefully review a doc?
17:28:27 But - As it's semi related to the config issues I've been trying to solve for PowerDNS, I figured I'd raise it now
17:28:50 we need to solve this in the next couple of days, IMHO.
17:29:00 rjrjr_ +1
17:29:08 The 18th is approaching fast.
17:29:12 rjrjr_: this specific part doesn't actually block porting PowerDNS... It's the config tied to PM piece that is
17:29:28 That piece I'll have a review up for in the next few hours
17:29:57 then PowerDNS can be ported, just "shared DB" won't work
17:30:30 Okay - Anyway, should we move on? 2 topics down, 3 to go, and halfway through our time
17:30:45 let's move on.
17:30:47 Sure
17:30:56 KK
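To help picture the proposal, here is one possible shape for a restructured pools config - a sketch only, with made-up section and option names rather than an agreed format - keeping the "servers to poll" list separate from the per-target sections that receive changes:

    # Hypothetical sketch, not an actual Designate config layout.
    [backend:bind9:*]
    # Pool-wide defaults, including the DNS servers polled for change status
    # (e.g. the "90% of servers have the change, mark it ACTIVE" check).
    servers = 1.2.3.4:53, 4.3.2.1:53

    # One section per "target" - the place changes get pushed to. For bind9
    # that is every server you `rndc addzone` against; a PowerDNS pool would
    # instead have a single target holding a DB connection string, while
    # still polling the servers listed above.
    [backend:bind9:target_1]
    rndc_host = 1.2.3.4

    [backend:bind9:target_2]
    rndc_host = 4.3.2.1

The point of the sketch is that the set of servers polled is decoupled from the set of targets written to, which covers both the bind9 ("apply to A+B, poll A+B") and PowerDNS ("apply to one DB, poll B+C") cases described above.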
17:31:01 #topic Icehouse Support Life (kiall)
17:31:37 So - zigo (from debian) has been asking if we can commit to supporting Icehouse Designate (security issues mainly) for the Debian Jessie release + ~3yr
17:31:49 If the answer is no, he has to pull it from debian before the Jessie release..
17:32:10 Otherwise, he leaves it in and we commit to providing fixes for any security issues etc found
17:32:20 Thoughts?
17:32:53 * Kiall loves the silence
17:32:54 3 years is a long time.
17:33:03 It seems like something we could do without too much effort to me? Unless something awful comes up.
17:33:06 I would say provisionally yes, but - people may need buy-in from management
17:33:10 And it’s v1 API not v2
17:33:11 that's my thinking too ;) But - Bearing in mind it's just security issues...
17:33:30 but - security issues that could be .... interesting to fix ;)
17:33:37 (any bugs are running in production in HP Cloud.. So.. It works ..)
17:33:37 especially if libs change etc
17:33:40 we'll be burdening future Designate developers with this. 8^)
17:33:47 ;)
17:34:04 The other projects have been posed the same Q BTW ;)
17:34:06 that said, the benefit of being in debian by default is huge
17:34:20 (for people using the software)
17:34:25 did all the other teams say yes?
17:34:28 If other projects are doing it, I feel like we should too.
17:34:32 rjrjr_: afaik yes
17:34:46 Seems like it would be a good thing to do
17:34:46 unless Kiall has other info
17:35:00 or is this like an interrogation of several witnesses? so and so said yes, so what do you say?
17:35:02 yeah. that is why I am edging towards yes
17:35:04 LOL
17:35:06 kinda..
17:35:18 mugsie: A lot of organizations have agreed to provide security support for Icehouse for at least 3 years: Red Hat, Canonical, IBM, Mirantis, etc.
17:35:27 2014-12-10T11:27:49 zigo: what are nova / neutron / cinder etc doing for support? Our releases are managed by the release team, who will tag Icehouse as EOL in May
17:35:27 2014-12-10T11:28:20 how are other openstack projects dealing with it? (or are they?)
17:35:27 2014-12-10T11:28:48 mugsie: A lot of organizations have agreed to provide security support for Icehouse for at least 3 years: Red Hat, Canonical, IBM, Mirantis, etc.
17:35:27 2014-12-10T11:29:03 mugsie: So if it doesn't happen upstream, it will happen on downstream distributions.
17:35:27 2014-12-10T11:29:12 mugsie: Though this doesn't include Designate, which is why I have a problem.
17:35:45 yeah, but it was never called out which projects ;)
17:35:55 i.e. projects get asked, and the answers have come in as "Yes" - Just it's not the project itself directly saying "yes"
17:35:56 Any other incubated projects?
17:36:01 organizations != projects
17:36:07 yup
17:36:09 I think we're the only incubated project packaged ;)
17:36:24 if we miss this release of debian - when is the next release that we can get into?
17:36:24 rjrjr_: yea, that's my "kinda" and why I pasted that particular quote :)
17:36:30 now, most of these orgs have their own distros, so will be providing support anyway
17:36:36 for the main projects
17:37:07 Maybe one of those orgs could come in and help us if work were required of us?
17:37:33 Anyway - Here's my thinking, I'm tempted to say "Yes" as an individual...
17:37:33 i don't have a strong opinion. so i'll go with whatever the team thinks. seems like it can get messy but we have no way of knowing.
17:37:55 Kiall: yeah - that is my thinking
17:38:19 does anyone have any strong objections?
17:38:29 No strong objection.
17:38:35 nope
17:38:39 Okay - So if others want to throw their name in, or not, that's OK..
17:39:01 would it be the Designate team doing this versus individuals?
17:39:02 Let me know after the meet or whenever etc etc
17:39:31 when an issue is reported - how much time do we have to fix the issue?
17:39:32 rjrjr_: a commitment needs to be made for it to stay, I don't believe it *needs* to be the project, or an org ..
17:39:47 keep in mind upstream infra will be killing py26 testing when juno is EOL'd upstream
17:39:55 not sure if that matters, but it may be useful info :)
17:39:56 might as well commit the team and our future children. 8^)
17:40:11 clarkb: yep, I know :)
17:40:21 clarkb: after Icehouse is EOL'd, upstream is lost to us anyway, isn't it?
17:40:38 vinod1: I don't think there's hard deadlines etc.. Security fixes get reported, embargoed, and fixed - except for when someone reports it on a blog or pastebin ;)
17:40:52 It seems logical that the team (whoever that may be) should fix those issues should they crop up.
17:41:03 mugsie: I think that if there was a group of people saying we are going to support this we wouldn't EOL it for that group. But I may be mistaken
17:41:21 mugsie: typical response to longer support periods has been "please show up and we will help you"
17:41:30 clarkb: ok... that might be something for us to look at as well Kiall
17:41:32 timsim: yep, and to a certain extent I think that will happen regardless of who commits, bearing in mind OpenStack as a group no longer supports it
17:41:36 clarkb: thanks
17:41:39 (so CI is gone etc)
17:41:53 Kiall: read scrollback ;)
17:42:08 mugsie: the reason we have strongly EOL'd things is no one works on them and we want to avoid the appearance that things are still supported
17:42:10 ;)
17:42:11 Heh
17:42:19 So I think we're agreed then?
17:42:45 clarkb: right, and this wouldn't be "actively maintained" - more just providing distros with the patches they need to slip into their packaging
17:43:07 i.e. I still think infra would EOL py26 testing, and EOL the branch
17:43:25 We can ask, and double check after this meeting though
17:43:29 and we'd work outside of infra when/if an issue comes in
17:43:38 definitely EOL py26 testing (it needs to go away - python EOL'd that version forever ago)
17:43:47 Kiall: gotcha
17:43:57 Anyway - Time is ticking on, 15 mins and I'd like to move onto Tim's topic.
17:44:02 thanks for the info clarkb :)
17:44:33 Any other Q's etc on this, let's sync up outside the meet. For now - I'll tell zigo that at a minimum, myself and mugsie can commit…
17:44:46 Going to skip the next topic and move to Tim's..
17:44:50 #topic Agent Spec/Code Discussion (Tim)
17:44:56 timsim: You're up
17:44:57 It seems like few people had time to review the spec (https://review.openstack.org/#/c/131530/3) so it seems like a bit of a push to get it in K1. If anyone can take a look at it soon that'd be great :)
17:45:01 I do have some WIP code for it, and if people aren't opposed to me putting it up I will (5 patchsets, 1k lines total :/). It is a bit premature with so little feedback on the spec. If there are issues with the spec, having the code out there seems semi-pointless. Thoughts?
17:45:30 I think we can safely say K-1 will not ship with support for the old backends
17:45:52 and we can target the agent for early in K-2
17:46:04 timsim: I've read it, and believe it's fine, the proposal we made last week about support for IPA style backends would be an add-on to this - just needing some minor allowances for it
17:46:19 not opposed to the code going up either timsim :)
17:46:21 timsim: I don’t see any problem with putting out the code even without the spec approved
17:46:35 agreed
17:46:38 agreed
17:46:43 mugsie: I think so, given the time, and the other bugs blocked on switching central 100% to pools, I agree we need to break in K1 and fix in K2
17:47:05 betsy / timsim: I think it's even better if you do, since it can give a clearer understanding in some ways :)
17:47:23 Alright, it's quite young.
17:47:32 And it might be called "slappy" everywhere :x
17:47:33 (As a general comment - Bearing in mind people may totally disagree and make you or someone else start over ;))
17:47:52 Yeah, fair enough.
17:48:13 So - Making this decision impacts IPA support, and InfoBlox support in K1 (they have a driver up for review)
17:48:33 I saw johnbelamaric join towards the beginning of the meet
17:48:38 yep, i am here
17:48:46 got here a bit late
17:49:21 I'm going to have to re-read the IPA stuff, I'm a little fuzzy on that.
17:49:22 Okay - So, I'm guessing you guys are against breaking in K1 and fixing in K2? :)
17:49:26 Kiall: i pushed an update addressing your comments
17:49:42 #link https://wiki.openstack.org/wiki/Kilo_Release_Schedule
17:49:46 ^ release schedule for K1 and K2
17:49:51 well, yes, but it's not urgent for us at this time, as long as it gets fixed in K2
17:50:03 because no customers are going to pick it up on the active branch
17:50:30 ++ The commitment would need to be to fix it in K2
17:50:44 As a team, how do we feel about being able to make that commitment?
17:50:54 I think we can (and need to) do it
17:51:06 By Feb 5, I think that's reasonable.
17:51:08 agreed
17:51:23 +1
17:51:44 let me know if our team can help - we don't have a lot of bandwidth but could help out some if needed
17:51:58 It's also worth noting for any future cases like this - If we were an integrated, rather than incubated project.. This decision wouldn't be up to us.. It would be a hard "no, you can't do that"
17:52:47 Maybe at that point, we would be allowed a feature-branch :P
17:52:50 johnbelamaric: excellent, I'm betting we'll need a little extra - especially around reliability fixes/testing the fix mid - late K2
17:53:05 timsim: check out the Neutron feature branches, and ask yourself if you want that rebase ;)
17:53:16 ok
17:53:46 timsim: https://github.com/openstack/neutron/compare/feature/lbaasv2...master
17:53:50 "This page is taking way too long to load."
17:53:57 That's how fun it's gonna be
17:54:05 Kiall: not necessarily
17:54:08 :P Hopefully ours wouldn't be that bad.
17:54:13 nah, we are better than that. 8^)
17:54:18 that's just bad management by the lbaas teams ;)
17:54:19 ;) Anyway
17:54:24 Anyway, do we want to circle back to your other item Kiall, or do open discussion?
17:54:39 So - Any objections to the break-then-fix plan before we move on?
17:54:52 Nope.
17:54:52 nope
17:55:00 nope
17:55:06 no
17:55:09 no
17:55:10 No
17:55:14 no
17:55:33 Okay - That's a full house of "no"s from attendees.
17:55:51 The skipped topic was:
17:55:53 #topic Periodic Sprints on Docs/Bugs/etc/etc (kiall)
17:56:41 timsim made a comment the other day that gave me an idea, we should organize a monthly or so half-day (so we fit TZs in) sprint on things like docs / bug fixing / bug triage / planning / various other topics etc etc
17:56:53 Personally I think this would be super cool.
17:56:58 yup - +2
17:57:07 And a good precedent to set as the project grows.
17:57:25 +2
17:57:29 +1
17:57:36 what is the venue? chat?
17:57:48 Yep, I think it may even help newcomers join the project too, if any turn up, they have the whole group on hand for that half day and some clear + set goals for the day :)
17:57:56 Google hangout?
17:57:59 rjrjr_: I was thinking we could hold it in Paris
17:58:04 (kidding ;))
17:58:06 i'm in.
17:58:07 Kiall +10000
17:58:10 Anyway - Other teams use IRC for it
17:58:12 kiall: +1 :)
17:58:21 2 mins -
17:58:44 Think about it, get mgmt buy-in if you like the idea, and come up with ideas for the sprints. Moving on .. 2 mins left ;)
17:58:47 #topic Open Discussion
17:58:52 Any other topics?
17:59:09 i am good
17:59:13 I'm good
17:59:23 i'm good too
17:59:28 just a statement, we demo'd the Horizon plugin on Monday. i'll have followup discussions about it later.
17:59:36 i'm good.
17:59:37 rjrjr_: cool :)
17:59:39 Nothing from me
17:59:46 Thanks all :)
17:59:46 question - will you have a chance to review the infoblox backend, or do we wait for K2?
17:59:47 rjrjr_: cool - just ping us with anything you guys have
18:00:04 johnbelamaric: we should review it anyway
18:00:08 thanks!
18:00:10 but might not merge it
18:00:13 ok
18:00:14 johnbelamaric: Good Q - Review Yes, merge, probably not straight away - concentrating on K1 right now
18:00:24 thanks, works for me.
18:00:39 Okay, thanks all.. Trove will start beating me if we don't get out ;')
18:00:43 #endmeeting