20:01:19 #startmeeting tc
20:01:20 Meeting started Tue Dec 13 20:01:19 2016 UTC and is due to finish in 60 minutes. The chair is ttx. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:01:21 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:01:23 The meeting name has been set to 'tc'
20:01:26 * edleafe hides in the back
20:01:29 Hi everyone!
20:01:31 o/
20:01:32 o/
20:01:33 Our agenda for today:
20:01:35 #link https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee
20:01:36 o/
20:01:41 (remember to use #info #idea and #link liberally to make for a more readable summary)
20:01:48 dolphm: around ?
20:02:14 * ttx starts with the second topic to give time for dolphm to maybe join us
20:02:20 #topic Some early contribution stats for Ocata (ttx)
20:02:30 I ran some stats last week to look into early Ocata cycle contributions
20:02:40 to try to see if we had projects that were very affected by recent changes at various companies
20:02:53 It's still pretty early (not much data), but there are early signs
20:03:01 * stevemar is eager to hear about this data
20:03:04 ttx: yes
20:03:06 Figured I should share
20:03:13 I compared changes merged during the first 5 weeks of Mitaka (including Thanksgiving) with the first 5 weeks of Ocata
20:03:25 to have comparable timeframes in terms of holidays and such
20:03:31 And tried to weigh the results against Mitaka -> Newton trends, to isolate the Ocata data
20:03:41 I only looked up projects that are used in 10%+ of installs (according to the user survey)
20:03:51 Most affected project is clearly Designate, with a -63% drop between the first weeks of Ocata and the first weeks of Mitaka
20:03:58 * Rockyg slides in next to edleafe and waves hi
20:03:59 (while activity in Newton had increased +35% compared to Mitaka !)
20:04:16 Other Ocata visibly-affected projects included:
20:04:19 * smcginnis notes the goals of the shorter cycle may also have some impact there.
20:04:33 smcginnis: yes, that is a good caveat indeed
20:04:35 Nova (-6% while activity in Newton was +30% compared to Mitaka)
20:04:40 Cinder (-25% while activity in Newton was +7% compared to Mitaka)
20:04:45 * edleafe waves back
20:04:47 Rally (-48% while Newton was only -9% compared to Mitaka)
20:04:54 Keystone (-31% while activity in Newton was only -3% compared to Mitaka)
20:05:01 Those ^ mostly due to attrition in changes proposed, not really to core bandwidth
20:05:13 so might point to what smcginnis just said
20:05:17 Sahara (-44% while Newton was only -7% compared to Mitaka)
20:05:23 Infrastructure (-15% while activity in Newton was +20% compared to Mitaka)
20:05:27 Telemetry (-41% while Newton was down -12% compared to Mitaka)
20:05:31 Those ^ mostly due to attrition in core reviews, since changes were still proposed
20:05:43 Some other projects have a ~50% drop but the reduced activity started in Newton, so not exclusively an Ocata artifact:
20:05:48 Glance (-58% but Newton was already down -32%)
20:05:53 Heat (-52% but Newton was already -20%)
20:05:58 Docs (-48% but Newton was already -25%)
20:06:07 Other projects are doing relatively well, or are even in good shape (Oslo, Manila)
20:06:15 the docs one seems a massive concern to me
20:06:26 johnthetubaguy: probably part of a larger trend
20:06:27 johnthetubaguy: well, glance too, tbh
20:06:32 johnthetubaguy: they all seem like massive concerns to me :)
20:06:37 I'll refresh those stats once we have more data, and keep you posted.
20:06:38 ++ stevemar
20:06:42 ttx: did you count nova and nova-specs together? Or just nova?
20:06:43 stevemar ++
20:06:45 the core team keeps shrinking and so do contributions
20:06:49 edleafe: just nova
20:06:50 stevemar: my point exactly
20:06:58 ttx: do you have the latest numbers on core reviewer team attrition?
20:07:07 ttx: so I forgot the doc split out... ignore me a little
20:07:25 dhellmann: I just have our notes
20:07:35 ok, those should be up to date afaik
20:08:06 raw data at https://etherpad.openstack.org/p/iFoG829Xig for those playing at home
20:08:32 o/
20:08:46 it would be interesting to match up the review stats changes with the team stats changes to see if there's any correlation
20:08:56 as flaper87 said, we keep seeing folks step down from core teams
20:08:59 dhellmann: I tried, it's not as clear-cut as I thought it could be
20:09:05 johnthetubaguy: ttx: on the docs one, some parts of the docs are moved back to project repos, which can lead to changing numbers regarding the OS Manuals activities too
20:09:09 though I've also noticed a few teams enrolling new members
20:09:17 ildikov: yes, good point
20:09:24 on a positive side, a lot of projects still have good progress
20:09:47 ildikov: yeah, that's what I was attempting to say above after remembering it, it's trickier/better than it looks
20:10:03 it would also be interesting to see if the individual contributors who are doing less work in project X are doing more work in project Y, so focus is changing (versus leaving entirely)
20:10:23 johnthetubaguy: yeap, more complex to follow that this way
20:10:23 The only project that is seriously in danger is Designate imho, so maybe we could communicate a bit more about it
20:10:26 that would be more complicated to produce, though
20:11:06 * flaper87 will look at the data in more detail
20:11:21 * dims spots the drop in "Fuel"
20:11:27 ttx: I think some of those longer term trends show projects in trouble, too
20:11:27 Once we reach the holidays I'll refresh the data
20:11:35 dhellmann: yes
20:11:45 just not as suddenly
20:11:50 that should give us a better sample
20:12:05 Anyway, I thought you would appreciate a heads-up
20:12:17 Next topic ?
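[Editor's note: the cycle-over-cycle comparison ttx walks through above (changes merged in the first 5 weeks of each cycle, expressed as a percentage delta) can be sketched as follows. The merged-change counts below are invented for illustration; the real analysis would pull counts from Gerrit, per the etherpad linked above.]

```python
# Sketch of the cycle-over-cycle comparison described in the log: given counts
# of changes merged during the first five weeks of two release cycles, compute
# the percentage delta. Sample counts are hypothetical, not the real data.

def pct_change(old: int, new: int) -> int:
    """Percentage change from old to new, rounded to the nearest integer."""
    return round((new - old) / old * 100)

# Hypothetical merged-change counts for the first 5 weeks of each cycle.
merged = {
    "designate": {"mitaka": 120, "ocata": 44},
    "nova": {"mitaka": 800, "ocata": 752},
}

for project, counts in merged.items():
    delta = pct_change(counts["mitaka"], counts["ocata"])
    print(f"{project}: {delta:+d}% (Ocata vs. Mitaka)")
```

With these placeholder counts the deltas come out to -63% and -6%, matching the shape of the figures quoted in the meeting.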
20:12:23 yeah
20:12:23 thanks ttx dhellmann this is definitely helpful
20:12:26 ttx: thanks for the data
20:12:29 besides recent layoffs, do we know other reasons for such changes?
20:12:38 pretty much layoffs
20:12:45 I've also seen people moving their focus to something else
20:12:53 EmilienM: for the Ocata oddities, definitely the "staffing changes"
20:13:12 from infra's perspective, it's almost all employment-related challenges
20:13:16 "staffing changes" is probably more accurate
20:13:21 for the larger trends, disaffection for boring infrastructure plays a bit
20:13:33 yes, it's not just folks losing jobs, some companies are keeping their engineers, but putting them on other projects
20:13:45 right dhellmann
20:13:47 ++
20:13:50 ok, back to our schedule
20:13:51 so we are certainly "less cool" in the hype curve sense
20:13:55 #topic Do not allow data plane downtime during upgrades
20:14:03 I think it was renames since
20:14:09 #link https://review.openstack.org/404361
20:14:11 renamed*
20:14:19 dolphm: o/
20:14:30 I think the recent split version is technically fine...
20:14:39 I'm just struggling to understand *what* that level of granularity really brings us
20:14:41 yes, thanks for doing that
20:14:45 like the motivation behind adding it
20:14:52 o/
20:14:56 Could you give an example of a deployer question that tag would answer ?
20:15:04 Or which project we expect to strive to reach that specific level (rather than directly reach for zero-downtime or zero-impact upgrades)
20:15:30 i see them as parallel paths for projects to pursue
20:15:44 so, assuming your project has basic upgrade capabilities...
20:15:55 ah, classify them in terms of backend effects rather than frontend effects ?
20:16:42 the next two steps you can pursue in parallel are either rolling upgrades (intermixing service versions) OR testing to ensure that your "controlled resources" (VMs, networks, data storage, etc) do not become unavailable at any point during the upgrade process
20:17:04 ok, got it
20:17:18 i like the step-by-step approach dolphm
20:17:19 other questions ?
20:17:29 I got a little confused in the discussion of neutron issues
20:17:40 * flaper87 had the same question as ttx and it's now answered
20:17:41 it sounds like some backends are always going to be disruptive during upgrades?
20:17:42 except, we already test controlled resources work when the control plane is offline
20:17:55 if that's the case, would neutron ever be able to have this tag?
20:18:26 dhellmann: neutron or not, there are likely pluggable choices in every project that will cause you to sacrifice certain features of the upgrade process (feature support matrix, anyone?)
20:18:34 dhellmann: I believe that's backend specific
20:18:42 dhellmann: I guess the question is whether that's a product of neutron architecture, or backends
20:18:52 ovs has specific issues when restarting iirc
20:18:58 mordred : right, but the tag only applies to a deliverable, with no metadata to indicate that it may not always work as advertised
20:19:06 one could argue that dropping packets is a thing healthy networks do :)
20:19:10 dhellmann: that's an _excellent_ point
20:19:13 and it applies to type:service
20:19:24 dhellmann: my thinking is that if there is *a* well tested, ideally default configuration upstream that can satisfy all these requirements, then the project deserves the tag
20:19:26 yes, I don't think we guarantee no packet loss even in normal operation without an upgrade in progress, do we?
20:19:46 yeah, the tag used to be "there is *a* way and it's clear what that is"
20:19:46 dolphm : ok, I haven't had a chance to review the latest draft, is this issue covered in the text?
20:20:01 also depends a lot on how far into the controlled infrastructure those guarantees extend
20:20:03 maybe by saying that any caveats have to be documented?
20:20:14 dhellmann: the notion of only requiring a single happy path? no
20:20:36 dolphm : "at least one happy path"
20:20:42 I do have concerns about splitting this out - https://review.openstack.org/#/c/404361/3/reference/tags/assert_supports-accessible-upgrade.rst - because we already do that in default testing, unless I'm misunderstanding
20:21:00 dhellmann: (no, but) i think the notion of a happy path applies to all the upgrade tags, not just this one
20:21:09 sdague: certainly plan A was to keep it together, but there were issues or "redefining" the existing tag
20:21:18 http://docs.openstack.org/developer/grenade/readme.html#basic-flow
20:21:23 s/or/of/
20:21:31 while i'm not a fan of the "control plane" and "data plane" terminology, it does seem a bit out of place to make data plane guarantees about projects which are mostly control plane
20:21:39 dolphm : ok, that's fair. Maybe we can get that clarified in an update
20:22:03 johnthetubaguy: it's not really redefining, it is being explicit about a thing that was an implicit piece of this
20:22:07 fungi: perhaps projects need to be able to apply for a tag with a "not applicable to me" status?
20:22:09 sdague: there may be a gap between "verify resources are still working after upgrade" and "verify resources were not changed in any way after upgrade"
20:22:30 and "verify resources are still working *during* the upgrade" which is what this says, right?
20:22:38 ttx: there may be, but the intent was that was the check we've been running for 4 years
20:22:43 dolphm: hrm... maybe i guess
20:22:43 maybe 3 years
20:23:17 sdague: do all projects check that ?
20:24:01 ttx: not afaik
20:24:09 ttx: right now, I doubt it
20:24:14 There are two dimensions: availability and integrity
20:24:16 but the intent is there
20:24:31 We test availability and that is what dolphm baked into the base tag
20:24:34 intent would be sufficient if we didn't already have a bunch of projects claiming the existing tag
20:24:39 The other tag requires integrity
20:24:55 (i.e. the resource has not been altered)
20:25:13 but fungi has a point
20:25:31 it bleeds a bit into data-side implementation
20:25:34 fungi: so the reason that we need to make those guarantees, is that they are easy to screw up
20:25:53 it's sort of like having a tag that says nova upgrades won't cause it to tell libvirt to reboot your instances, i guess?
20:26:04 fungi, or delete them
20:26:07 yah
20:26:09 fungi: yup
20:26:10 or pause them
20:26:15 which it did back in essex at times
20:26:32 (Timeboxing this to 5 more minutes, since it feels like it could use a bit more iterations on the review)
20:26:43 because replumbing in existing stuff and not modifying it needs to be front and center
20:27:09 so this ends up being more about making sure that control plane services don't tell data plane services to take destructive actions during a control plane upgrade
20:27:32 right, that consumer workloads in your cloud will be fine as you upgrade your openstack infrastructure around them
20:27:33 fungi: that's my understanding
20:27:44 fungi: sdague: mordred: ++
20:27:54 whereas we can't make general guarantees about those data plane services themselves
20:28:02 "end user created resources still continue to function during and after upgrade"?
20:28:04 since we don't produce them
20:28:19 which gives you a lot more confidence that you can upgrade your cloud without destroying all your users
20:28:58 so it's not "libvirt won't reboot your instances during a nova upgrade" and more "nova won't tell libvirt to reboot your instances during a nova upgrade"
20:28:59 Although ideally user services cannot identify an upgrade is happening, reducing the chance a crash there is upgrade-related.
20:29:16 overall, the spirit of this whole effort is basically "i should be able to upgrade openstack continuously without impacting my customers / workloads / etc"
20:29:21 fungi: that's my take
20:29:23 fungi : right
20:29:25 right dolphm
20:29:27 dolphm: yeh, definitely
20:29:29 ok, looks like we can iterate on the review and converge to something around "no destructive actions"
20:29:34 dolphm: +1
20:29:54 dolphm: Maybe iterate on the review and come back next week ?
20:29:55 ttx: ++
20:30:16 I just want to make sure that we don't make our taxonomy of that so complex, given that "upgrade that destroys my workloads" isn't really worth even talking about
20:30:21 ok, thanks, I understand it a lot better now
20:30:41 sdague: +1 that's totally a worry
20:30:49 that's my basic objection, I feel like it makes the upgrade tag pointless because it doesn't give the fundamental table stakes we should expect
20:31:21 might be worth the cost of dropping the tag from everyone and making them re-apply
20:31:30 sdague: i feel that's the slippery slope i'm on with this new tag, especially because it's no longer a linear series of milestones (at least, not necessarily)
20:31:31 and I'd rather clarify that the table stakes are real, and prune some projects from it if we have to
20:31:50 then you fall in the "redefine existing tag" rathole
20:31:57 ttx: sure
20:31:58 i could get behind merging this expectation into the normal upgrade tag
20:32:07 but since when are all tags idempotent?
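[Editor's note: the distinction being argued here — checking that a resource works before and after the upgrade versus continuously *while* it runs — can be sketched as below. `upgrade()` and `resource_is_reachable()` are hypothetical stand-ins for a control-plane upgrade and a data-plane probe; this is not grenade code.]

```python
# Sketch of "verify resources still work *during* the upgrade", as opposed to
# only checking before and after. A background thread polls the resource while
# the (simulated) control-plane upgrade runs, recording any outage it sees.
# upgrade() and resource_is_reachable() are hypothetical stand-ins.
import threading
import time

def resource_is_reachable() -> bool:
    # Stand-in for e.g. pinging a VM or fetching an object from storage.
    return True

def poll_during(stop: threading.Event, failures: list, interval: float = 0.01):
    """Poll the resource until told to stop, recording any outage timestamps."""
    while not stop.is_set():
        if not resource_is_reachable():
            failures.append(time.monotonic())
        time.sleep(interval)

def upgrade():
    # Stand-in for the control-plane upgrade (stop services, migrate the DB,
    # start the new code). The data plane should stay up throughout.
    time.sleep(0.1)

stop = threading.Event()
failures: list = []
poller = threading.Thread(target=poll_during, args=(stop, failures))
poller.start()
upgrade()       # the poller keeps probing the whole time this runs
stop.set()
poller.join()
assert not failures, f"resource unreachable {len(failures)} times during upgrade"
```

A before/after-only check would simply call `resource_is_reachable()` once on each side of `upgrade()`, which is exactly the gap mugsie and sdague are debating.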
20:32:15 but yeah, let's continue that discussion on the review
20:32:15 "expectation we failed to call out explicitly"
20:32:33 Having an idea of which projects would be dropped would help
20:32:36 fungi: yep, we made an oversight, lets fix it
20:32:41 well, all of them
20:32:48 it's not that all tags are immutable, it's that this appeared to add a large new expectation to an existing tag without documentation that all of the projects using the tag met the requirement
20:32:58 dhellmann: ++
20:33:00 as none of the test that everything works *during* the upgrade
20:33:04 dhellmann: ++
20:33:11 none of them*
20:33:18 mugsie: it's testing, nothing tests everything
20:33:33 testing is about representative use cases and verifying them
20:33:47 see: halting problem :)
20:33:48 nova's grenade test does not test a vm is accessible *during* the upgrade phase
20:33:52 we could set them up as separate tags with the goal of removing the simpler version once it's all obsoleted
20:33:56 mugsie: yes it does
20:33:57 having some tests before making this change might still be in order
20:34:00 during?
20:34:10 mugsie: define during
20:34:14 sdague : while the new version of nova is being installed and started
20:34:23 including any database changes
20:34:25 while the nova services have the code replaced and restarted
20:34:48 dhellmann: so... I don't know that you can build infrastructure that guarantees that because you are racing
20:35:06 ok, I'll have to cut this one short and ask to continue on the review. This is clearly not ready for immediate merging anyway
20:35:06 sdague : which may not be a big deal for nova, but it appears to be *the* case where neutron would fail to meet these requirements
20:35:08 we test pre shutdown, during shutdown, post upgrade
20:35:24 dhellmann: yeh... the neutron ovs issue is one that would need thought
20:35:26 ttx: ack, let's move on
20:35:37 part of it is going to require architecture inspection + testing
20:35:48 #topic Driver teams: remaining options
20:35:50 to know if the tests are an accurate reflection of reality
20:36:06 stevemar distilled the discussion from last week down to 4 remaining options:
20:36:10 #link https://review.openstack.org/403826 (fallback)
20:36:14 #link https://review.openstack.org/403829 (grey, amended)
20:36:16 #link https://review.openstack.org/403836 (soft black)
20:36:19 #link https://review.openstack.org/403839 (soft white)
20:36:27 I did a pass at optimizing "grey" to try to address fungi's concerns with it
20:36:32 (really 3 options, fallback is just that)
20:36:37 (basically reducing the risk of it being abused as a registry of drivers where you want to place brands)
20:36:47 Not sure what's the next step
20:36:48 yeah, i'm +1 on it now. while still not my preference, i can see it working out
20:36:49 I could quickly set up a Condorcet to help us order those
20:37:00 if you feel that's useful
20:37:31 mmh
20:37:35 or just vote on them
20:37:40 what about we all vote for our preference first ?
20:37:57 and eventually vote for the second prefered option
20:37:58 yeah, let's try that -- setting up that condorcet with all your email addresses is not fun
20:37:59 might be worth seeing what neutron/cinder/nova folks feel, or the folks working on the drivers
20:38:01 preferred*
20:38:14 flaper87: yes
20:38:14 stevemar: I'm not saying approving it
20:38:20 of course not
20:38:22 I'm saying having a good candidate
20:38:26 o/
20:38:26 stevemar: can you encourage them to respond to the ml thread or jump in here?
20:38:40 stevemar: especially neutron folks, as this is driven a bit from that community
20:38:41 fungi: smcginnis chimed in last week
20:38:44 the point of bringing it to the ml first is so they could weigh in more easily
20:39:14 yep, smcginnis did. but in general feedback on the ml was pretty minimal
20:39:27 most responses were from tc members :/
20:39:33 just re-iterating the fact that if we're coming up with a solution then we should include the people it'll affect. i can certainly try to encourage them
20:39:41 I think it makes sense for the tc to vet the options here, then have the teams chime in on a ML thread.
20:39:54 Once things are narrowed down a little, that might help focus the discussion.
20:39:59 ttx : can we check if anyone still wants to pursue the 2 soft reviews? if not we can prune them
20:39:59 the grey option seems to have 7 votes already
20:40:01 smcginnis: we already did a first pass, there are 3 options now-ish
20:40:05 so from a Nova view, it says using established plugin interfaces, Nova doesn't have one for drivers, which keeps things as they are today (in a good way)
20:40:06 grey has a majority of +1 now, so we could refine it over the week and push it back to the thread for further discussion ?
20:40:18 ttx: was writing just that
20:40:32 maybe if we go forward with a non-binding vote and ping the ml thread with a "this is what the tc is leaning toward" update...
20:40:37 fungi: yes
20:40:40 stevemar: Yep. Just thinking it might be good to start a thread saying the TC sees options x or y as possible, we'd like team input. Or something like that.
20:40:41 ttx : ++
20:40:46 ttx: sooooo ... combining our thinking about this topic with the previous topic ...
20:40:55 fungi: ttx ++
20:41:04 ttx: yeah, it's probably easier to present one choice instead of three
20:41:04 re: upgrades ?
20:41:05 I could see a point in the future where we might want to be able to give a rolling-upgrade tag to drivers
20:41:07 yah
20:41:30 like "nova has no-downtime upgrades as a project, and libvirt and xen also do as drivers, but nova-docker doesn't" _for_instance_
20:41:31 mordred: the trick is in-tree drivers would all have to pass
20:42:00 (a worthwhile goal, but maybe would hinder in-tree drivers a bit)
20:42:02 ttx: no clue - this is purely an in-the-future thought - but I'd think we'd want to be able to enumerate and tag them
20:42:03 yeah, tags are per-deliverable or per-team
20:42:09 yup
20:42:14 we should not block anything on this thought
20:42:20 just a thought I had for future mulling
20:42:21 we'd need to adjust the tag data model further
20:42:24 OK, I'll take that one back to the ML
20:42:36 because it might take a while to figure out :)
20:42:38 I think that if reality is complicated, and we need to break out descriptions of what things work with what drivers, that's fine. We need to not be too stuck on existing boundaries of tags.
20:42:59 yeah, I'd rather we just use lots of words to say things where binary flags fall short
20:43:02 #action ttx to push "amended grey" back to the ML for final discussion before approval
20:43:31 moving on
20:43:35 there's a bit of feedback from JayF about the API restriction causing issues for ironic, too
20:43:45 tags, like anything else, are high level approximations of reality. They are fine as long as you realize they are low fidelity approximations, and dangerous the moment you believe your frictionless spherical elephants are real :)
20:43:52 #topic Relaxing IRC meeting policy
20:43:59 There was a thread about IRC meeting rooms, which concluded with:
20:44:03 #link http://lists.openstack.org/pipermail/openstack-dev/2016-December/108604.html
20:44:11 Some of the proposed actions may increase the silo-ing in some teams, so I wanted to run those past you first
20:44:21 First action is to remove artificial scarcity by creating more meeting rooms, which will result in more concentration of meetings.
20:44:33 Pretty much everyone agreed that's a necessary evil now, so we went ahead and created #openstack-meeting-5 last week
20:44:42 Second action is to remove the requirement of holding team meetings in common channels
20:44:48 (and allow team meetings in team channels)
20:44:57 This one is slightly more questionable, as it reduces external exposure and reinforces silos
20:45:06 So I wanted to check that it was OK with you before proceeding (as you were mostly silent on that thread)
20:45:18 I'm not really a fan of that second thing
20:45:30 for the reasons you mentioned
20:45:44 (alternatively, we could try the first part and wait a bit before doing anything on the second part)
20:45:44 I also don't idle in all the project channels :)
20:45:55 I mean ... I'm also not a fan of the second thing - but the Big Tent has brought us enough teams that have effectively no interaction that forcing folks into arbitrary meeting rooms also seems weird
20:46:05 mordred: yeh
20:46:09 given timezone differences, and everything else, as long as it's logged and open, we have got the main things
20:46:09 mtreinish : those of us working across a bunch of things don't like it BUT those who exclusively live in a couple of channels would love it
20:46:12 however, I do get pinged in random meetings sometimes
20:46:17 mordred: +1
20:46:18 mtreinish: it causes a whole lot of scheduling pain for a "regular" meeting channel
20:46:27 I'm pinged in random meeting rooms about twice a day
20:46:45 at least that many for me as well
20:46:46 I also am not sure that expecting people to idle everywhere is the right expectation anyway
20:46:49 people who want to join a meeting just /join the channel and that's it? how is it a problem and how does it create silos? we should let people work together and avoid private meetings, that's all
20:46:52 I think if you are having the meeting in a project channel you simply will not be able to assume random folk will see a ping
20:46:52 ttx: mordred wouldn't that mean you get pinged in project channel rooms instead ?
20:46:53 I'm probably pinged once a week or so; sometimes more
20:46:57 it feels people will miss my awesome insights by hiding in channels I don't lurk in :)
20:47:03 dims: right, that's the reinforcing silos thing
20:47:14 ttx: lurk in more channels ;)
20:47:17 stevemar: but would miss them
20:47:21 The only weirdness about holding a meeting in a project channel is that it makes for a very strange experience should a first time user drop into IRC mid-meeting.
20:47:21 sdague : ++
20:47:26 sdague: yeah, if nothing else, it feels very timezone silly to me
20:47:32 JayF : true
20:47:33 stevemar: difficult to keep track
20:47:44 i also get pinged many times a day in random project channels too, yes, but only high-profile ones i happen to idle in (probably others too i just don't see them since i'm not there)
20:47:45 JayF : yes, that's a concern I had, too
20:47:47 I think new teams or teams applying for big tent should be in a meeting channel until they have some momentum
20:47:57 dtroyer, dhellmann: imho, if you miss a ping, that's not a big deal. Most of our problems can be solved async. versus during an irc meeting
20:48:02 If someone needs to jump out of a meeting to ping someone in that person's specific project channel asking them to join - I don't see that as any kind of major burden.
20:48:16 Rockyg: i was thinking the opposite :)
20:48:22 EmilienM : yeah, I'll start expecting more email on the -dev list :-)
20:48:22 smcginnis: feels almost like a PTG
20:48:27 :)
20:48:41 going back to the time when most everything happened in #openstack-dev, #openstack-meeting provided a quiet haven to host an irc meeting without people popping in interrupting with off-topic randomness
20:48:42 honestly, it wouldn't be bad if that meant the expectation wasn't that you were in 4 meeting channels, but instead you were in #openstack-dev when active, and that was the common ping bus
20:49:00 sdague: +1
20:49:02 sdague: +1
20:49:05 agree sdague
20:49:07 sdague, +1
20:49:09 sdague: good point
20:49:11 maybe we could relax one and reinforce the other
20:49:15 TC should help projects to work together. If teams want to use their own channels, go ahead. I don't think we should make a rule for that
20:49:29 EmilienM: it already is a rule
20:49:31 I'd be ok with projects having meetings in their own channels if possible
20:49:33 EmilienM: not a rule, just allowing it (currently irc-meetings prevents it)
20:49:35 I also kind of wonder if part of the issue is the awkwardness of drilling into meeting archived content
20:49:40 and then for random pings just use -dev
20:50:06 ++ flaper87
20:50:32 i'd strongly discourage teams having meetings in their own channels, but expect that many of them who think it's a cool idea at first will switch to wanting to have them in a separate channel from their general discussion channels eventually
20:50:34 ok, I think we can proceed with relaxing the gate check at least and altering MUST -> SHOULD in some literature
20:50:39 The reason I'd like new teams to be in a meeting channel is that they tend to be more single-vendor. I've seen good things happen when a veteran openstacker jumps into the middle of one of those meetings because of trigger words
20:50:40 (I'm personally not in love with IRC meetings, especially in distributed teams. I think most of our problems can be solved by email or gerrit)
20:50:42 sdague++
20:50:49 irc logs are not great
20:51:05 and i don't want to see us adding #openstack-nova-meeting and #openstack-cinder-meeting and so on
20:51:08 Rockyg : you mean why is this random person messing with us? :)
20:51:26 Uh, yeah, that's it, dims
20:51:32 Anyway, if you care one way or another, please chime in on the thread, otherwise i'll probably go ahead and implement it
20:51:34 EmilienM: yup
20:51:48 We need to move to Open discussion, a few things to cover there
20:52:02 #topic Open Discussion
20:52:05 Rockyg: ++
20:52:12 1/ Joint BoD+TC meeting around PTG
20:52:19 The Board would like to have a Board+TC meeting around the PTG
20:52:24 They propose to hold the joint meeting on Friday afternoon
20:52:29 Since most teams will run for 2 or 2.5 days anyway, I think that's doable
20:52:36 We could push back and ask to hold it on Sunday afternoon instead, but I'm not a big fan of 6-day work streaks
20:52:48 Also it's not really an issue if some TC members end up being busy elsewhere
20:52:56 not as if all board members would be present
20:53:09 obviously this impacts attendees who panned to work on vertical team stuff for the full 3 days
20:53:13 opinions / strong feelings ?
20:53:13 Sunday is usually the day spent on travels :-/
20:53:18 er, planned
20:53:20 afternoon means one more hotel day for me, I guess
20:53:28 fungi: right
20:53:29 EmilienM: I was thinking that too
20:53:31 Given that choice, I'd much prefer Friday
20:53:33 johnthetubaguy: sunday too probably
20:53:43 was waiting for a decision on this to book flights...
20:53:48 fungi: not so many teams plan to do the full 3 days
20:53:53 if we're going to have one of these co-located friday afternoon, it would be great if it was treated as important on the board side - it often feels like we lose people for the TC section of board meetings
20:53:56 it also means most people would need to travel on Saturday.
20:53:56 good to know
20:53:59 ttx: well, the nova team does
20:54:00 ttx: true
20:54:06 and we
20:54:06 but yes, some will and I guess it's fine to prioritize that over the joint BoD/TC
20:54:11 ++ mordred
20:54:25 we've got a few nova core members that are in this pool :)
20:54:27 mordred: I can make that clear
20:54:31 mordred: in this case instead of leaving mid-day and skipping the joint meeting, those people will just not come there at all i guess?
20:54:36 if we're having it at the PTG, I guess I'd prefer it on Friday
20:54:43 +1 for Friday
20:54:46 this is the first PTG and I'd like to first see how it pans out
20:54:48 i'm okay with sunday afternoon only because it's easy to get to ATL; but friday works too
20:54:49 I'm going to be tired and would prefer to go home, but will show up at the meeting. I'll be annoyed if I show up at the meeting and it's a ghost town. nobody wants a grumpy monty in a room
20:54:50 instead of adding more days to it
20:54:53 or is the board planning to have a board meeting before the joint meeting again too?
20:55:04 fungi: they will yes
20:55:13 mordred: right, it does feel like a Friday afternoon is going to be sparse
20:55:20 (can't we make it during an evening in a piano bar?) :-)
20:55:27 sdague: they come to town exclusively for that day though
20:55:28 ++ EmilienM
20:55:30 from what I saw, most of the board wasn't planning to come to the ptg for any other reason
20:55:33 EmilienM: with some rum
20:55:34 oh, so friday attendees get to choose between vertical team ptg stuff and board meeting as well
20:55:53 sdague: so I don't think they would come to ATL just for the morning
20:56:03 but who knows
20:56:15 OK, I'll communicate that back. It's not mandatory anyway
20:56:16 so, personally either I guess is fine. The flights from here are pretty direct. As long as we nail it down soon
20:56:19 fungi: you, stevemar and me are PTLs, it would be hard for us to make a choice :-/
20:56:30 I'm slightly more in favor of Sunday, but I understand the objections to the long week that would cause.
20:56:47 We can sync before so that if you are stuck in a room, your views are represented
20:56:48 dhellmann: yeh, I would say I would lean Sunday
20:56:52 ttx: I know it's not mandatory but if it'll happen, it kinda feels that way
20:56:54 EmilienM: at least my team's assigned ptg days are monday/tuesday (yours as well?) harder for stevemar
20:56:55 if you know what I mean
20:56:56 +1 on knowing soon, I expect most folks are booking pre new year, I believe I am meant to
20:56:58 dhellmann: I'm leaning the same way
20:57:04 I feel it's part of my job as a TC member to attend
20:57:08 and I want to be there
20:57:10 what about Monday?
20:57:18 its a cross project thing?
20:57:20 monday is bad for horizontal team things
20:57:26 fungi: it's just a bit of time anyway, i can break away from keystone-y stuff for a bit
20:57:29 johnthetubaguy: the horizontal stuff only has 2 days, can't burn one
20:57:34 mordred: true
20:57:40 EmilienM: though ptg is also after ptl elections, so maybe none of us will be ptls by then ;)
20:57:41 johnthetubaguy : a different set of TC members wouldn't be able to attend in that case :-)
20:57:42 ok, I'll start a -tc thread on that
20:57:42 fungi: we are vertical
20:57:44 2/ Progress on Amend reference/PTI with supported distros (https://review.openstack.org/402940)
20:57:48 EmilienM wanted to unblock this review
20:57:51 fungi: that's the hope! :P
20:57:52 fungi: who knows? :-)
20:57:57 ttx: I guess the second question is: is this a normal 3 hour cross section? Or are we talking about the whole day?
20:58:01 stevemar: talk for you :P
20:58:02 #action ttx to start a thread on the -tc list to see what is most popular
20:58:12 EmilienM: oh, for some reason i thought the deployment teams ended up on monday/tuesday as well
20:58:13 sdague : half day
20:58:13 :)
20:58:18 2-5 on Friday is one thing, all day friday is different
20:58:18 * dims will be missing kid2's bday 2 years in a row
20:58:19 sdague: Friday afternoon
20:58:23 ok
20:58:24 fungi: it was and it changed
20:58:24 or Sunday afternoon
20:58:34 ttx: thx for bringing it up again
20:58:35 EmilienM: i should pay closer attention
20:58:47 ttx: I see zero blockers now for this change to be accepted
20:58:48 my main concern is that I'm looking forward to the PTG being super productive - and I don't want to fall into the trap of turning it into a second summit by cramming additional things in if we can help it
20:59:03 mordred: yeh...
20:59:09 mordred: yes, not looking forward to another 6-day thing
20:59:12 we did try to make this different
20:59:17 which is why I lean towards Friday
20:59:18 fungi: no worries, we changed it very recently. Main reason: tripleo is not horizontal and we need to attend horizontal sessions (infra, etc)
20:59:20 ttx: or a 5-day thing of being double booked
20:59:37 ttx: do we have the option of just saying "no, we'll be busy all week"?
20:59:38 honestly, I think my actual preference is not to do it at all there :)
20:59:49 EmilienM: difficult to switch focus
20:59:50 is it crazy to consider a virtual joint meeting instead?
20:59:51 ttx: sounds like we're running out of time. Maybe next meeting
21:00:01 ttx: or maybe I'll ping people to review it.
21:00:05 EmilienM: yeah, that
21:00:12 that way we can finalize it next week quickly
21:00:29 flaper87: same for https://review.openstack.org/398875
21:00:47 ttx: sounds good
21:00:52 Also if someone could tell me how useful https://review.openstack.org/406696 is, I would appreciate it
21:00:54 ppl, read ^
21:00:54 dhellmann : ++
21:01:03 And we are out of time
21:01:11 o\
21:01:24 ttx: good night!
21:01:27 #action ttx to finalize the Friday/Sunday decision in a -tc ML thread
21:01:28 do we have a tentative agenda for the meeting? wondering if this is to continue the board discussion on accepting other languages, making emerging trendy technologies first class citizens and restructuring project governance, or other stuff
21:01:33 oh, out of time
21:01:37 #endmeeting
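[Editor's note: during the driver-teams topic, ttx offers to "quickly set up a Condorcet" to order the remaining options. A Condorcet poll ranks options by pairwise head-to-head comparison of ranked ballots; the minimal sketch below uses invented ballots for the three options under discussion, not the actual TC votes.]

```python
# Minimal sketch of the Condorcet-style ranking mentioned in the driver-teams
# topic: each voter ranks the options, and an option that beats every other
# option in pairwise head-to-head counts is the Condorcet winner.
# The ballots here are invented for illustration, not actual TC votes.

options = ["grey", "soft black", "soft white"]

# Each ballot lists the options from most to least preferred.
ballots = [
    ["grey", "soft black", "soft white"],
    ["grey", "soft white", "soft black"],
    ["soft black", "grey", "soft white"],
]

def pairwise_winner(a: str, b: str):
    """Return whichever of a, b is ranked higher on more ballots (None on a tie)."""
    a_wins = sum(1 for ballot in ballots if ballot.index(a) < ballot.index(b))
    b_wins = len(ballots) - a_wins
    if a_wins == b_wins:
        return None
    return a if a_wins > b_wins else b

def condorcet_winner():
    """Return the option that wins every pairwise contest, if one exists."""
    for candidate in options:
        if all(pairwise_winner(candidate, other) == candidate
               for other in options if other != candidate):
            return candidate
    return None  # a preference cycle: no Condorcet winner

print(condorcet_winner())
```

With these sample ballots "grey" beats each other option head-to-head, so it is the Condorcet winner; real polls (e.g. the CIVS service commonly used for OpenStack elections) handle ties and cycles with additional completion rules.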