20:03:24 #startmeeting tc
20:03:25 Meeting started Tue Mar 11 20:03:24 2014 UTC and is due to finish in 60 minutes. The chair is markmc. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:03:26 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:03:28 The meeting name has been set to 'tc'
20:03:32 o/
20:03:34 lifeless: o/
20:03:38 tonight's agenda
20:03:38 #link https://wiki.openstack.org/wiki/Governance/TechnicalCommittee
20:03:45 o/
20:03:51 first up is DinaBelova :)
20:03:52 #topic Climate incubation request
20:03:59 o/ (Climate core dev)
20:04:00 #link http://lists.openstack.org/pipermail/openstack-tc/2014-March/000548.html
20:04:20 DinaBelova, maybe we could start by you summarizing the feedback you received on the thread so far?
20:04:27 ok, cool
20:04:50 so first of all, as I understood from the ML this idea seems quite interesting to many people
20:04:54 but there is one issue
20:04:57 #link http://lists.openstack.org/pipermail/openstack-dev/2014-March/028646.html
20:05:04 noticed by almost everyone
20:05:19 right now there is no program that could accommodate the Climate idea
20:05:24 and not only Climate
20:05:56 as I got from the ML there is a good chance that in the future resource time management + scheduling + .. might have one program
20:06:40 so here you may see interest in creating one program for Climate, Gantt, and all other possible projects working with resource allocation/reclaiming
20:06:59 speaking about requirements, Climate fits well
20:07:16 the problem is with the place to put it in OpenStack - I mean the gap in programs
20:07:34 Gantt is pretty young to stand in its own program
20:07:49 it could be a Reservation program for Climate - but looking at the comments that seems like the wrong way
20:07:58 bauzas, I mean possibly :)
20:08:06 as said in many comments
20:08:22 so the summary might be that a program scope based on reservations is too narrow
20:08:37 markmc, it looks so
20:08:40 and you should attempt to collaborate with those interested in the wider scheduling area ?
20:09:03 we already began collaborating with Gantt
20:09:33 though gantt is pretty much dead right now ... the current work is back in nova
20:09:38 at the moment, those are draft discussions
20:09:42 well, I think that projects responsible for different scheduling types still should be different projects, but now it seems that there should be a separate program for it
20:10:08 i do think a scheduling program makes sense longer term, but we don't have a scheduler project that can fit there yet
20:10:36 but the people in the scheduler program could just coordinate with nova to help prepare for that
20:10:54 russellb: that's the idea we're following
20:10:59 ok
20:11:14 russellb: we're keeping track of what's happening with Gantt
20:11:25 (I mean the forklift effort)
20:11:26 so if Gantt is pretty much dead, we can start a resource scheduling program with only Climate in it (scheduling == reservation) with the possibility to consume more scheduling projects in future
20:11:31 I think we've been talking about the need for a cross project scheduler for a while, so we should probably assume that's the direction things end up heading
20:11:31 if the scheduler work is going on in nova, would it make sense to add climate's idea of reservations to nova directly?
20:11:42 markmc: i've been thinking about that
20:11:43 sdague: +1
20:11:43 sdague: +1
20:11:51 markmc: that seems like a better idea than being separate
20:11:56 I'd like to have our default answer stop being "add it to nova"
20:12:00 DinaBelova: what other resources does climate aim to do reservations for?
20:12:03 right now it just does VMs?
20:12:22 dhellmann, let's equally not assume it's always the wrong answer, though
20:12:24 aside from the usual "please don't add more to nova" concern, why should we *not* implement this in nova?
20:12:29 markmc: sure
20:12:37 I see reservations and scheduling as being tightly connected
20:12:39 russellb, VMs and compute hosts. But in the near future we're planning volume and storage node reservation
20:12:40 markmc: but instances aren't the only things we need to schedule, are they?
20:12:47 dhellmann: I think that's fine, however it doesn't seem like we've got enough drive to get this thing stood up on its own
20:12:48 russellb: actually it seems to me that if it even belongs anywhere it likely belongs in nova
20:12:52 russellb: seems that there are resources one could reserve that aren't owned by nova
20:12:59 dhellmann, instances aren't the only thing we have e.g. quotas or scheduling for, either
20:12:59 you want to be able to describe a large topology and essentially schedule it across all APIs at once
20:13:06 but I'm struggling to see if there's a real need here, and how you actually do it successfully
20:13:15 dhellmann, yes, we plan to reserve more different resources
20:13:22 russellb: because cinder, neutron.
20:13:23 and if there are resources elsewhere, should the reservations be a part of the API that owns those resources?
20:13:34 with orchestration being more prevalent, it seems useful to schedule across apis
20:13:37 should this be a type of thing we should be implementing in each API?
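[editor's note: the exchange above weighs per-API reservations against dhellmann's "shared library code" option. A minimal sketch of what that shared-library shape might look like — every name here is hypothetical, invented for illustration, not code from Climate or any OpenStack project:]

```python
import datetime


class ReservationConflict(Exception):
    """Raised when a requested lease overlaps an existing one on the same resource."""


class ReservationMixin:
    """Hypothetical shared-library approach: each project (nova, cinder, ...)
    mixes this into its own resource manager, so the REST API stays with the
    project that owns the resource, while lease bookkeeping is common code."""

    def __init__(self):
        self._leases = []  # list of (resource_id, start, end)

    def reserve(self, resource_id, start, end):
        # Reject a lease that overlaps an existing lease on the same resource.
        for rid, s, e in self._leases:
            if rid == resource_id and start < e and s < end:
                raise ReservationConflict(resource_id)
        self._leases.append((resource_id, start, end))

    def active_leases(self, at):
        # Leases whose window covers the instant `at`.
        return [(rid, s, e) for rid, s, e in self._leases if s <= at < e]
```

[a sketch only, under the assumption that each owning API would expose its own reservation endpoints on top of this common bookkeeping]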
20:13:39 markmc: right, I said schedule :-)
20:13:50 holistic scheduling and holistic reservations are pretty linked, no ?
20:13:53 It seems like this breaks existing limits/quota models
20:13:54 dhellmann, sorry, I read reserve :)
20:13:55 DinaBelova: I do think a bunch of what climate was proposing could just enhance nova today and be useful
20:14:03 not that I'd be heartbroken about that :)
20:14:09 and not putting it directly in nova doesn't mean making its own service -- it could be some shared library code
20:14:09 jgriffith: heh
20:14:26 jgriffith: right, quotas are located with the resources, so why not reservations?
20:14:36 dhellmann, +1 on "could be library code"
20:14:36 dhellmann: yes
20:14:43 dhellmann, it could evolve out of nova code, though
20:14:51 sdague, the idea of Climate is also to provide notifications about lease events, workflows (possibly using mistral) of how resources should be managed at that time, etc
20:14:52 dhellmann, the question is more about where the REST API is exposed
20:14:52 so it seems like there are 2 different things going on
20:14:58 but it sounds like DinaBelova and the rest of the climate team are working with the other interested parties, and aren't really ready for incubation yet, is that right?
20:14:59 DinaBelova: how mature is climate? Do you have it working for any openstack project at the moment?
20:15:14 markmc: evolution could work, yes
20:15:16 sdague, that's why I don't really believe that this stuff should be placed in nova
20:15:17 1) features that the climate team wants beyond what's in nova today, which are mostly compute, which could be contributed to nova
20:15:36 2) and the global scheduler many of us want, that we are a ways away from
20:15:49 mikal, yes, for Nova
20:15:50 Can we take a step back for a second
20:16:02 let's not confuse this with the global scheduler either, which could quite likely be an internal only API
20:16:02 There's a lot of mixed conversation about "global scheduler" here
20:16:11 jgriffith: agreed
20:16:12 I don't see anything in climate that suggests that as a goal
20:16:15 related, but separate
20:16:23 russellb: kinda
20:16:25 yeah
20:16:29 jgriffith: so that's probably true
20:16:47 I'd like to focus first on: does this even make sense?
20:16:54 there are multiple concerns here
20:16:54 the push back was that this kind of service really needed to be done in concert with a scheduler to make sense
20:16:54 time based reservations of resources
20:17:05 sdague: maybe, but I see other problems
20:17:08 sure
20:17:13 the functionality makes sense i think
20:17:18 the first being whether the idea is even sound IMO
20:17:25 it's really just stepping back and figuring out where it actually fits and where it should live
20:17:30 jgriffith: it sounds like you have reservations about the idea
20:17:32 the second being integration with existing quotas and limits as I mentioned
20:17:39 dhellmann: that's accurate
20:18:01 jgriffith: care to elaborate?
20:18:17 So firstly, I am trying to understand the value
20:18:28 Is there real value in this?
20:18:31 the idea is that there are possibly mixed resource types that a lease can have
20:18:54 DinaBelova, I think I could summarize the value for jgriffith, but perhaps best if you do ?
20:18:54 I mean, the whole point is our resources are meant to be elastic
20:19:02 like we should want to reserve a volume and boot on it
20:19:03 markmc, ok
20:19:13 jgriffith: elastic but exhaustible
20:19:17 value both for users and operators
20:19:19 lifeless: indeed
20:19:27 but I'd argue this makes that problem even worse
20:19:27 jgriffith, the idea is to potentially provide time based resource management to OpenStack
20:19:31 jgriffith: (ask any bm cloud operator :))
20:20:04 so I have tenants with time based reservations
20:20:14 what happens when the "time" is up and they need resources
20:20:17 jgriffith, so the user will have the opportunity to reserve some resources in the future, and cloud providers will know about future load peaks to manage them correctly
20:20:26 how is this any different than today
20:20:30 jgriffith: time based reservations are really useful for things like HPC on openstack
20:20:30 jgriffith: if your orchestration job doesn't fail until you run out of some resource that wasn't reserved, sad user
20:20:33 or do you false reserve?
20:20:34 which a lot of folks are doing
20:20:44 when time is up, they want those things shot in the head
20:20:44 jgriffith: but I see it more in the quota arena really
20:20:50 jgriffith, it depends on configuration - like create snapshots for all VMs and kill them later
20:20:54 sdague: no, I mean the other way around
20:21:07 sdague: my understanding is they want to "reserve things for future use"
20:21:09 DinaBelova: so, looking at the prototype nova scheduler code for this makes me wonder. Why can't this just be expressed with aggregates?
20:21:16 jgriffith: I think you are saying 'deploy the thing rather than reserving the right to deploy it' ?
20:21:19 jgriffith: do you mean when the time at the start of the reservation period comes?
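[editor's note: DinaBelova describes a configurable end-of-lease action, "create snapshots for all VMs and kill them later". A hedged sketch of that dispatch, with action names and handler signatures invented for illustration rather than taken from Climate's real code:]

```python
# Hypothetical sketch of configurable end-of-lease actions, as described in the
# discussion ("create snapshots for all VMs and kill them later"). The action
# names and handler signatures are illustrative, not Climate's actual API.

def snapshot_then_delete(vm_ids, snapshot, delete):
    """On lease expiry, snapshot each VM so data survives, then delete it."""
    results = []
    for vm in vm_ids:
        results.append(snapshot(vm))  # in a real system, e.g. an image-create call
        delete(vm)                    # reclaim the capacity for the next lease
    return results

END_OF_LEASE_ACTIONS = {
    "snapshot": snapshot_then_delete,
    "delete": lambda vm_ids, snapshot, delete: [delete(vm) for vm in vm_ids],
}

def expire_lease(action_name, vm_ids, snapshot, delete):
    """Run the operator-configured policy for a lease that just ended."""
    return END_OF_LEASE_ACTIONS[action_name](vm_ids, snapshot, delete)
```

[the point of the sketch: "evaporating data" vs "shot in the head" is a per-lease policy choice, not something the reservation service has to hard-code]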
20:21:24 quota is a way to implement resource reservations, but not to manage them
20:21:24 jgriffith: yes, sure
20:21:34 lifeless: dhellmann what I'm saying is:
20:21:54 one use case that was given was reserving a resource for usage at some time in the future
20:21:59 so I interpreted it as:
20:22:00 mikal, aggregates are only for compute reservations, but we target all types of resources
20:22:09 Hey... I need a medium instance on Monday
20:22:15 get ready... make sure I have it
20:22:29 I see nothing but trouble there
20:22:37 Having expirations on resources I get
20:22:45 that's fine and easy enough IMO
20:22:52 but belongs in the project owning the resource
20:22:54 funny, expiration is the part I don't get :-)
20:22:59 dhellmann: LOL
20:23:18 just auto-delete so people don't hang on to things
20:23:21 (time check - need to switch topic in a couple of minutes, say at 25min past the hour)
20:23:23 but I wouldn't do that with Cinder
20:23:36 yeah the more i think about it, the more i find it very odd from the API consumer perspective to have to go to another API for this
20:23:40 evaporating data isn't going to make for happy users
20:23:47 jgriffith: it might
20:23:51 i think we should look in more detail at what integrating this into nova itself would look like
20:23:55 we have the idea to create reservations for different resource types, so you'll have the opportunity to prolong them all
20:23:58 russellb: or heat?
20:23:59 if they really need 20G for the next week
20:24:04 russellb: I agree.
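[editor's note: jgriffith's "I need a medium instance on Monday" use case, and his "or do you false reserve?" worry, come down to an admission check: a future lease should only be granted if, at every instant of the requested window, already-committed leases plus the new one fit within capacity. A hedged sketch of that check — illustrative only, not Climate's scheduler code:]

```python
# Hedged sketch of the admission check behind "I need a medium instance on
# Monday": grant a future lease only if, at every point in the requested
# window, committed leases plus this one stay within capacity. Otherwise
# you "false reserve" and the promise fails when Monday comes.

def can_commit(capacity, committed, start, end, size):
    """committed: list of (start, end, size) leases already promised.

    Demand over [start, end) can only increase at a lease's start boundary,
    so checking demand at the window start and at each committed lease
    start inside the window is enough to find the peak.
    """
    points = {start} | {s for s, e, n in committed if start <= s < end}
    for t in points:
        in_use = sum(n for s, e, n in committed if s <= t < e)
        if in_use + size > capacity:
            return False
    return True
```

[this is the operator-side half of the trade-off in the log: either you do this accounting and can honor reservations, or you overbook and some lease-start fails]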
20:24:06 to do some computation on
20:24:10 russellb: +1
20:24:18 russellb: yeh, I think I agree
20:24:22 I'd also like to understand how it plays with the existing limit and quota model as I mentioned
20:24:31 it sounds like there also needs to be a discussion about non-compute resource scheduling
20:24:32 it should be nova enhancements until proven that it can't be
20:24:33 I think there's potential for significant conflicts
20:24:35 as DinaBelova keeps pointing out
20:24:45 devananda: +1
20:24:49 jgriffith: I also agree this raises interesting problems with quota
20:24:58 devananda: ok, now we're talking scheduling again :)
20:25:01 * markmc tries to wrap up with a summary
20:25:05 right, but i think it's just a common problem we have to solve across existing APIs
20:25:07 jgriffith: russellb: the thing I think we haven't (as a group) figured out is how holistic /anything/ should fit in
20:25:18 e.g. holistic scheduling, or holistic reservations, quotas etc.
20:25:20 but I actually think that will become more clear with a first implementation in nova
20:25:31 lifeless: I'd argue we've swung to the other extreme
20:25:33 right here isn't the forum to figure that out
20:25:36 lifeless: a service/project for EVERYTHING
20:25:37 #agreed climate developers encouraged to pursue their reservation ideas as an API addition to nova, explore tighter integration with future global scheduler
20:25:40 fair ?
20:25:45 yes
20:25:46 regardless of benefit
20:25:46 markmc: +1
20:25:53 markmc: yes... sorry
20:26:14 markmc: sounds fine for now; it doesn't make anything better or worse vis-a-vis future work AFAICT
20:26:27 cool
20:26:31 well, it looks like we'll have further discussions here :)
20:26:36 yep
20:26:42 we should have a cross project session on this in atl
20:26:46 DinaBelova, sounds like everyone is very interested to dig deeper into the details of the concept
20:26:50 we do need to figure out the story for big clusters with complex api deps so folks don't half-deploy 1000 machines before an error turns up
20:26:58 russellb, yes, please
20:27:01 DinaBelova, the concept of reservations, I mean
20:27:02 but like I say, that's orthogonal
20:27:03 lifeless: +1
20:27:10 I think a cross project session could help here
20:27:16 ++ to cross-project session on scheduling/reservations
20:27:30 and how that might potentially work with the global scheduling idea
20:27:35 devananda, +1
20:27:45 * markmc moves on
20:27:48 #topic Savanna graduation review
20:27:48 #link http://lists.openstack.org/pipermail/openstack-tc/2014-February/000544.html
20:27:55 SergeyLukjanov, you're up :)
20:27:59 I'm here ;)
20:28:01 #link https://etherpad.openstack.org/p/savanna-graduation-status
20:28:04 s/Savanna/Sahara/
20:28:06 markmc, thx
20:28:06 SergeyLukjanov, care to summarize where you're at
20:28:12 jeblair, exactly
20:28:27 what's changed during incubation, what the TC's feedback was to your original application, how you've dealt with the feedback ?
20:28:29 we're in the middle of the renaming process, so we'll release Savanna as Sahara in Icehouse
20:28:57 markmc, most of the feedback during incubation was about heat usage and clustering
20:29:06 heat integration was fully implemented
20:29:17 and we're ready to make it the default for the I release
20:29:25 (I think after renaming hell...)
20:30:05 re clustering, it was discussed at the summit with trove folks and it was decided to postpone this to the time when we'll have more projects like trove and savanna
20:30:13 to better see the common part
20:30:42 as for the graduation requirements
20:30:55 as you can see in the pad https://etherpad.openstack.org/p/savanna-graduation-status we think that all of them are solved
20:31:04 we have a bunch of tests in tempest
20:31:20 and both sahara and its client have been gating using them for several months
20:31:25 (voting)
20:31:44 here are the logs from the mid cycle graduation review - http://eavesdrop.openstack.org/meetings/tc/2014/tc.2014-01-14-20.02.html
20:31:54 SergeyLukjanov, on the async gate thing - how often do you see changes in other projects breaking sahara ?
20:31:54 quote: Savanna is in good shape too, some concerns about lack of diversity in contributors but might be a reflection of a niche project (ttx, 20:50:17)
20:32:01 #link http://eavesdrop.openstack.org/meetings/tc/2014/tc.2014-01-14-20.02.html
20:32:27 markmc, we have jobs on devstack/tempest and d-g
20:32:40 and it has never broken us for the time we've been gating
20:33:16 re diversity, I've made a note about this in the pad
20:33:25 Currently we have 52% of commits from Mirantis and this number is decreasing nicely (it was 65% at the mid graduation review in Jan, just 1.5 months ago!)
20:33:39 and it was about 90% at the time of incubation
20:33:53 nice
20:34:06 #link http://www.stackalytics.com/?release=icehouse&metric=commits&project_type=openstack&module=savanna-group&company=&user_id=
20:34:22 seems like a nice mix of developers and companies, and a fair amount of changes
20:34:29 personally, I'm really happy that we're bringing folks from other ecosystems
20:34:36 like Hortonworks folks
20:34:45 to the OpenStack eco
20:35:05 o/
20:35:40 re release process - ttx has managed our releases starting from i1
20:35:50 successfully for 3 releases
20:36:10 all very cool
20:36:15 any questions from tc members?
20:36:16 you guys seem like you've really had your act together
20:36:19 nice work
20:36:20 Just a question while reading the API docs, why is there an EDP doc? What's EDP? I get HDP is Hortonworks Data Platform.
20:36:29 oh, the new name "Sahara" was verified and approved by Foundation lawyers
20:36:39 russellb, thx
20:36:46 annegentle, EDP == Elastic Data Processing
20:36:54 yeh, agreed, the team has really stepped up
20:36:55 annegentle: EDP is a feature of Sahara
20:37:11 annegentle, it's our codename for jobs/workloads management
20:37:16 a key component I'd say
20:37:19 the savanna/sahara team has been on top of gating jobs, etc
20:37:25 elastic data processing right?
20:37:32 russellb, exactly
20:37:32 honestly, I think from a QA perspective they've done everything that they can with our infrastructure until we have some real multinode infra
20:37:36 oh you said that sorry
20:37:39 russellb: yes
20:38:03 annegentle, re api - our 1.1 API consists of the 1.0 api + EDP stuff
20:38:12 SergeyLukjanov: ok it'd be great to define that - not nitpicking at all, I think you've done a good job with docs
20:38:25 annegentle, that's why the 1.1 doc contains only new stuff
20:38:38 annegentle, thx for the tip, will do
20:38:41 the savanna devstack jobs have indeed been in place and functional for quite a while, and the team is really on board with all of our processes
20:39:06 SergeyLukjanov, btw, you should propose a change to the openstack/governance repo to change status to integrated
20:39:13 I've taken it for a spin and it went fairly well
20:39:17 markmc, will do
20:39:17 sdague brings up a good point worth noting -- because of the poor support for multinode testing of heavy workloads, we aren't able to test all of sahara in the integrated gate at the moment
20:39:27 also have a good deal of feedback from customers using it so I'm good
20:39:39 that's no fault of sahara's, and mirantis is filling in the gap with 3rd party ci
20:39:41 jeblair: they have augmented that with 3rd party CI
20:39:43 jeblair: but that also applies to features of some integrated projects too
20:40:01 jeblair: nova live migration for example
20:40:07 jeblair, sdague, we're treating savanna-ci as a mandatory vote
20:40:22 markmcclain: sure, the point is from a judgement perspective they've done everything they can with our infrastructure
20:40:25 jeblair, sdague, additionally we're thinking about fake plugins that could be tested in the current gate
20:40:34 sdague: agreed
20:40:44 which I'll consider sufficient, as they did everything they could
20:40:54 plus more, by adding 3rd party ci to fill in the gap
20:40:57 yep, that's not a criticism -- i'm just noting it, and agree that they've handled it well.
20:41:11 ok, when SergeyLukjanov submits the governance review, we can vote
20:41:30 it's probably worth ttx bringing it up again next week, just to make sure there hasn't been feedback in the interim
20:41:37 and to give his release management feedback
20:41:45 jgriffith, Hortonworks are leaders in the Hadoop ecosystem, so the fact that they are one of three contributors is a significant mark IMO
20:41:53 but we're looking good, AFAICT
20:42:03 SergeyLukjanov: I'd agree with you
20:42:05 markmc, there was very good feedback from ttx at the mid cycle review
20:42:08 markmc: i agree on both points
20:42:34 nothing else on sahara for now?
20:42:38 oh, one more question - should I squash the renaming with the graduation CRs or not?
20:42:57 probably keep them separate
20:43:07 one thing: it looks like the API is built with flask, is that right?
20:43:13 not sure when ttx will feel it appropriate to merge the change to integrated
20:43:13 yeah, i'm guessing we can quickly approve (or even just have ttx approve) the renaming ones
20:43:23 perhaps it happens early in the next cycle?
20:43:30 i.e. sahara isn't integrated in icehouse
20:43:39 so, get the renaming merged first
20:43:44 markmc: right
20:44:03 we are queued up with reviews in the docs repos once the infra stuff is done
20:44:04 dhellmann, yup, Pecan/WSME will be used for the v2 api (planned for J)
20:44:18 SergeyLukjanov: great, thanks for clarifying that
20:44:32 ok, moving on
20:44:36 #topic Integrated projects and new requirements: Neutron
20:44:36 #link https://etherpad.openstack.org/p/IcehouseProjectReviewNeutron
20:44:36 thank you all
20:44:40 SergeyLukjanov, thank you
20:44:45 SergeyLukjanov: good work!
20:44:50 where were we on this?
20:45:20 it got bumped from the agenda last week, right?
20:45:20 as requested I've proposed a mission statement
20:45:27 AFAIR - discussing whether, nowadays, the TC would require neutron to make parity with nova its first order of business before graduating ?
20:45:53 #link https://review.openstack.org/79744
20:46:12 markmc: yea well I think we would require the project to spin out existing code vs starting from scratch
20:46:24 #info Neutron proposed mission - "To implement services and associated libraries to provide on-demand, scalable, and technology-agnostic network abstraction."
20:46:58 markmcclain, hmm, well we're not saying that to ironic
20:47:16 ironic is a bit of a different case
20:47:26 markmcclain: not sure I understand what you're saying?
20:47:49 markmcclain: You mean spin out the nova-network code somehow?
20:47:50 i think that's an implementation detail, we'd have to consider it case by case
20:48:01 if we were to start neutron today.. following the cinder approach would be the preferred path
20:48:02 i think the answer to the question is "yes", IMO
20:48:07 that way you get parity from day 1
20:48:40 yeah, i don't really have an opinion on whether the code should be reused or not. i do hold the opinion that i think feature parity and compatibility are important for a component that intends to deprecate an existing component.
20:48:49 jeblair: +1
20:48:51 i think parity has to be goal #1 before adding *anything* else
20:48:51 jeblair: +1
20:48:52 that's the key
20:48:54 jeblair: +1
20:49:22 but from a time-to-success perspective.. spinning out code and then iterating should be our preferred stance
20:49:24 so, elephant in the room.... Is this going to happen in Juno?
20:49:31 or to put it another way, graduating the new project should imply deprecating the old code ?
20:49:43 yes
20:49:44 the issue we're having is where we have the new project, but the old code isn't deprecated yet ?
20:49:49 yes
20:49:50 markmc: that I understand and vote yes
20:50:30 markmc: yeh, graduating a new project should include the deprecation of what it's intended to replace, imo
20:50:41 and i think we have graduation requirements changes in place now to explicitly avoid it from happening again
20:50:54 agreed
20:51:01 so that's good ... though the question is, what do we do with the current case
20:51:04 russellb, want to quote the one that covers it ?
20:51:06 that's the basic approach I expect we'll take with oslo libraries, too
20:51:09 sure
20:51:37 paste bomb ...
20:51:39 * Scope
20:51:39 ** Project must not duplicate functionality present in other OpenStack projects, unless the project has intentionally done so with the intent of replacing it.
20:51:39 ** In the case that a project has intentionally duplicated functionality of another project, or portion of a project, the new project must reach a level of functionality and maturity such that we are ready to deprecate the old code and remove it after a well defined deprecation cycle. The deprecation plan agreed to by the PTLs of each affected project, including details for how users will be able to migrate from the old to the new, must be submitted to the TC for review as a part of the graduation review.
20:51:51 russellb: at least you warned us
20:51:56 :)
20:52:07 a deprecation plan agreed in order to graduate
20:52:16 the plan might be a future one
20:52:23 we could just tighten that up a bit
20:52:26 but yeah, cool
20:52:34 yeah you're right
20:52:43 can we back up for a second if there's not something pressing here?
20:52:47 it certainly was my intention that it's very concrete at that point
20:53:07 We've got some cool general statements and ideals but I'm more curious about reality
20:53:07 as in, the old thing is deprecated now and will be removed in X
20:53:27 jgriffith, we've only got 7 minutes, would be good to wrap up this neutron review
20:53:34 the reality is that we've all but deprecated nova-network already
20:53:38 markmc: ha
20:53:39 ok
20:53:43 jgriffith: that's not accurate
20:53:44 sounds like it's on topic
20:53:48 carry on
20:54:07 jgriffith: not from a docs perspective at all
20:54:07 we prematurely froze it, which caused problems, and dev is open again
20:54:14 but yeah, not accurate
20:54:18 russellb: ok... if that's not accurate I'm very confused because that was a concern/complaint several meetings ago by you
20:54:21 jgriffith: nor from operators
20:54:22 ok
20:54:23 the perception that we have is causing massive confusion for users
20:54:24 fair enough
20:54:26 but...
20:54:27 my point is
20:54:36 jgriffith: I think this was sort of the point of the piece above
20:54:36 right, we more recently clarified that it's back open because of the problems
20:54:41 we did a very confused thing here
20:54:47 do we actually have an actionable plan and a deadline to finally put this to bed in Juno?
20:54:47 and the confusion all of this has caused is another part of the problem here
20:54:58 and confused our users and ourselves, so we both need to make sure it never happens again
20:55:03 what do we want to have happen? finish the feature parity work?
20:55:06 jgriffith: i think that's what we need to clarify right now :)
20:55:10 * jgriffith is tired of running two stacks :)
20:55:11 and figure out how we move forward with neutron here
20:55:16 jgriffith: ++
20:55:18 goal: resolve this in juno
20:55:22 but what if that doesn't happen?
20:55:30 russellb: can you restate that goal without using any pronouns?
20:55:30 neutron needs to be the default, nova-net deprecated
20:55:31 so the biggest item left to tackle is multi-host
20:55:42 russellb: what is "this"?
20:55:44 we already have teams working on it for J
20:55:48 done.. move on in Juno IMO
20:55:55 markmcclain: there is also the migration path
20:55:55 goal: get to where we can deprecate nova-network and have a clear migration path to neutron by juno
20:55:57 markmcclain: understood
20:56:04 I just want to actually have clear goals here
20:56:11 markmcclain: we've been saying that for several releases :(
20:56:12 and agreement from the TC
20:56:15 sdague: so the migration path has flip-flopped over time
20:56:16 but what if that doesn't happen?
20:56:18 and neutron is still in the half of integrated projects that doesn't do upgrade testing
20:56:25 this is gonna sound bad, but it's worth discussing
20:56:33 markmc: I have a suggestion for that
20:56:34 now that it is back we're working on how to render the current constructs onto Neutron
20:56:37 markmc: i think we have to, yes ...
20:56:42 which remains an issue for making that the default
20:56:43 why wouldn't we de-graduate until it was actually ready to deprecate ?
20:56:45 if it doesn't happen then we give up
20:56:55 there's still a FFE for the migration path for the ML2 plugin I think
20:56:56 markmc: that's what i've been thinking
20:56:56 nova-network is the de facto networking project
20:57:03 sdague: I spoke with the owner of the grenade ticket
20:57:23 he's working on tracking down why some services aren't cleanly shutting down
20:57:28 the biggest concern is honestly PR, i think
20:57:28 fyi savanna renaming https://review.openstack.org/79765 and sahara graduation voting https://review.openstack.org/79766
20:57:41 jgriffith: ++
20:57:51 markmcclain: I've been on that review and provided some feedback, it really looks stalled, fwiw
20:57:51 annegentle: the ML2 ticket is so that the neutron team can remove the OVS and LB plugins from our tree
20:58:10 russellb: I think it's more than PR
20:58:18 it did stall for a bit, but I spoke with him this morning about it
20:58:24 russellb: if we degraduate that means neutron goes to asymmetric gating, and that's a *terrible* position to be in.
20:58:28 markmcclain: ok good-o
20:58:38 lifeless: we're already in a terrible position
20:58:38 lifeless: +1
20:58:44 russellb: you're forever in chase mode, it's driving devananda batty with Ironic, and TripleO batty with everything.
20:58:45 markmcclain: (and that's more cross-project meeting talk really, sorry)
20:58:46 ok, we're almost out of time
20:58:48 but we could make an exception to the gating bit
20:58:50 lifeless: we're super close to running the full job
20:58:50 if necessary
20:58:50 russellb: so why make it worse or continue to suffer?
20:58:52 russellb: we are, but we don't need to make it worse.
20:58:57 frustrating, but we're still not done here
20:58:58 then make a gating exception
20:59:05 anyone care to take this to a mailing list thread?
20:59:07 the whole gating thing needs a revisit I think
20:59:11 or just re-schedule it for the next meeting?
20:59:21 markmc: i can start a thread
20:59:25 tomorrow probably
20:59:34 russellb, thanks
20:59:37 markmc: I'd vote for the next meeting, but not sure which portion of the topic exactly we want to talk about :)
20:59:39 np
20:59:42 gating reflects our priorities, it shouldn't drive them.
20:59:43 thread it is :)
20:59:43 * markmc moves on quickly
20:59:50 jeblair: +1
20:59:53 #topic Other governance changes
20:59:54 1) Remove gantt from Compute Program
20:59:55 #link https://review.openstack.org/79519
20:59:55 2) Add os-cloud-config to tripleo
20:59:55 #link https://review.openstack.org/79229
20:59:55 3) Add some REST API post-graduation requirements
20:59:55 #link https://review.openstack.org/68258
21:00:00 #topic Open discussion
21:00:03 30 seconds ?
21:00:08 haha
21:00:10 anything pressing ?
21:00:14 oslo
21:00:24 ?
21:00:37 ...
21:00:38 if you change things out of oslo to fix a bug please please please push a fix back to oslo
21:00:47 or make sure all the other projects are aware there's an issue
21:01:01 Cinder spent significant time yesterday on a known issue in the log.py module
21:01:03 yeah, I've had a couple of cases this past week where folks had critical issues I didn't know about
21:01:07 that should be -2'd by reviewers IMO...
21:01:11 jgriffith, "change things out of oslo" == commit to $project/openstack/common ?
21:01:14 russellb: it was in Nova
21:01:17 markmc: yeah
21:01:21 >_<
21:01:21 markmc: yes!
21:01:27 bah, shouldn't happen
21:01:29 which in Cinder we promptly reject
21:01:32 yeah that was a review fail
21:01:37 the default for a logging format string
21:01:38 normally rejected
21:01:41 ok, so we're into the next meeting :)
21:01:41 but it seems other projects don't follow that mantra
21:01:47 ok, outta time
21:01:51 nova does (usually)
21:01:51 jgriffith, post to the list ?
21:01:52 yeah, this is a good topic for the project meeting
21:01:53 thanks
21:01:57 #endmeeting