21:02:22 #startmeeting project
21:02:23 Meeting started Tue Oct 21 21:02:22 2014 UTC and is due to finish in 60 minutes. The chair is ttx. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:02:25 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:02:26 jeblair: yeah I wanted maple too
21:02:27 The meeting name has been set to 'project'
21:02:32 Our agenda for today:
21:02:34 o/
21:02:40 o/
21:02:50 (maple works because it's an element in the state flag)
21:02:57 #link http://wiki.openstack.org/Meetings/ProjectMeeting
21:03:10 We didn't have 1:1 syncs today (and won't have them next week either)
21:03:18 Those will be back after the summit.
21:03:27 #topic M Design summit time span
21:03:36 First, same question as I just asked the TC
21:03:44 The M summit time span
21:03:51 That's the one after Vancouver, located in APAC, one year from now
21:03:58 The main conference will happen Tuesday - Thursday
21:04:05 Monday will have the Ops Summit, analyst day, and other pre-summit things
21:04:14 Foundation staff needs to close on the contract and asks: when should the design summit be?
21:04:25 We can do Tuesday - Friday, but then we fully overlap with the conference, which may impact the PTLs' ability to present at the conference
21:04:43 We can do Wednesday - Saturday, but then... Saturday. Long week. The TC didn't like that option that much
21:04:45 it's still the best option
21:04:50 We can do Monday, Wednesday-Friday (break on Tuesday), too. But that feels weird.
21:04:52 -1 for Wed-Sat
21:05:06 agreed, -1 for wed-sat
21:05:08 asalkeld: yes, TC concluded Tue-Fri was probably best
21:05:09 ttx: ha ha ha (state flag)
21:05:18 bleeding into the Sat will make it a long trip from Europe or the US
21:05:47 standardizing on tue-fri seems like a good system
21:05:58 ttx, another option is the dev sessions are m-f but just some mornings
21:05:58 I think Tues-Fri works fine
21:06:00 +1 for tue-fri
21:06:01 so +1 that we just live with the overlap and go with tue-fri
21:06:09 so have some half days
21:06:20 asalkeld: interesting
21:06:23 -1 for wed-sat
21:06:46 asalkeld: I fear we would end up doing a 5-day design summit
21:06:49 and IMO tue-fri works well
21:06:55 asalkeld: I think the counter-arg to that is Monday is usually the operators' day
21:07:00 also what ttx said
21:07:09 i.e. we'd fill the "holes" with more design summit pods or off-discussions
21:07:17 and be dead on Friday
21:07:20 agreed on the hole filling
21:07:28 ok, just an idea
21:07:36 nature abhors a vacuum :)
21:07:46 you could lock the pods ;)
21:07:51 ttx, ++ for hole filling
21:08:06 I'll bring the feedback back to Lauren and Claire: not Saturday, slight preference for Tue-Fri
21:08:23 +1 for hole filling as well.
21:08:32 ok, moving on
21:08:38 #topic Juno release postmortem
21:08:46 So... the release went generally well.
21:08:54 There were a few more late respins than usual, with 5 projects doing an RC3 in the last days
21:09:07 There was also a bit of a f*up in Glance, which started identifying release-critical bugs only after their RC1 drop
21:09:20 but this did not seriously affect the release
21:09:33 One interesting exercise is to look back at the "critical" bugs which justified the late respins, and ask why those were not detected by testing
21:09:50 If the issue is critical enough to justify a late respin, it usually should have been tested in the gate in the first place
21:10:07 So those may just uncover significant gaps in testing... for example:
21:10:15 we are moving the functional tests in-tree (in Heat) - I am hopeful this will help our coverage
21:10:16 Cinder CHAP Authentication in LVM iSCSI driver: https://review.openstack.org/#/c/128507/
21:10:27 Cinder Unexpected cinder volumes export: https://review.openstack.org/#/c/128483/
21:10:37 Cinder re-attach a volume in VMware env: https://review.openstack.org/#/c/128431/
21:10:47 Ceilometer recording failure for system pollster: https://review.openstack.org/#/c/128249/
21:10:57 Trove restart_required field behavior: https://review.openstack.org/#/c/128352/
21:11:07 Trove postgresql missing cluster_config argument: https://review.openstack.org/#/c/128360/
21:11:08 Yes, this helped uncover some 3rd-party testing holes for Trove, and we're looking to address those in Kilo.
21:11:21 So in general, please have a look at your late proposed/juno backports and see if anything could have been done to detect that earlier
21:11:31 fair point
21:11:34 ok
21:11:38 how many of those were 3rd-party tested?
21:12:01 ttx: Sounds good
21:12:04 ttx, in sahara we're already working on adding a bunch of tests to cover it (doing it after each release)
21:12:15 jogo: maybe the reattach
21:12:27 good point, we've set up a plan for kilo to avoid this
21:12:51 ttx: as it sounds like we have a 3rd-party CI quality control issue
21:12:51 the reattach volume in VMware env would belong in 3rd party, but the others were pretty much mainline tests imho
21:13:03 ttx: ahh, reattach
21:13:19 Another theme which emerged is issues with default configuration files -- although I'm not sure how we can avoid those:
21:13:28 Ceilometer missing oslo.db in config generator: https://review.openstack.org/#/c/127962/
21:13:34 Glance not using identity_uri yet: https://review.openstack.org/#/c/127590/
21:13:53 ttx: maybe devstack tests? or something
21:14:01 testing the config files
21:14:03 do these projects not have a sample config anymore?
21:14:16 jogo: in the glance case they were just using the deprecated options
21:14:18 asalkeld: now generated (as opposed to static)
21:14:24 i still like the review capability that gives you
21:14:33 ttx: oslo.config has an error on deprecated config options
21:14:34 (of having it in the tree)
21:14:40 but last I checked it broke
21:14:46 jogo: ?
21:15:04 dhellmann: cause the service to halt if a deprecated config option is used
21:15:15 asalkeld: yeah, definitely pros and cons to having it as static content in-tree
21:15:24 jogo: I don't know about that one, is there a bug?
21:15:32 dhellmann: I think so, let me look
21:15:34 not sure how much of the "default config" we actually consume in tests
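
(For illustration only -- a minimal sketch, not from the meeting, of the kind of check jogo floats above: scan a generated sample config for option names the project has deprecated. The sample path and the deprecated names -- the old auth_host-style options that identity_uri replaced -- are assumptions for the example.)

```python
import re
import sys

# Old keystonemiddleware option names superseded by identity_uri;
# listed here purely as an example of what such a check would flag.
DEPRECATED = {'auth_host', 'auth_port', 'auth_protocol'}


def find_deprecated(path):
    hits = []
    with open(path) as f:
        for lineno, line in enumerate(f, 1):
            # generated samples list options as "#name = value" comments
            m = re.match(r'^#?\s*([A-Za-z0-9_]+)\s*=', line)
            if m and m.group(1) in DEPRECATED:
                hits.append((lineno, m.group(1)))
    return hits


if __name__ == '__main__':
    # path is an assumption; pass the real sample file as an argument
    sample = sys.argv[1] if len(sys.argv) > 1 else 'etc/glance-api.conf.sample'
    hits = find_deprecated(sample)
    for lineno, name in hits:
        print('%d: deprecated option "%s"' % (lineno, name))
    sys.exit(1 if hits else 0)
```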
21:16:11 anyway, those are the only "themes" I could see in the late RC issues
21:16:29 dhellmann: https://bugs.launchpad.net/oslo-incubator/+bug/1218609
21:16:31 Launchpad bug 1218609 in oslo-incubator "Although CONF.fatal_deprecations=True raises DeprecatedConfig it is not fatal" [Low,Triaged]
21:16:50 BTW how did the gate hold up during the RC period?
21:16:57 ... flow rate seemed better behaved than during the milestones
21:16:58 Is there a significant issue that we just missed completely and is an embarrassment in the release?
21:17:04 ... clearly the patch proposal rate was way down
21:17:30 jogo: that bug makes it sound like apps are not correctly dying, but your comment earlier made it sound like they were dying when they should not
21:17:40 right
21:17:41 I'm not aware of any really critical issue that we let pass in the release
21:18:08 but then, I'm no longer spending my days on Launchpad reports
21:18:29 anything you know about?
21:18:46 nothing major
21:18:50 eglynn: The gate seemed to hold up fairly well — didn't notice any significant delays during the RC period.
21:18:59 dhellmann: yeah, IMHO fatal_deprecations should make things die
21:19:01 and they don't
21:19:03 nothing that I'm aware of either.
21:19:09 jogo: is the app catching that exception?
21:19:22 jogo: or are we just logging and not throwing an error?
21:19:34 ttx: nope ... we have a known issue release-noted, but not critical
21:19:43 SlickNik: agreed
21:19:52 Anything else you want to report on the release process? Something we did and we shouldn't have done? Something we didn't do and should have done?
21:20:02 jogo: looks like the traceback in the log in that bug is showing the exception being caught by the app
21:20:17 dhellmann: jogo: we should probably take that back to the oslo channel :)
21:20:23 dims: yeah
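
(For illustration -- a minimal sketch of the deprecated-option aliasing under discussion, using the Juno-era oslo.config API. Option names are illustrative; the point of bug 1218609 above is that the deprecation warning path is not actually fatal even with fatal_deprecations=True.)

```python
from oslo.config import cfg  # Juno-era import path ("oslo_config" later)

opts = [
    cfg.StrOpt('identity_uri',
               deprecated_name='auth_uri',  # old name, still honored
               default='http://127.0.0.1:35357/',
               help='Base URI of the identity endpoint.'),
]

conf = cfg.ConfigOpts()
conf.register_opts(opts)
conf([])  # a real service would pass --config-file here

# If a config file set the old "auth_uri" name instead, oslo.config
# would resolve it to identity_uri and log a deprecation warning; per
# the bug above, that warning path does not actually stop the service
# even when fatal_deprecations=True, which is what jogo is pointing out.
print(conf.identity_uri)
```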
21:20:36 a question I've always wanted to ask about the release process
21:20:47 ttx: to my newbie eyes it seemed to work well
21:20:52 * ttx braces for the shock
21:20:53 ... do the milestones have to be synchronized across all projects?
21:21:08 dims: yup
21:21:11 * eglynn asks in terms of mitigating the gate rush that kills us at every milestone
21:21:12 so that would be a development cycle question
21:21:33 The idea behind it is to have a common rhythm
21:21:52 but then if we can't handle the load this community management artifact generates...
21:22:06 we could certainly get rid of it
21:22:25 eglynn: to spread them you would have to release one every 2 to 3 days
21:22:25 lots of projects
21:22:35 It's important for virtual/global communities to have the same rites and rhythms, it's what makes us one
21:22:50 Also useful for things like dep freezes, and common freezes in general.
21:22:51 but it's always a tradeoff
21:22:51 yeah, just probing as to how "embedded" the synchronized milestone concept is
21:22:56 (... in the release management culture)
21:23:01 and if the drawbacks outweigh the benefits...
21:23:23 frankly, it's only feature freeze which generates load issues
21:23:34 the first two milestones are pretty calm
21:23:39 ttx needs a holiday some time too
21:23:58 We could stagger FF, but I'm not sure that would cure the load
21:24:12 yeah, maybe better to mitigate that FF rush by more strictly enforcing the FPF
21:24:19 that would certainly increase load on release management :)
21:24:26 eglynn: ++
21:24:42 anyway, let's move on
21:24:49 #topic Design Summit scheduling
21:24:51 cool enough, looks like on balance it's best to keep
21:24:57 At this point we only have the Keystone schedule up on kilodesignsummit.sched.org
21:25:12 the ironic schedule is basically done, i just haven't had time to post it yet
21:25:18 ttx: what's the deadline for that?
21:25:22 ttx: nova is basically done too
21:25:26 we are going through Heat's this time tomorrow
21:25:28 The deadline for pushing a near-final agenda is Tuesday next week (Oct 28)
21:25:34 So you should abuse your weekly meetings this week to come up with a set of sessions
21:26:01 we've a virtual/online mini-summit for glance; summit session finalizing would be done in there as well (Thurs 23/Fri 24)
21:26:15 i need to check on the Ops Summit details; one of keystone's sessions might change.
21:26:21 As far as the release management track goes, we don't have a specific meeting, so I'll discuss it here and now
21:26:27 we only have 2 sessions, and only two themes proposed
21:26:29 ttx: Neutron is almost done, we'll finalize tomorrow.
21:26:35 So we'll likely have one session on stable branch maintenance, and one on vulnerability management
21:26:44 No session on the release schedule, since we decided that already on-list
21:26:59 (I know, we lose a traditional design summit slot)
21:27:08 Everything else will get covered at the Infra/QA/RelMgt meetup on Friday.
21:27:24 I'll push the agenda for that tomorrow probably
21:27:36 Questions on the scheduling?
21:28:00 any idea about per-service operator session requests?
21:28:02 morganfainberg: any issue wrangling the scheduling website?
21:28:14 ttx, i've had no issues
21:28:28 it's "just worked" for the most part
21:28:35 david-lyle: they are definitely a good thing to have. Avoid overlap with the Ops Summit session to maximize attendance
21:28:50 I have not seen any requests. Horizon found it valuable last time, so just schedule and hope they come?
21:29:27 david-lyle: yes. Maybe brag about that session at the Ops Summit on Monday?
21:30:00 there is an "Ops Summit: How to get involved in Kilo" session that sounds appropriate
21:30:15 when / how will the cross-project sessions be decided?
21:30:42 by the TC I think
21:31:01 asalkeld: TC members are voting on the etherpad this week, feel free to add your opinion there as well
21:31:18 then russellb and markmcclain were mandated to build the final schedule
21:31:19 yeah, i have added some things there
21:31:30 #link https://etherpad.openstack.org/p/kilo-crossproject-summit-topics
21:31:51 shall be all set when we meet again next week
21:32:03 anything else on that topic?
21:32:11 all good
21:32:18 #topic Global Requirements, a practical case
21:32:33 At a previous meeting we had a discussion on global-requirements, and agreed that it was for integrated projects' requirements and integrated-project-wannabes solving chicken-and-egg issues
21:32:44 But dims raised the following review: https://review.openstack.org/#/c/128746/
21:32:54 Sounds like a good practical example of a corner case
21:33:05 the nova-docker driver was split out to stackforge but still wants to do nova-like testing to be able to merge back
21:33:08 here's a long writeup - https://etherpad.openstack.org/p/managing-reqs-for-projects-to-be-integrated
21:33:14 Is that a valid requirements update case or not?
21:33:57 So...
21:34:03 isn't the idea that they would sync on their own, not gate on requirements, and then seek to add any new requirements to the global list when they are re-integrated?
21:34:03 ironic has the same problem, right?
21:34:05 essentially we need a way to allow requirements jobs and dsvm jobs to work
21:34:13 Or is python-ironicclient in global reqs?
21:34:14 seems to me that docker is super important to openstack
21:34:15 mikal: ?
21:34:19 dims: no, I think you want to turn off the requirements gating for your repo
21:34:27 dhellmann: dsvm jobs?
21:34:34 dims: do those fail, too?
21:34:38 dhellmann: yep
21:34:52 dims: what's the failure condition? can't install something?
21:34:55 devananda: I'm trying to work out why this didn't come up for ironic
21:35:08 dhellmann: requirements/update.py just exits
21:35:44 mikal: I'm not sure what the problem is (poorly tracking this meeting, sorry)
21:35:49 dims: does solum run dsvm jobs?
21:35:59 devananda: non-integrated project with extra requirements
21:36:01 solum had this issue - solution: remove the project from projects.txt
21:36:05 ironic has chosen not to list our 3rd-party libs in requirements
21:36:21 asalkeld: does that fix the dsvm issue?
21:36:25 which leaves it up to operators/installers to pull those packages separately from requirements.txt
21:36:41 dhellmann, it then uses upstream pypi
21:36:48 and within each driver's __init__, it checks and gracefully fails if its lib isn't present
21:36:54 not the openstack restricted one
21:37:08 devananda: yeah, this is what drivers in nova do too
21:37:12 asalkeld: our ci mirror is full, so I think you're actually using the local mirror now
21:37:26 devananda: yeah, we do something like that in oslo.messaging, too
21:37:33 so far it has been fine for us
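
(For illustration -- a sketch of the guard devananda describes: the third-party lib stays out of requirements.txt and the driver fails cleanly at load time if it is missing. Class and exception names are hypothetical, not ironic's or nova-docker's actual code; the client construction matches the docker-py API of the period.)

```python
# The optional dependency is imported defensively; only deployments
# that enable this driver need the package installed.
try:
    import docker  # provided by docker-py, installed separately
except ImportError:
    docker = None


class DriverLoadError(Exception):
    """Raised when a driver's external dependency is unavailable."""


class DockerDriver(object):
    def __init__(self, base_url='unix://var/run/docker.sock'):
        if docker is None:
            raise DriverLoadError(
                'docker-py is not installed; it is a dependency of '
                'this driver only, not of the project as a whole')
        self.client = docker.Client(base_url=base_url)
```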
21:37:33 * dims notes that docker-py is an essential, not optional, dependency
21:37:42 dhellmann, yeah - the logic could have changed
21:37:46 dims: essential for that driver, not for the whole project. right?
21:37:52 devananda: right
21:37:52 dims: so is python-ironicclient if you want a working ironic in nova though
21:37:59 dims: that's fine, if you uncouple your project from the global requirements list fully it ought to work
21:38:00 dims: so it falls in the same category
21:38:19 dims: that makes it not a requirement, since what driver you use is a deploy-time option
21:38:34 it's an external dependency of that particular configuration
21:38:35 devananda, well if docker-py was in requirements then heat could maybe put the container resource from contrib into heat proper
21:38:35 dhellmann: unfortunately the tempest-dsvm jobs fail
21:38:53 devananda: not asking nova's requirements to have docker-py
21:39:01 devananda: asking global requirements to have docker-py
21:39:07 which is different
21:39:07 dims: ahh ok
21:39:08 in this particular case i am surprised we don't just add it
21:39:12 dims: ok, that's disconcerting, they shouldn't care about the requirements now. Do you have a log?
21:39:42 dims: i have no objection to projects that want to depend on docker syncing which version of docker-py they depend on
21:39:50 dhellmann: there's a custom http client in nova-docker which is not good; trying to use a well-thought-out library
21:39:51 that's the function of global reqs -- syncing version deps
21:39:56 asalkeld: we could, but this is supposed to be working already, I think, and we will have other cases where that's not the right solution
21:40:18 I don't see a reason to reject a submission to global reqs when projects want to use that to sync the version dep. but maybe I'm missing something?
21:40:18 sure
21:40:31 dhellmann: there was an aborted attempt in February to switch to docker-py, documented in the url above
21:41:00 devananda: I may be misrepresenting sdague, but AIUI he also wants that list to be an actual list of dependencies for openstack
21:41:09 of course that may change under a big-tent model
21:41:38 dhellmann: i've also documented a proposal to avoid adding to g-r using a flag sdague introduced
21:41:47 in update.py
21:42:06 ok
21:42:34 lines 43-49
21:42:36 I'm a little lost because some of those comments seem unrelated and I'm only just now seeing this issue.
21:43:06 apologies
21:43:31 where should I start reading to catch up? is there an ML thread, or bug or something?
21:43:45 dhellmann: https://etherpad.openstack.org/p/managing-reqs-for-projects-to-be-integrated
21:43:54 can we go offline on this?
21:44:18 * dims nods
21:44:27 yeah, I think the issue there is the dsvm jobs should not be failing on extra requirements, but let's talk about it on the ML
21:44:45 dhellmann: ack, will get the ball rolling
21:44:59 ok, so the take here is that they shouldn't need the global-requirements update?
21:45:03 dhellmann: I think it should only fail if the project is tracked in g-r
21:45:24 at least that was my understanding
21:45:32 yes
21:45:53 mtreinish: yeah, so it may be a configuration issue then
21:45:55 my only "requirement" is the project be allowed to move forward till it graduates :)
21:45:57 for the job
21:46:07 * dhellmann snorts at dims' pun
21:46:17 :)
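
(For illustration -- not the real update.py, just a sketch of the gating rule dhellmann and mtreinish converge on above: only enforce the global-requirements sync for repos listed in the requirements repo's projects.txt, so untracked stackforge repos are left alone.)

```python
def is_tracked(repo, projects_file='projects.txt'):
    """True if the repo is listed in global-requirements' projects.txt."""
    with open(projects_file) as f:
        tracked = {line.strip() for line in f
                   if line.strip() and not line.startswith('#')}
    return repo in tracked


def should_enforce(repo):
    # Untracked repos (e.g. stackforge/nova-docker) sync requirements
    # on their own and should not fail the job over extra deps.
    return is_tracked(repo)


if __name__ == '__main__':
    for repo in ('openstack/nova', 'stackforge/nova-docker'):
        action = 'enforce' if should_enforce(repo) else 'skip'
        print('%s -> %s' % (repo, action))
```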
21:46:25 #topic Open discussion
21:46:28 Anything else, anyone?
21:46:55 ttx: do we have a nice big whiteboard in Paris?
21:47:04 for the dev sessions
21:47:09 o/
21:47:18 and esp. the Friday session
21:47:28 yeah, good point
21:47:41 a/j openstack-meeting-3
21:47:50 * eglynn hates little flip charts
21:47:51 ttx: I'm not particularly happy with how http://lists.openstack.org/pipermail/openstack/2014-October/010059.html was handled. who do I talk to about it?
21:49:07 asalkeld: the whiteboards are not "big" but there are several of them
21:49:17 ok thanks ttx
21:49:24 that's something
21:49:26 eglynn: the flip charts double as whiteboards
21:49:32 notmyname: reading
21:49:43 * nikhil_k finds Glance in the email topic
21:50:08 notmyname: that would be the OSSG group. They are having a PTL election this week
21:50:10 ttx: probably not enough time to read/digest in this meeting. but I'd like to address it
21:50:22 notmyname: so whoever wins shall get a nice email from you
21:50:27 notmyname: was the swift team not involved or something? what's the background?
21:50:33 you can cc me. I don't technically oversee that group though
21:51:01 notmyname: alternatively you can follow up on the openstack-security ML
21:51:30 notmyname: i gather the people who drafted and approved that text all gather there
21:51:41 ok, the -security ML seems like a good starting place (rather than a response to the general ML for now)
21:52:30 dhellmann: only a little. and mostly I don't think the right issue was addressed
21:52:50 * dhellmann nods
21:53:21 anything else before we close?
21:53:25 that, and that a resolution of "just use ceph instead of swift" was given as an official openstack recommendation is annoying
21:53:51 * ttx reads again
21:54:16 that was one of several options, right? and it seemed like the most heavyweight
21:54:23 That was indeed tactful
21:54:45 dhellmann: "Implementing an alternative back end (such as Ceph) will also remove the issue" feels a bit loaded
21:54:57 yeah
21:55:17 notmyname: yes, openstack-security sounds like the right avenue to discuss that
21:55:27 especially since I think the issue is simply the different definition of "public" between glance and swift. not a security issue
21:56:01 thanks. I'll follow up on the -security ML
21:56:11 ok then, let's close this
21:56:15 #endmeeting