16:00:37 #startmeeting OpenStack-Ansible
16:00:37 Meeting started Thu Feb 25 16:00:37 2016 UTC and is due to finish in 60 minutes. The chair is odyssey4me. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:38 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:41 The meeting name has been set to 'openstack_ansible'
16:00:44 #topic Agenda and roll-call
16:00:50 o/
16:00:53 \o/
16:00:59 o/
16:01:25 o/
16:01:29 o/
16:01:30 o/
16:01:40 o/
16:02:14 #link https://wiki.openstack.org/wiki/Meetings/openstack-ansible#Agenda_for_next_meeting
16:02:20 o/
16:02:45 \o
16:04:40 hi
16:04:52 Hi everyone, welcome back after the week of absence due to the mid-cycles.
16:05:06 There were no action items in our last meeting.
16:05:19 #topic Define core team expectations/Add non-Rackspace cores
16:05:27 * odyssey4me hands the mic to automagically :)
16:06:22 odyssey4me: Thanks! I added an agenda item about core expectations and the addition of non-Rackspace cores as I and others outside of Rackspace have a growing dependency on the fine work that you Rackers and the rest of the community have done here
16:07:07 So, wanted to raise the issue of documenting expectations for core contributors and assess the group's thinking on recruiting/adding Rackspace-external cores
16:07:46 Action items would be: wiki doc explaining core responsibilities and process for gaining/losing core status
16:07:50 Thoughts?
16:07:56 I like the idea of documenting expectations for cores and reviewers overall. I've also been wanting to document expectations of the PTL. We can perhaps also document what the launchpad drivers group should be doing.
16:08:10 o/
16:08:17 I'd prefer it not to be a wiki doc - I'd rather see it join the contributor guidelines in the docs.
16:08:19 automagically: I am all for people outside of Rackspace becoming core, but I do think you're right that the expectations need to be written down
16:08:35 I'd like the wiki to largely be a place that points at the docs we have.
16:08:50 odyssey4me: Location matters much less to me than getting it agreed upon and written down
16:09:43 There are definitely groups out there who have documented their criteria already that could be adapted or used for a start
16:09:47 I've also been wanting to suggest two new cores to the team. I have discussed them with the existing core members and there is a general agreement of approval.
16:09:47 Any ideas about how to best recruit/retain/attract cores from outside Rackspace?
16:10:11 I'm thinking the Austin event could be a good time/place to recruit
16:10:18 Are we happy to discuss that now quickly, or would it be preferred that it's done through the ML (as is traditional)?
16:10:23 spotz: Agreed
16:10:54 Believe you and I discussed https://wiki.openstack.org/wiki/Heat/CoreTeam as a useful example
16:10:55 odyssey4me: I'd caution that the cores probably want to discuss before nominating someone on the public ML who might not get approved
16:11:20 my personal opinion is that this project needs to be a bit more flexible and fluid when it comes to cores
16:11:24 in terms of recruiting cores - all cores need to start by being involved in the community first - we recruit new contributors at any events we are involved at
16:11:25 That's caused embarrassment and hard feelings in the past
16:11:28 i don't think we're in a position to start setting down hard requirements
16:11:41 I'd be happy to have a session to discuss how better to grow our community at the summit
16:11:46 I'd like this to be a group effort
16:11:53 mattt: I don't think there should be hard requirements, but definitely a target for people who want to be core
16:12:10 palendae: +1
16:12:12 yeah, not hard requirements - just expectation setting
16:12:14 mattt: to help answer, "What should I do to get that status?" Be active in reviews, contribute code, etc
16:12:22 +1 for more cores, regardless of employer, we should also be working towards the diverse-affiliation tag
16:12:35 palendae: i think you summarised it very well right there :)
16:12:40 hughsaunders: How would we do so?
16:12:40 +1 palendae and hughsaunders
16:12:45 being core is not a status - it is a responsibility and a role of service to the community
16:12:47 re: diverse-affiliation
16:12:58 odyssey4me's point is accurate
16:12:59 hughsaunders +1
16:13:06 There is a bit more demand for a core reviewer
16:13:30 automagically: https://github.com/openstack/governance/blob/master/reference/tags/team_diverse-affiliation.rst
16:13:36 thx
16:13:57 Ah, so that was very much my goal in raising this agenda topic
16:14:11 Nice to see it formalized in that manner within the broader community
16:14:14 hughsaunders automagically or better: https://governance.openstack.org/reference/tags/index.html
16:15:21 So #agreed that core contributor expectations be added to the project docs
16:15:53 +1
16:15:58 can we move forward with bringing people on as cores or do we need further discussions, documentation, etc. in place first?
16:16:04 i'd really love to see us adding cores sooner rather than later
16:16:08 can someone do some research into prior art and propose something for review - then we can adjust to suit us?
16:16:12 mattt: +1
16:16:22 #action - update contributor guidelines to describe core responsibilities/expectations and membership process
16:16:49 odyssey4me: https://wiki.openstack.org/wiki/Nova/CoreTeam#Membership_Expectations
16:16:57 automagically you need to have the first word be the person assigned to do so :)
16:17:02 mattt: My belief is that it may be hard to accept a nomination without a good understanding of the level of responsibility
16:17:10 odyssey4me I think that link to Heat's that automagically posted is pretty good. I can look for the Doc team's
16:17:21 #link http://docs.openstack.org/developer/neutron/policies/core-reviewers.html
16:17:27 automagically: but i'm also not sure all our cores at the moment would meet any sort of level ...
which is why i say we have to be more fluid
16:17:45 and as the project grows and our cores become more diverse we put down a proper framework for what this involves
16:17:48 mattt: I'm good with the expectations being lax given the reality of current cores
16:18:13 yeah, no worries - let's deal with the details in review
16:18:16 odyssey4me: Thx for the tip. So, who wants to own the action item on docs. spotz? you?
16:18:22 automagically can you put a suggested page together?
16:18:45 otherwise spotz :)
16:18:49 automagically #action automagically will submit patchset for review documenting core expectations
16:19:05 Whoops, that's a lot of automagicallys ;)
16:19:17 automagically poke if you need help
16:19:18 such magic so auto
16:19:23 alright, let's move on from that
16:19:26 Happy to pass the mic, think the conversation went in a very useful direction
16:19:38 #topic New core proposals
16:19:53 I'd like to propose both jmccrory and automagically as new cores for OSA.
16:20:15 I'd contest that this topic is the same as the last one.
16:20:17 They've both been regular committers, reviewers and been very helpful in identifying and fixing issues.
16:20:27 jmccrory: +1
16:20:30 automagically: +1
16:21:16 +1 on both
16:21:17 I think they'd both make fine additions to the core team.
16:21:23 hughsaunders last one was procedure/policy, this one is voting
16:22:13 Not sure if only cores get a vote, but no objections to either
16:22:32 +1 +1
16:23:03 i'm +1 on both also
16:23:45 d34dh0r53 and stevelle aren't active/present, but that's a majority
16:24:22 so, welcome to both of you to the core team - I'll do the formalities afterwards?
16:24:42 * automagically whoot
16:24:59 thanks
16:25:14 #topic Pinning pip and related dependencies
16:25:33 #link https://etherpad.openstack.org/p/openstack-ansible-pip-conundrum
16:25:51 * cloudnull high fives jmccrory and automagically
16:26:00 grats guys
16:26:04 jmccrory automagically welcome !
16:26:20 OK, down to the business of improving repeatability.
16:26:20 Looking forward to continuing to contribute to such a great project
16:26:44 in recent days we hit two issues which uncovered a failing in our build methods
16:27:06 the general idea we aim for is to ensure that whenever you build a tag, the result is the same
16:27:33 today we are failing in terms of the repo server build and everything related to python bits before that
16:27:56 in the solution options I've outlined some suggestions - but I'd like more discussion and ideas
16:28:01 questions, comments, etc
16:28:06 please add to the etherpad
16:28:23 The first solution in the pip_install role seems like the cleanest
16:29:08 it covers pip, wheel and setuptools - but it's a hard set. I'd like a neater way to do this.
16:29:35 the first is probably something we need as a stop-gap, the second seems like a better long term view
16:30:26 something that requires as little maintenance as possible is essential
16:30:37 odyssey4me: +1 on the low maintenance aspect
16:30:58 can't we add this to the global-requirements.txt files in the main repo for the three branches
16:31:03 The second solution begs the question of how a user/deployer might override if needed
16:31:16 then we don't have to mess with the package versions at the role level
16:31:29 cloudnull the repo server install doesn't use any of the requirements set out anywhere
16:31:46 it only uses the bits in its own pip_packages list
16:32:01 we could change that
16:32:19 right.
but if we set it in the global requirement pins it would get picked up everywhere as it pertains to an OSA install.
16:32:26 but maintaining requirements for the repo server which conflict with openstack requirements is also not ideal
16:32:53 cloudnull so the pinning is not actually an issue for any of the packages once the repo is built
16:33:01 the issue is in building the repo server itself
16:33:14 seems like the simplest solution would be to add the lines here https://github.com/openstack/openstack-ansible/blob/master/global-requirement-pins.txt
16:33:15 it does not use global requirements, upper constraints, or anything like that
16:33:31 it does
16:33:43 cloudnull how so?
16:33:45 the lookup plugin py_pkgs indexes everything which instructs the repo
16:34:05 that is in building the repo, not installing the repo server
16:34:08 cloudnull: installing packages in the repo_server prior to building stuff
16:34:28 ie. to be able to build wheels you need wheel installed
16:34:36 these https://github.com/openstack/openstack-ansible-repo_server/blob/master/defaults/main.yml#L83-L92 ?
16:34:44 all those packages at the minute are unconstrained
16:34:57 jmccrory: yes
16:35:23 ah i see now.
16:35:58 so my suggestion is to do an upper constraints style thing
16:36:15 we take the output of a successful build, and publish it on openstack infrastructure
16:36:26 the repo install then needs to use that as an upper constraint
16:37:09 i would much rather see it included in the osa repo than published elsewhere and pulled down. easier for operators to override and also makes life easier for operators doing offline deployments.
16:37:16 logan-: +1
16:37:20 logan-: +1
16:37:27 * cloudnull was just writing that
16:37:34 I think we definitely need a solution that allows override capabilities similar to what we have elsewhere
16:37:56 That said, I like the general approach of the upper bound constraints
16:38:06 then we have to regularly patch it to keep it fresh
16:38:22 I think that's just something we have to do.
16:38:29 my method can easily allow it to be optional - as was noted in the etherpad
16:38:39 With the force var?
16:38:56 so is this effort better than just pinning a couple of packages in the repo server role?
16:38:59 I'd rather stay away from implementing caps - especially ones that duplicate those in openstack.
16:39:07 will it really result in playing whack-a-mole
16:39:15 Thinking through this a bit, as a deployer how would I test a new set of constraints
16:39:28 Just override the requirements file location var?
16:39:38 simple - set the var that ignores the upstream constraints
16:39:53 Right, but I'm talking about testing a new set of constraints
16:39:57 and the location would be a var too, so you could set differing constraints
16:40:42 mattt the trouble with pinning that package list is that it misses pinning the deps of those packages - that is what creates the whack-a-mole situation
16:40:44 Could we use a lookup with the requirements url, so if I didn't want to publish my new set of requirements, I can just override the var with a list?
16:40:59 odyssey4me: in the last 2 days the issues have been with those packages specifically tho
16:41:04 I think we could add the same constraints in the repo server role as what's in the pip install role and be good.
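(Editor's note: a minimal sketch of the "first solution" discussed above, i.e. hard-pinning the bootstrap tooling in the pip_install role defaults so the repo server always starts from the same pip/setuptools/wheel. The variable name and version numbers below are illustrative assumptions, not the role's actual contents at the time:)

    # pip_install defaults/main.yml (hypothetical excerpt)
    # Hard set of the python build tooling so a build from a tag is repeatable.
    # Downside noted in the meeting: these pins must be refreshed by review,
    # and they do not pin the dependencies of these packages.
    pip_packages:
      - "pip==7.1.2"
      - "setuptools==20.1.1"
      - "wheel==0.29.0"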
16:41:05 ^ thinking out loud obviously there
16:41:08 I would rather we use our final complete pip repo version list as an upper constraint - it contains absolutely everything we use
16:41:25 we need to be careful of over-engineering this
16:41:29 because we've hit 2 problems in 2 days
16:41:33 cloudnull whack-a-mole again then for the next time we find a gap like this
16:41:34 i don't recall this biting us until then
16:41:54 I'd like to get the requirement / constraint files published too but idk that it needs to be an integral part of the build process
16:43:10 and if we pin them in the pip install role then the items here should already be resolved https://github.com/openstack/openstack-ansible-repo_server/blob/master/defaults/main.yml#L83-L92 IE wheel, setuptools.
16:43:12 we've had a lot of feedback from the infra and pypa crew that pinning these packages is not a good idea.
16:43:29 but we also need to ensure that we build the same thing every time and can rely on things to work
16:43:39 but you're pinning them using a constraints file right
16:43:43 i don't see the difference
16:43:44 which is why I like the idea of updating requirements after successful builds
16:43:57 or perhaps doing a nightly that updates the thing
16:44:27 mattt the difference is that one gets updated manually by a review - the other is updated dynamically and automatically
16:44:37 as long as we're locking pip to a specific version i think we have to pin the packages and doing it in the pip install role makes the most sense
16:44:55 well
16:45:10 ok, if that's the way we think is best - that's fine
16:45:11 is there value outside of this specific problem to have our packages captured somewhere outside the env ?
16:45:26 if there is we should do this and we can use it when we install the repo server
16:45:32 bear in mind that we then end up doing things differently from upstream
16:45:49 openstack tests all use the latest pip and the upper constraints for the rest
16:46:14 Are they using pip 8 now?
16:46:18 palendae yes
16:46:22 yeah we're already doing something different now then
16:46:24 They must, because it breaks every time a new pip comes out
16:46:28 for stable/kilo and above
16:46:40 Yeah, makes sense
16:46:55 yes - which is why sigmavirus24 has already advised us not to do what we're doing
16:46:59 looking at the global requirements the only named constraint they have is wheel
16:47:00 as have the infra crew
16:47:00 https://github.com/openstack/requirements/blob/master/upper-constraints.txt
16:47:05 https://github.com/openstack/requirements/blob/master/upper-constraints.txt#L376
16:47:23 cloudnull g-r has pip too
16:47:45 which is unpinned
16:47:46 https://github.com/openstack/requirements/blob/master/global-requirements.txt#L139
16:47:52 right
16:48:16 it looks as though we're leaning towards uncapping pip?
16:48:24 so i don't see the harm in pinning the other bits so long as we're allowing pip to move forward
16:48:37 *we're NOT allowing ...
16:48:44 sigmavirus24 no, the majority at this point want to cap pip, setuptools and wheel
16:48:45 cloudnull: but that's how we ended up with the wheel > pip problem
16:48:52 * sigmavirus24 shrugs
16:49:03 Have fun figuring out why things break when those are capped with upstream having them uncapped
16:49:14 sigmavirus24 I would like to uncap these things and use a published upper constraint that's updated by the build process dynamically
16:49:16 hughsaunders: idk if this is a problem if we're using the latest pip 8 -- sigmavirus24?
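(Editor's note: a minimal sketch of the upper-constraints-style option odyssey4me describes - publish the package list from a successful build and consume it via pip's --constraint option when installing the repo server's own packages. Every variable name here is an illustrative assumption; the location var and ignore flag are included because the meeting calls for deployer override and offline-friendly use:)

    # Hypothetical defaults: point this at a constraints file shipped in the OSA
    # repo (offline-friendly) or at a published per-tag file, or set the ignore
    # flag to skip the constraint entirely.
    repo_pip_constraints_file: "/opt/openstack-ansible/constraints/repo-build-constraints.txt"
    repo_pip_ignore_constraints: false

    # Hypothetical task: install the repo server packages under the constraint.
    - name: Install repo server pip packages
      pip:
        name: "{{ repo_pip_packages | join(' ') }}"
        extra_args: >-
          {{ (not repo_pip_ignore_constraints)
             | ternary('--constraint ' ~ repo_pip_constraints_file, '') }}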
16:49:30 cloudnull: if what specifically is a problem?
16:49:40 if anything, I'd advocate blacklisting known bad versions
16:49:46 sigmavirus24: well they break when you leave them uncapped also
16:49:48 so what to do
16:49:52 (especially since the pip team is responsive to openstack breakage)
16:49:57 sigmavirus24 which is already done upstream
16:50:21 mattt: every version of pip has broken us?
16:50:37 sigmavirus24: no, but setuptools wasn't constrained and that broke a bunch of things yesterday
16:51:03 i don't see the issue if you pin the three in tandem
16:51:38 and by tandem i mean the latest working version of all three at a point in time
16:51:38 mattt: we can constrain the world in conflict with upstream openstack. we could help a resource-starved project (setuptools). we could use an upper-constraints-like system as odyssey4me suggested
16:51:50 So the issue I have with capping is simple - openstack's testing all uses the current versions. There are no caps. If we do not follow that model, then we assume full responsibility for testing everything that the rest of openstack-ci has tested.
16:52:08 mattt: the thing is that bounding them all in tandem is kind of silly since they're all independent pieces
16:52:25 pip needs a modernish version of setuptools which doesn't have to be the latest
16:52:40 we need a version of wheel that will generate wheel names that our version of pip can install
16:53:00 right because we pinned pip without doing the same to wheel and setuptools
16:53:03 which is why we hit that issue
16:53:05 they're related, absolutely, but not deeply tied together by any stretch of the imagination. Is the problem with setuptools tracked on their issue tracker? Have we tried fixing things?
16:53:09 wheel was updated and wasn't compatible w/ pip
16:53:12 sigmavirus24: what would be the best solution knowing that we have locked ourselves to pip 7.x ?
16:53:23 sigmavirus24 a patch is in progress
16:53:41 cloudnull: we can cap the world, but it's a smell given that openstack itself isn't doing this
16:53:48 I would advocate for blacklists
16:53:54 cloudnull we should not be locking ourselves to pip 7.x is the point
16:53:55 pip!=8.0.0,!=8.0.1
16:53:58 so why did we pin pip to begin with and what could we have done to avoid that?
16:54:04 because that decision is the source of this problem here
16:54:13 odyssey4me: we currently are though.
16:54:14 mattt: 8.0.0 broke argparse
16:54:29 yes, but it raised a broader problem which is that what we deploy today is not what we deploy tomorrow
16:54:30 mattt: so the thing is that 8.0.0 wanted to remove support for uninstalling distutils-installed packages
16:54:44 this is why I'd like to implement something to close the gap
16:55:04 that broke installing things with pip that might have installations pre-existing based on standard library or system packaging
16:55:08 simply blacklisting does not solve that issue
16:55:22 which would have to be an upper cap on setuptools and wheel so long as we have this https://github.com/openstack/openstack-ansible-pip_install/blob/master/defaults/main.yml#L27
16:55:25 odyssey4me: the issue of what we deploy today is not what we deploy tomorrow for the same version?
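(Editor's note: a minimal sketch of the blacklist approach sigmavirus24 advocates - exclude only known-bad releases rather than capping everything, so new upstream versions are still picked up. Only the pip exclusion specifier comes from the discussion above; the rest of the list and the comments are illustrative assumptions:)

    # Hypothetical package list using exclusion specifiers instead of hard caps.
    pip_packages:
      - "pip!=8.0.0,!=8.0.1"  # known-bad releases (argparse / distutils-uninstall breakage)
      - "setuptools"          # left uncapped; blacklist specific bad releases if they appear
      - "wheel"               # left uncapped for the same reason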
16:55:47 publishing the full pip requirements file from our repo per build will result in each tag having that set in stone
16:55:58 odyssey4me: in that case, why not have vars that are generated when we create a tag that represent the versions of everything when that tag was created
16:56:02 this is the best way to ensure that each tag deployed will always result in the same thing
16:56:12 odyssey4me: if you want to take that stance it has to be applied right through openstack-ansible, not just when it comes to deploying a specific container
16:56:12 odyssey4me: capping will?
16:56:22 sigmavirus24: Hmm, interesting middle ground on the vars generation
16:56:30 sigmavirus24 I'm suggesting something similar to what upper-constraints does for devstack
16:56:42 odyssey4me: I think we have similar ideas
16:56:52 if we can automate vars generation on every sha bump, that'd do fine too
16:57:02 I'm saying that we should let the gate do whatever. Only when we tag the release should we update those constraints.
16:57:29 See I also disagree with our sha bumps being a thing on stable branches that happen *after a tag*, but that's just me
16:57:42 I know I'm the only person who thinks we should trust the small team of dedicated upstream stable maintainers
16:57:58 we're almost out of time
16:58:02 (This is also why I've kept out of these discussions)
16:58:09 so we need to close off and continue in #openstack-ansible
16:58:51 cheers everyone !
16:58:52 Thank you all for your time and participation.
16:59:43 #endmeeting