16:00:12 #startmeeting Solum Team Meeting
16:00:13 Meeting started Tue Sep 23 16:00:12 2014 UTC and is due to finish in 60 minutes. The chair is adrian_otto. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:14 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:17 The meeting name has been set to 'solum_team_meeting'
16:00:19 #link https://wiki.openstack.org/wiki/Meetings/Solum#Agenda_for_2014-09-23_1600_UTC Our Agenda
16:00:26 #topic Roll Call
16:00:28 Adrian Otto
16:00:33 Roshan Agrawal
16:00:35 Ed Cranford
16:00:36 Julien Vey (have to leave at :30)
16:00:40 Melissa Kam
16:00:47 Devdatta Kulkarni
16:00:51 julienvey: Acknowledged, thanks.
16:00:54 Gilbert Pilz
16:01:14 Noorul Islam
16:01:23 Pierre Padrixe
16:02:14 Welcome everyone
16:02:16 Murali Allada
16:02:26 Ravi Sankar Penta
16:02:44 #topic Announcements
16:02:55 would any members of the team like to make announcements today?
16:03:48 #topic Review Action Items
16:04:07 dimtruck (with help from PaulCzar) will investigate using wsme
16:04:24 this is still in progress
16:04:32 I'm not sure exactly what that is about
16:04:39 oh, context - sorry
16:05:13 during our testing we found an issue with wsgiref where it creates a thread that doesn't complete on responses that aren't mapped in wsgi
16:05:32 in our example, if we don't have a method in pecan for POST /, PUT /, DELETE /
16:05:34 I am looking for the bug. I think we have added it
16:05:36 do we need a ticket for that in the bug system, or is tracking it as an #action here suitable?
16:05:46 then you can simply curl to it and DoS our application
16:05:52 there's a bug out there already
16:05:58 we can add a bug in an Incomplete state if we don't know yet how to reproduce, etc.
16:06:13 ok, let's reference that here with a #link
16:06:21 it's reproducible... one sec, getting it
16:06:29 #link https://bugs.launchpad.net/solum/+bug/1367473
16:06:30 Launchpad bug 1367473 in solum "PATCH requests are not supported in documented apis" [Undecided,New]
16:06:32 dimtruck: so it's more pecan related, right?
16:06:44 sorry, that is the wrong link
16:06:46 well, it's more wsgiref simple_server
16:06:56 not really pecan per se
16:07:07 ok
16:07:15 in the bug I listed a link where the suggestion is to not use wsgiref in production applications
16:07:35 #link https://bugs.launchpad.net/solum/+bug/1367470
16:07:38 ok, and I see a review posted against that bug
16:07:40 Launchpad bug 1367470 in solum "Solum api hangs on non GET root requests" [Undecided,New]
16:08:05 that's the one!
16:08:24 the correct bug is 1367470
16:08:26 aha, I see, thanks for bringing me up to speed
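For context, the hang described in bug 1367470 can be demonstrated with a short script along these lines. This is only a rough sketch, not part of the bug report; the API URL and port are placeholders to adjust for a local Solum deployment.

    import requests

    SOLUM_API_URL = "http://127.0.0.1:9777/"  # hypothetical local endpoint; adjust as needed

    # GET / is mapped in the root controller, so it should answer promptly.
    print("GET ->", requests.get(SOLUM_API_URL, timeout=5).status_code)

    try:
        # POST / (likewise PUT/DELETE) is not mapped; under wsgiref's
        # simple_server the request may never complete, so expect a timeout.
        print("POST ->", requests.post(SOLUM_API_URL, timeout=5).status_code)
    except requests.exceptions.Timeout:
        print("POST timed out -- consistent with the hang reported in the bug")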
16:08:58 so it looks like we can drop this as an action item for next week, and consider this one complete, since it is tracked elsewhere. Agreed?
16:09:13 sure
16:09:28 agreed
16:09:33 we can always look at it during the BP/Task/Bug section if we want cross-team discussion
16:09:55 ok, cool, let's look at the next AI
16:09:57 ravips will investigate f20 gate for failing barbican tests and come back with a suggestion for whether to make f20 non-voting
16:10:11 sure, I did some experiments on F20 yesterday
16:10:12 #link https://review.openstack.org/#/c/122782/ Patch for f20
16:10:38 some background on the problem: plan create involving a private repo is failing on f20 during barbican secret deletion
16:10:57 devstack + barbican => no issues (tested create/delete secret using the python barbican client)
16:11:15 devstack + barbican + solum => able to reproduce the exact issue (stacktrace: http://paste.openstack.org/show/114446/)
16:11:32 I need to debug further to narrow down the issue
16:12:17 ok, we have seen intermittent issues over a relatively long history with gate tests failing on delete actions
16:12:35 but it does not always happen, so we might have a lurking bug somewhere
16:13:03 something we use in solum is triggering a bug in barbican
16:13:26 I will update once I have more details
16:13:33 ravips: do you feel equipped to continue with the troubleshooting, or do you need help from another team member or members?
16:14:10 I started this yesterday; I think I should be able to narrow down the issue today
16:14:28 ok, thanks ravips
16:14:42 that concludes our action item review
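For reference, the standalone create/delete check mentioned at 16:10:57 (devstack + barbican, no Solum in the loop) looks roughly like the following. This is a sketch against the current python-barbicanclient and keystoneauth1 APIs; the 2014-era client took different arguments, and the endpoint and credentials are devstack-style placeholders.

    from keystoneauth1 import identity, session
    from barbicanclient import client

    # Placeholder devstack-style credentials; substitute real values.
    auth = identity.Password(auth_url="http://127.0.0.1:5000/v3",
                             username="admin", password="secretadmin",
                             project_name="admin",
                             user_domain_id="default",
                             project_domain_id="default")
    barbican = client.Client(session=session.Session(auth=auth))

    # Create a secret, then delete it -- deletion is the step failing on f20.
    secret = barbican.secrets.create(name="solum-test", payload="s3kr3t")
    secret_ref = secret.store()
    barbican.secrets.delete(secret_ref)
    print("created and deleted", secret_ref)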
16:14:46 #topic Blueprint/Task Review
16:14:54 check_uptodate.sh handling (devkulkarni)
16:15:03 #link https://bugs.launchpad.net/solum/+bug/1372959
16:15:07 Launchpad bug 1372959 in solum "check_uptodate handling" [Undecided,New]
16:15:12 please take a look at the bug description
16:15:43 the gist is: can we do something about check_uptodate so that we don't get -1 votes for things that might have changed upstream
16:16:36 I'm not sure why we even have a static solum.conf if we can generate the default
16:16:50 ^^
16:17:21 ok, so if we had a separate gate test for the config check, and it were a non-voting job, that might work well enough
16:17:36 so that we get a clue when the configuration goes stale, but it does not halt all work
16:18:39 sure.. I don't have a preference as long as work can continue
16:18:48 yeah, that works as well
16:18:50 would it be right to compare that to our "update from global requirements" reviews, at least in tone?
16:18:56 gpilz: It's traditional for Linux software to include a .conf file with the various options listed
16:19:30 datsun180b: sure.. where are you going with that?
16:19:44 just to get a feel for severity/priority
16:19:48 adrian: I understand the need to include a .conf file
16:19:50 are you saying we treat the failing non-voting check_uptodate check as the trigger to go generate one?
16:20:10 but if we have the ability to generate a conf file with the default settings from our code, why not just go with that?
16:20:14 devkulkarni1: yes
16:20:39 gpilz: I think that should be part of the gate test, actually
16:20:47 the generation of the file
16:21:36 adrian_otto: but we are not generating the thing to compare to by hand, so what does the gate test achieve?
16:21:54 btw, generate_sample_conf on vagrant doesn't match what the gate is expecting; 'host' on vagrant is returning solum and the gate expects localhost.. anyone experienced this case?
16:22:45 right, our sample conf gets modified during setup and we'd probably do well to stem that if we can help it
16:22:58 devkulkarni1: we are detecting the case where configuration settings change in other projects, right?
16:23:12 simply spinning up the vagrant environment and doing nothing else is enough to change the sample conf
16:23:13 adrian_otto: yes
16:23:14 ravips: good to know.. I think we need to change the vagrant env so that it is consistent with the gate
16:23:49 devkulkarni1: +1
16:24:30 adrian_otto: so will we move the check_uptodate script to some existing non-voting job, or are we going to create a new one?
16:25:08 i say new job, and take the sqlalchemy check with it so our pep8 env only runs flake8
16:25:21 adrian_otto: yes, that is the purpose of the gate test. what I was saying was, do we want to use the gate to tell us that our conf is stale vs
16:25:25 sorry, alembic branches
16:25:40 say, adding documentation to our release notes that tells the operator to generate the file
16:26:14 we should just add it to the setup_install.py code
16:26:24 ravips: we should create a new tox environment and new job.. let the conf checking be its own thing
16:26:44 if we want to check it at all in the first place
16:27:06 maybe we don't need to check it if we always auto-generate it at install time
16:27:21 +1
16:27:32 so you just get a current solum.conf file each time you install Solum
16:27:34 adding it to a non-voting job may not be effective; I don't know how many of us will look at the failing tests for the non-voting jobs
16:27:48 all we are really checking for is the ability of contributors to properly cut & paste
16:27:54 ravips: you make a valid point :)
16:28:05 I like adding doc to our release notes
16:28:27 or at least a blocker bug every release to generate the conf file
16:28:43 ravips: so that you have to RTFM to find out how to make a config file? That seems awkward to me.
16:28:59 oh, so that I do it when I tag releases?
16:29:17 yes
16:29:33 is there a way to create such bugs?
16:29:47 that occurred to me, but I thought that might not be as good between releases when the upstream changes happen
16:29:53 or is it just a mental note that the release manager has to keep
16:30:09 it might actually cause gate tests to fail for apparently unknown reasons, which is probably why we have this to begin with, right?
16:30:19 we can create a new tag, release-essential or some other better name
16:30:24 good point about upstream changes between releases
16:30:59 the static conf file is a way to check what the config was last time
16:31:14 and the generated content is a way to check what is expected now
16:31:58 but if we auto-generate the config upon every install and every gate test run, then this would not matter
16:32:19 right (to the last sentence)
16:32:51 so we have a pre-test hook for every gate job, right?
16:33:02 what do you mean?
16:33:17 we can specify scripts to run before tests do, right?
16:33:24 oh!
16:33:40 don't know enough to comment on it
16:33:45 it does
16:33:46 we can
16:33:59 in devstack_gate
16:34:07 assuming that's possible, we can make the config generate script a pre-test hook script.
16:34:10 most of the time, new changes to the conf file are not related to solum.. we only care about the fields of other openstack projects that we use (like keystone). I don't think we need to run this script frequently.. if something has changed in the upstream that affects us, our tests will catch it (assuming we have good coverage)
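In concrete terms, the check being discussed boils down to "regenerate the sample config from the code's registered options and diff it against the committed copy". A minimal sketch of that comparison follows; the generator command, tox environment name, and file paths are assumptions for illustration, not the exact contents of check_uptodate.sh.

    import difflib
    import subprocess
    import sys

    COMMITTED = "etc/solum/solum.conf.sample"   # assumed location of the static sample
    GENERATED = "/tmp/solum.conf.sample"        # scratch copy written by the generator

    # Regenerate the sample config from the code's registered options.
    # "tox -e genconfig" is a stand-in; use whatever generator the tree ships,
    # pointed at the GENERATED path above.
    subprocess.check_call(["tox", "-e", "genconfig"])

    with open(COMMITTED) as old, open(GENERATED) as new:
        diff = list(difflib.unified_diff(old.readlines(), new.readlines(),
                                         fromfile=COMMITTED, tofile=GENERATED))
    if diff:
        sys.stdout.writelines(diff)
        sys.exit(1)  # in a non-voting job this failure is informational only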
16:34:56 ravips: yes, but we might not know why it's failing
16:35:08 having the automated config test may save us a lot of research time
16:35:34 that's where a non-voting gate test might be handy
16:35:47 if the other func tests fail, we can look to see if the config test failed
16:35:58 actually yeah.. even if there is a stale conf, a failing job will point us in the right direction
16:35:59 and if it did, we can resolve that first
16:37:08 and if no tests failed, then who cares until it causes a problem
16:37:15 ;-)
16:37:35 yeah, a non-voting job might be useful in case of voting job failures
16:37:37 I am leaning towards keeping the check in a non-voting gate
16:37:58 devkulkarni1: +1
16:37:59 +1
16:38:09 any alternate points of view to consider?
16:38:22 agree, make it non-voting
16:38:28 yup
16:38:57 +1
16:39:01 #agreed to resolve bug 1372959 we will use a non-voting gate test for configuration file testing.
16:39:02 Launchpad bug 1372959 in solum "check_uptodate handling" [Undecided,New] https://launchpad.net/bugs/1372959
16:39:07 cool
16:39:35 devkulkarni1: please update the bug accordingly, referencing the team meeting on 2014-09-23
16:39:44 yeah, will do that
16:40:05 ok, that brings us to our next sub-topic, which is similar in nature
16:40:08 strategy to sync up openstack/common in solum with upstream (devkulkarni)
16:40:20 yes.. so this came up today in my discussion with stannie
16:40:20 so some backstory here
16:40:35 devkulkarni1: you can go first if you like
16:40:53 actually, I am myself interested in the story.. so you go first
16:41:05 as in listening to it.. don't know the context
16:41:08 ok, so Solum used to be listed in the openstack projects.txt
16:41:26 meaning that it was forced to use only requirements that other openstack projects were allowed to use
16:41:49 when we began using mistral's client and the barbican client, that became problematic
16:42:06 because these were dependencies that other projects were not yet allowed to use
16:42:35 so the only solution that would unjam our project was to break that link with the check against global-requirements.txt and proceed with what we needed
16:43:23 so that leaves us without the convenience of the automated requirements bot that comes around and finds dependencies that are out of sync, and submits reviews to update them. aka "reviewbot" or something?
16:43:59 what is the link between this and openstack/common (oslo)?
16:44:04 so we would like an equivalent of that for our own use that disregards our unique list of exceptions from the global-requirements.txt list.
16:44:31 noorul: in all honesty I don't know.
16:44:50 noorul: exactly my question
16:45:00 I think there is something that can generate diffs, and can submit those as reviews as well, but I'm not completely sure
16:45:05 they are different
16:45:12 adrian_otto: I was referring to the python code that we have in solum/openstack/common/*
16:45:14 the question is when (at what frequency) should we sync oslo openstack/common
16:45:46 we have had that code since the beginning of the project
16:45:51 We can bring in a policy for this
16:45:57 I mean for the oslo sync
16:46:13 how/when do we sync that up? what is the repo from which it is copied? is there a better way than copying over all the code?
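For background on the sync script referred to just below: projects of that era carried an openstack-common.conf file listing the oslo-incubator modules they copied, and the sync was done by running the incubator's update script against the project tree to refresh solum/openstack/common/. A rough sketch of that flow, with the checkout paths as assumptions for illustration:

    import subprocess

    # Assumed locations of the two checkouts; adjust to your environment.
    OSLO_INCUBATOR = "/opt/stack/oslo-incubator"
    SOLUM = "/opt/stack/solum"

    # update.py reads the target project's openstack-common.conf (the list of
    # modules to carry) and copies current versions into solum/openstack/common/.
    subprocess.check_call(["python", "update.py", SOLUM], cwd=OSLO_INCUBATOR)

    # The resulting diff is then reviewed and submitted like any other change.
    subprocess.check_call(["git", "diff", "--stat"], cwd=SOLUM)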
16:46:24 we didn't sync openstack.common for a long time, which leaves us with some bugs not fixed, etc.
16:46:36 there is a script to sync the modules
16:46:38 ok, so look
16:46:39 long time?
16:46:56 I think Angus synced it recently
16:46:59 ok
16:47:00 what we can do on an immediate basis is to have one of us submit such a patch as a review to our project
16:47:04 stannie: exactly.. we will be in that situation as long as we are maintaining that code on our side as well
16:47:05 especially the DB part and oslotest
16:47:23 and I did sync python-solumclient
16:47:26 then we can decide if we want an automated updating thing (one may exist that we can leverage)
16:47:36 noorul: true.. but I don't think we have synced up everything
16:48:08 maybe for solum some modules are left
16:48:21 noorul: do you know what the policy is on other projects?
16:48:29 noorul: but the basic question I have is, is this the only approach for us?
16:48:30 stannie: not sure
16:48:36 I could check with the Oslo team to ask what they recommend, and ask what we should be reading if there is already written advice for this.
16:48:44 here either me or Angus used to sync
16:48:48 that is, to keep that code in our repo
16:49:30 if there is a way to shed that code, and just use a requirement on an Dslo release, that would be my strong preference.
16:49:37 s/Dslo/Oslo/
16:49:58 +1 .. but I believe there might not be, otherwise we would have already pursued that option
16:50:26 and that is why I want to hear from noorul or others if there are any technical roadblocks for us for not pursuing that opion
16:50:31 s/opion/option/
16:50:31 ok, I am willing to take an action item to do some research and report back next week
16:50:57 sounds good adrian_otto
16:51:17 I think for core projects someone from the oslo team syncs it
16:51:34 but for stackforge I think it is up to us
16:51:39 #action adrian_otto to investigate using alternatives to openstack/common in Solum, and report back to the team with options.
16:51:49 noorul: that is good to know.. but the question is, is keeping that code in each project's repo the only option?
16:51:53 noorul: makes sense
16:52:06 devkulkarni1: Yes
16:52:18 ok, we are running low on time
16:52:21 devkulkarni1: But some of them will be factored out to other packages
16:52:29 shoot
16:52:30 so let's touch on the last sub-topic before Open Discussion
16:52:34 I wanted to discuss https://review.openstack.org/#/c/117056/
16:52:39 devkulkarni1: Like the test module, which was factored out to oslotest
16:52:48 gpilz: hang on, we will revisit that
16:52:51 Etherpad for things to discuss at Paris summit (devkulkarni)
16:53:07 devkulkarni1: As and when things mature, they will create new packages for different modules
16:53:09 so I added that last week.. do we see a need for such an etherpad?
16:53:15 devkulkarni1: Did we start one, or are you suggesting we start one?
16:53:26 I haven't started one. I am asking: should we start one?
16:53:34 I should have mentioned it in the Announcements section
16:53:39 I think it will be good to have
16:54:01 I applied for Solum to be in the Design Summit, and our application was accepted yesterday
16:54:08 so we will be on the official program
16:54:16 oh cool!! congratulations all
16:54:16 awesome!
16:54:18 nice!
16:54:28 good news!
16:54:31 woohoo!
16:54:31 good to hear
16:54:36 woot!
16:55:01 Wow
16:55:04 cool
16:55:32 how many of you will be there?
16:55:49 so although many of us will not be able to attend for travel budgeting reasons, we will hold design sessions with those who can attend.
16:56:07 I will attend.
16:56:20 I just got my travel approval
16:56:35 mine is still pending
16:56:44 Rackers will not know until later
16:56:56 but I will go even if Rackspace does not send me
16:57:54 #action adrian_otto to email a link to a Paris Summit topics etherpad to the ML
16:58:02 #topic Open Discussion
16:58:24 Gil, you asked about https://review.openstack.org/#/c/117056/
16:58:30 oh, so the topic in #solum mentions our next meeting is at the end of last June
16:59:09 yes - Devdatta has a -1 against it
16:59:18 adrian_otto: any updates on the solum incubation request?
16:59:36 gpilz: yeah, I thought there were some concerns
16:59:49 I will be reviewing it if those are resolved
16:59:50 datsun180b: thanks. Fixed.
17:00:19 devdatta: adrian assures me that those concerns have been addressed
17:00:24 ravips: OpenStack is having an identity crisis about the integrated release right now
17:00:31 so let's talk about it in Paris
17:00:37 or in #solum
17:00:43 okay
17:00:45 thanks everyone for attending today
17:00:47 gpilz: okay
17:00:49 #endmeeting