19:01:58 <jeblair> #startmeeting infra
19:01:59 <openstack> Meeting started Tue Feb 17 19:01:58 2015 UTC and is due to finish in 60 minutes.  The chair is jeblair. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:02:00 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:02:02 <openstack> The meeting name has been set to 'infra'
19:02:03 <jeblair> #link agenda https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting#Agenda_for_next_meeting
19:02:07 <jeblair> #link previous meeting http://eavesdrop.openstack.org/meetings/infra/2015/infra.2015-02-10-19.01.html
19:02:19 <jesusaurus> o/
19:02:21 <jeblair> #topic Announcements
19:02:25 <krtaylor> o/
19:02:36 <mordred> o/
19:02:37 <jeblair> i have two things i'd like to start off with...
19:02:55 <jeblair> first, a recap of the meeting format we're currently using
19:03:07 <yolanda> o/
19:03:25 <jeblair> in general, what we're trying to do here is make sure that things that need discussion get time here
19:03:54 <jeblair> the agenda is always open for people to add topics that they'd like to talk about, get agreement on, brainstorm on, just let people know about, etc...
19:04:08 <jeblair> the other thing we do is identify some priority efforts
19:04:41 <jeblair> usually things that affect a large group; things we've identified (perhaps at summits) which affect the openstack project as a whole...
19:05:08 <jeblair> and we identify individual changes that might be blocking those efforts that need high-priority review
19:05:40 <jeblair> one thing that i don't want this meeting to become is a place where we list individual changes that need review
19:05:51 <zaro> o/
19:06:07 <jeblair> by my reckoning, there are usually a few hundred infra changes outstanding
19:06:27 <fungi> for large values of "a few"
19:06:56 <clarkb> ya I have been trying to dig myself out of that hole recently. It's hard to make progress
19:07:00 <jeblair> and part of the priority efforts is an attempt to deal with that by making sure that we don't sacrifice progress on efforts we feel are most important by getting swamped by the deluge
19:07:22 <jeblair> so anyway, please keep that in mind when you add things to the agenda
19:07:31 <jeblair> #info meeting should be used for discussion and for identifying changes related to priority efforts
19:07:31 <jeblair> #info please refrain from simply listing changes that you would like reviewed
19:07:56 <jeblair> the other announcement is much more fun
19:08:05 <jeblair> #info pleia2 has been nominated for red hat's women in open source award
19:08:05 <jeblair> #link http://www.redhat.com/en/about/women-in-open-source
19:08:10 <anteaya> yay
19:08:15 <yolanda> congrats!
19:08:18 <nibalizer> woot woot
19:08:19 <jeblair> pleia2: congratulations on your nomination! :)
19:08:28 <clarkb> nice! congrats
19:08:28 <mordred> congrats pleia2 !
19:08:34 <jesusaurus> awesome!
19:08:42 <krtaylor> congrats!
19:08:44 <fungi> i'll note that none of her competition lists as many years of open source experience either... she's a shoo-in!
19:09:04 <jeblair> #topic Actions from last meeting
19:09:20 <jeblair> fungi collapse image types
19:09:27 <fungi> it's in progress
19:09:32 <jeblair> fungi: i think this is an ongoing thing...but you've definitely started!
19:09:41 <fungi> #link https://review.openstack.org/156698
19:09:51 <fungi> i'm manually testing that on a held worker right now
19:09:58 <fungi> it's still missing at least a few packages
19:10:09 <fungi> but yeah, it's not going to be a quick transition
19:10:25 <jeblair> fungi: cool... we'll come back to that in a minute then.  thanks.
19:10:25 <fungi> we'll need to pilot it a little and then migrate somewhat piecemeal i think
19:10:29 <fungi> yep
19:10:33 <jeblair> jhesketh look into log copying times
19:10:33 <jeblair> jhesketh move more jobs over
19:10:53 <jhesketh> So more jobs over is here https://review.openstack.org/#/q/status:open+project:openstack-infra/project-config+branch:master+topic:enable_swift,n,z
19:11:09 <jhesketh> But unfortunately I haven't looked into the copying times yet
19:11:11 <clarkb> jhesketh: I just got through those changes, -1 on the last one due to indexes not being right if you run the swift upload multiple times
19:11:20 <jeblair> #action jhesketh look into log copying times
19:11:31 <clarkb> jhesketh: but the others I am +2 on, approved the first but wanted more consensus we are ready for the other jobs
19:11:41 <jhesketh> clarkb: thanks
19:11:55 <jeblair> sdague look at devstack swift logs for usability
19:12:04 <jeblair> sdague: did you have a chance to look at those?
19:12:10 <jhesketh> The copying times probably requires looking into what the requests library is doing
19:12:44 <fungi> if it gets opaque, sigmavirus24 is the requests maintainer and always very helpful
19:12:50 <sdague> jeblair: yeh, they seemed to be roughly the same fetch times from what I saw
19:12:53 <sigmavirus24> hi
19:12:56 <jhesketh> sdague gave me some feedback, which I've addressed here
19:12:58 <jhesketh> https://review.openstack.org/#/q/status:open+project:openstack-infra/project-config+branch:master+topic:swift_log_index_improvements,n,z
19:13:00 <sdague> at least within margin of error
19:13:07 <jeblair> sdague: and the index listing format, etc, was okay?
19:13:21 <clarkb> sigmavirus24: will probably ping you post meeting if that works to talk about requests
19:13:22 <jhesketh> fungi: oh cool, thanks
19:13:26 <sdague> jeblair: oh, I provided jhesketh with feedback
19:13:28 <sigmavirus24> clarkb: :thumbsup:
19:13:36 <sdague> I haven't looked at those patches yet, will put that on my queue
19:13:36 <jeblair> sdague, jhesketh: oh cool
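For context on the copying-times action above, a minimal sketch of the kind of per-upload timing instrumentation being discussed (the session, URL, and upload mechanism here are assumptions, not the actual project-config upload code):

```python
# A minimal sketch, assuming logs are PUT to swift over HTTP with
# requests; names and tempurl handling are hypothetical.
import time
import requests

session = requests.Session()  # reuse one connection pool across uploads

def timed_put(url, path):
    """Upload one file and report how long requests spent on it."""
    with open(path, 'rb') as f:
        start = time.time()
        resp = session.put(url, data=f)
        elapsed = time.time() - start
    print('%s -> HTTP %d in %.2fs' % (path, resp.status_code, elapsed))
    return resp
```

Timing each PUT separately would at least distinguish slow transfers from slow per-object connection setup.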
19:14:01 <jeblair> zaro to chat with trove folks about review-dev db problems
19:14:06 <jhesketh> Still need to decide how to do the help pages for logs
19:14:13 <jeblair> zaro: i think we figured this out, right?
19:14:22 <zaro> correct.
19:14:36 <fungi> since i upped the wait timeout on the review-dev instance, i haven't seen it recur
19:14:36 <zaro> i believe it's fixed as well, right fungi ?
19:14:51 <zaro> i concur with fungi
19:14:56 <fungi> however we still need to apply similar updates to other trove instances for consistency
19:15:07 <jeblair> the default timeout was set differently based on when our instances were created, and indeed, review-dev was set to a very low value
19:15:14 <fungi> we can get into details as to what that entails after the meeting
19:15:17 <jeblair> so we'll want to be cognizant of that in the future
19:15:37 <jeblair> zaro, fungi: thanks!
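The variable underlying the review-dev problem above is MySQL's wait_timeout; a quick sketch of checking the effective value from Python (PyMySQL and all connection details are assumptions, any MySQL client would do):

```python
# Sketch only: host and credentials are placeholders, not infra values.
import pymysql

conn = pymysql.connect(host='review-dev.example.org', user='gerrit',
                       password='...', db='reviewdb')
with conn.cursor() as cur:
    cur.execute("SHOW VARIABLES LIKE 'wait_timeout'")
    print(cur.fetchone())  # e.g. ('wait_timeout', '28800')
conn.close()
```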
19:15:38 <jeblair> #topic Priority Specs
19:16:05 <jeblair> i took a quick look at the specs, and i think there are at least two we should look at and get merged soon
19:16:15 <jeblair> #link openstack_project refactor https://review.openstack.org/137471
19:16:28 <jeblair> #link In tree third-party CI https://review.openstack.org/139745
19:17:00 <jeblair> i think both will help us continue on the path to downstream reusability... thereby putting us all in the same boat and in a better position for collaboration
19:17:12 * nibalizer agree
19:17:25 <clarkb> +1
19:17:29 <clarkb> will take a look at those today
19:17:31 <fungi> on a related note, should the (formerly priority) puppet module split spec get moved to implemented? if so i'll whip up a quick change to take care of that
19:17:34 <mordred> ++
19:17:42 <anteaya> fungi: +1
19:17:54 <mordred> (I was ++-ing jeblair  - but also agree with fungi )
19:17:59 <nibalizer> fungi: yes.... i think there is one last thing in the storyboard let me check
19:18:16 <fungi> nibalizer: oh, cool. we can dig in after the meeting in that case
19:18:25 <jeblair> #topic Priority Efforts (New efforts?)
19:18:54 <jeblair> so our current priority efforts are: swift logs, nodepool dib, docs publishing (waiting on swift logs completion), and zanata
19:19:01 <jeblair> i think we have room to add 2 or 3 more
19:19:06 <mrmartin> jeblair: askbot?
19:19:09 <jeblair> especially since swift logs is hopefully winding down
19:19:14 <mrmartin> we are running without backup
19:19:19 <anteaya> would the two new specs be candidates?
19:19:37 <jeblair> mrmartin: i think askbot is a good candidate due to the importance to the community, yeah
19:19:47 <jeblair> anteaya: i think so; i'd probably lump them together
19:19:49 <mrmartin> the spec is in the review queue
19:20:01 <anteaya> jeblair: makes sense
19:20:03 <mrmartin> and the patches also
19:20:17 <clarkb> askbot and the puppet items make good candidates imo
19:20:26 <mrmartin> but it requires some effort from the infra side: launch an instance, migrate the db, etc.
19:20:28 <jeblair> fungi's work on images maybe too...
19:20:29 <zaro> isn't gerrit upgrade in PE somewhere?
19:20:38 <jeblair> zaro that's a good one
19:20:41 <fungi> jeblair: yeah, that's sort of taking on a life of its own outside of the dib work
19:20:59 <clarkb> fungi: jeblair though still very tightly coupled to the dib efforts
19:22:26 <jeblair> so, how about: add gerrit upgrade, askbot, and the puppet work; include image consolidation in the dib item?
19:22:40 <anteaya> I can buy that
19:22:50 <clarkb> sounds good to me
19:22:56 <jhesketh> +1
19:23:10 <fungi> wfm
19:23:36 <jeblair> #agreed new priority efforts: gerrit upgrade; askbot; third-party and openstack_project puppet efforts
19:23:37 <nibalizer> +1
19:23:52 <jeblair> #agreed image consolidation to be included in nodepool dib effort
19:24:05 <jeblair> #topic Priority Efforts (Swift logs)
19:24:16 <jeblair> so i think we probably covered most of this in the actions section
19:24:23 <jeblair> #link https://review.openstack.org/#/q/status:open+project:openstack-infra/project-config+branch:master+topic:swift_log_index_improvements,n,z
19:24:26 <jeblair> #link https://review.openstack.org/#/q/status:open+project:openstack-infra/project-config+branch:master+topic:enable_swift,n,z
19:24:32 <jeblair> i'll just throw those in here for reference
19:24:32 <jhesketh> Yeah I've got nothing else this week sorry
19:24:52 <jeblair> np
19:25:00 <jeblair> #topic Priority Efforts (Nodepool DIB)
19:25:37 <mordred> :)
19:25:59 <mordred> I have verified image upload and launch to work on both clouds
19:26:03 <mordred> at least on ubuntu
19:26:21 <jeblair> mordred: is this with current or pending nodepool changes, or with shade?
19:26:37 <mordred> with shade - yolanda has started hacking on my pending nodepool changes to get them finished up
19:26:38 <clarkb> There are still a handful of nodepool bug fixes that have come out of the dib work up for review. I have added tests for nodepool commands too. Reviews would be great, if only because a less buggy nodepool makes the dibification less confusing
19:26:48 <yolanda> yes, i wanted to raise a pair of questions
19:26:55 <fungi> i'm hoping to finish banging out the package list later today and get an experimental job landed to run nova python27 unit tests on devstack-trusty nodes
19:27:11 <yolanda> so i was talking with mordred about some ensureKeypair method, jeblair, do you know more details about it?
19:27:13 <jeblair> fungi: wow!
19:27:24 <clarkb> fungi: woot
19:27:35 <fungi> well, i think what i have in wip is already close, just need to finish confirming the missing bits
19:27:49 <clarkb> yolanda: can you expand for the rest of us?
19:27:52 <mordred> jeblair: I was telling yolanda about our discussion around keypair management and realized that I had not written down good notes - so I think it might be worth the three of us syncing up on that topic again
19:28:10 <mordred> clarkb: basically right now the logic for handling keypairs is in nodepool.py not provider_manager.py
19:28:33 <jeblair> sure, i think the general thing is that nodepool should not require a pre-existing keypair
19:28:34 <mordred> so I'm figuring out how to put an api in front of it similar to the one we use for floatingip
19:28:35 <yolanda> so i moved that logic to provider manager, but i don't feel that's what we will need
19:28:45 <mordred> jeblair: ++
19:29:05 <fungi> seems like a good improvement
19:29:09 <jeblair> currently, it creates one per instance; i think it would be okay to create one and cache it, but then it would also need to keep track of it
19:29:34 <clarkb> How does this interact with dib?
19:29:36 <mordred> yup. also - it's entirely feasible that nodepool might not need to create one at all - if one considers the dib case
19:29:36 <jeblair> (that can be done, it's just a bit more work)
19:29:40 <fungi> would hopefully minimize future keypair leaks
19:30:00 <mordred> where the keypair can quite happily be directly baked into the image
19:30:09 <mordred> so the nodepool logic should almost certainly handle those deployment choices
19:30:19 <jeblair> yeah, it may not be worth a lot of effort for us
19:30:19 <clarkb> mordred: the only issue with that potentially is downstream consumers of our images wouldn't have access to our private key(s)
19:30:30 <clarkb> mordred: So, we do still need the other thing
19:30:31 <yolanda> right now, if we don't pass a key name, it generates one based on hostname, then adds it
19:30:35 <yolanda> if the image fails, it removes it
19:30:41 <mordred> clarkb: right. turns out dib already has an element which says "please inject the key from the user that built this image"
19:30:58 <clarkb> mordred: yes but that doesn't fix the issue for people taking our images that we prebuilt
19:31:00 <mordred> clarkb: so there are ways we could chose to do that and still be friendly to downstreams
19:31:01 <fungi> mordred: but if we publish our images, other people can't reuse them easily
19:31:04 <mordred> clarkb: this is correct
19:31:15 <jeblair> mordred: so if we do that while running as "the nodepool/jenkins/whateverwewantocallit user" then we and downstream users should be set
19:31:32 <mordred> jeblair: yes - other than the direct-binary-reconsumption thing
19:31:40 <clarkb> jeblair: not downstream users that consume the image directly
19:31:48 <clarkb> I don't think we can have nodepool stop managing keys
19:31:48 <jeblair> yeah, was responding to the earlier thing
19:32:02 <clarkb> could be toggleable but the key management is still important
19:32:06 <mordred> yes
19:32:14 <jeblair> clarkb: it's important for snapshot images, yes
19:32:14 <mordred> I totally think nodepool needs the ability to manage keys
19:32:30 <mordred> I'm just saying that we may not choose to use that feature in our config - although we _might_ choose to use it
19:32:42 <mordred> so nodepool needs to grok that someone can make a choice about that
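To make the keypair discussion concrete, here is a rough sketch of what an ensureKeypair-style provider-manager method might look like; the method name comes from the discussion above, and everything else is a guess at a design that was still being settled:

```python
# A sketch assuming a novaclient-style client object; nodepool would
# still need to record any generated key so it can be cleaned up later.
def ensure_keypair(self, name, public_key=None):
    """Return the named keypair, creating it if it does not exist."""
    for keypair in self._client.keypairs.list():
        if == name:
            return keypair
    # With no public_key, nova generates a key server-side and returns
    # the private key exactly once - caching it is the "bit more work"
    # jeblair mentions above.
    return self._client.keypairs.create(name, public_key=public_key)
```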
19:33:10 <jeblair> any other dib related things to talk about?
19:33:20 <yolanda> mm, related to the same work
19:33:27 <yolanda> there is a missing feature for get capabilities
19:33:32 <yolanda> this should be created in shade, right?
19:33:35 <mordred> yes
19:33:41 <clarkb> get capabilities?
19:33:47 <mordred> if shade is missing a thing that nodepool needs, it should be added
19:33:56 <jeblair> clarkb: ask the cloud what its capabilities are
19:34:16 <clarkb> I see, floating ips and so on for example?
19:34:23 <yolanda> in nodepool there is some code that checks if cloud has os-floating-ips capabilities
19:34:27 <yolanda> yep
19:34:34 <mordred> right
19:34:36 <mordred> HOWEVER
19:34:42 <mordred> nodepool will stop doing that particular thing
19:34:52 <mordred> since shade does the right thing in that case already
19:35:04 <yolanda> i just removed that from nodepool directly
19:35:08 <jeblair> yay
19:35:10 <clarkb> also how is this related to dib? (I am trying to make sure I understand if these things are requirements or nice to have or whatever)
19:35:12 <mordred> yah. but it's an excellent example
19:35:18 <yolanda> but i wanted to check whether that was really fine, or whether it needs some extra work on shade
19:35:39 <mordred> clarkb: it's related to dib because the logic to deal with glance across clouds is in shade - and it's VERY ugly
19:35:52 <clarkb> mordred: specifically get capabilities though
19:36:02 <clarkb> mordred: we don't need to accommodate capabilities today for dib, or do we?
19:36:14 <mordred> no- but we do need to deal with it in the shade patch
19:36:19 <clarkb> I see
19:36:41 <mordred> so the question winds up being "add get capabilities support to shade" OR "remove need for it by adding logic to shade"
19:36:50 <asselin> o/
19:36:52 <mordred> I prefer the second, but the first could also work/be better
19:37:08 <yolanda> are there more use cases that need capabilities?
19:37:10 <mordred> depending on scope/effort and how general the check is vs. specific to nodepool logic
19:37:20 <mordred> yolanda: only other one I know of is the keypair extension
19:37:27 <jeblair> i think the end-state is that nodepool should not need it; it should all be in shade...
19:37:40 <jeblair> so if that's easy, go with that; if it makes something too big and complicated, then we can do it piecemeal
19:37:48 <mordred> ++
19:37:56 <mordred> see, jeblair says things clearly
19:38:20 <anteaya> mordred: he is a good translator for you, keep him close
19:38:31 * mordred hands jeblair a fluffy emu
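For reference, the os-floating-ips check yolanda removed was an extension probe along these lines (a sketch assuming novaclient's list_extensions contrib module is loaded; the open question above is whether anything like it belongs in shade instead):

```python
# Sketch of a capability probe; nodepool used roughly this pattern to
# decide whether a cloud could hand out floating ips.
def has_extension(nova_client, alias):
    """Ask the cloud whether it advertises a given API extension."""
    extensions = nova_client.list_extensions.show_all()
    return any(ext.alias == alias for ext in extensions)

# e.g.: if has_extension(client, 'os-floating-ips'): allocate one
```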
19:38:39 <jeblair> #topic Priority Efforts (Migration to Zanata)
19:39:04 <mrmartin> I guess pleia2 is not here today, she uploaded a new patch, requires review / testing
19:39:13 <jeblair> #link https://review.openstack.org/#/c/147947/
19:39:35 <jeblair> #topic Priority Efforts (Askbot migration)
19:39:38 <fungi> mrmartin: yeah, she's travelling for several speaking engagements at a couple of free software conferences
19:40:10 <jeblair> #link https://review.openstack.org/154061
19:40:18 <jeblair> that's probably the first thing we ought to do -- review that spec :)
19:40:19 <mrmartin> ok for askbot, I guess the tasks are clear, if anything comes up, let's discuss that.
19:41:41 <jeblair> #link https://review.openstack.org/140043
19:41:46 <jeblair> #link https://review.openstack.org/151912
19:41:58 <fungi> also i did update the https cert for the current non-infra-managed server and stuck it in hiera using the key names from the proposed system-config change
19:42:14 <jeblair> and it looks like there's a solr module we can use without needing our own, which is good (i hope)
19:42:17 <jeblair> fungi: thanks
19:42:34 <jeblair> #topic Priority Efforts (Upgrading Gerrit)
19:42:36 <mrmartin> jeblair, yes, with some limitations the solr module works, I've tested it
19:42:51 <jeblair> so we set a date last week and sent an announcement
19:42:54 <jeblair> for the trusty upgrade
19:43:13 <jeblair> i believe we're planning on doing that first, and then upgrading gerrit to 2.9 later?
19:43:25 <anteaya> 3rd party folks paying attention seem to know about the ip change coming up
19:43:32 <clarkb> ya since 2.9 needs a package only available on trusty (or not available on precise)
19:43:32 <zaro> so here are the changes for that : https://review.openstack.org/#/q/topic:+Gerrit-2.9-upgrade,n,z
19:43:42 <zaro> that's for after moving to trusty
19:43:48 <jeblair> clarkb: right, but specifically i meant not doing both at once
19:43:59 <asselin> I added it to tomorrow's cinder meeting
19:44:04 <anteaya> how much time do we need between the os upgrade and the gerrit upgrade?
19:44:32 <anteaya> asselin: cool
19:45:43 <anteaya> if we don't have a reason to wait, why would we wait?
19:45:48 <asselin> will the server be up beforehand to test, and the switch made only on the date specified?
19:45:59 <anteaya> asselin: yes
19:46:06 <jeblair> asselin: it will not be available to test
19:46:10 <fungi> well, also part of the question is, does only doing the distro upgrade by itself shrink the window we need for the outage (including time needed to roll back if necessary)
19:46:47 <fungi> but yes, more important is that changing too many things at once makes it harder to untangle where a bug came from
19:46:47 <jeblair> fungi: i don't think so... i think rollback is probably faster when doing a server switch
19:46:52 <zaro> i can't seem to find the announcement, could someone please provide a link?
19:46:56 <asselin> jeblair, it would be good for 3rd party folks to test our firewall settings, if there's "something" on the other end
19:47:07 <asselin> #link http://lists.openstack.org/pipermail/openstack-dev/2015-February/056508.html
19:48:35 <zaro> i agree with fungi, maybe we should wait a week between going to trusty and then gerrit 2.9?
19:48:35 <jeblair> so if folks don't want to do both at once, then we probably need to let it sit for at least a week or two before we attempt the gerrit upgrade
19:48:55 <jeblair> zaro: can you look at the schedule and propose a time for the gerrit upgrade during our next meeting?
19:49:08 <zaro> will do
19:49:10 <jeblair> #action zaro propose dates for gerrit 2.9 upgrade
19:49:15 <fungi> zaro: also take the release schedule into account there
19:49:19 <clarkb> last time we did this we separated OS from Gerrit upgrades. So that plan of action sounds good to me
19:49:36 <jeblair> thanks
19:49:37 <jeblair> #topic supporting project-config new-repo reviews with some clarity (anteaya)
19:49:49 <anteaya> #link https://etherpad.openstack.org/p/new-repo-reviewing-sanity-checks
19:50:05 <anteaya> so I'm frustrated with my reviews on project-config
19:50:12 * asselin notes that users won't have access to the new gerrit https during the time between the os and gerrit upgrades
19:50:16 <jeblair> anteaya: agree that adding expectations to README in project is a good idea
19:50:23 <anteaya> so I tried to capture some thoughts on this etherpad
19:50:55 <AJaeger> anteaya: so far I asked for a patch to governance - you want to have it merged first?
19:50:56 <anteaya> that captures the essence of what I am feeling
19:51:16 <anteaya> well the name is still in doubt as far as the tc is concerned
19:51:17 <clarkb> also note that new project creation has a race
19:51:22 <clarkb> so I haven't really been reviewing any of them
19:51:26 <anteaya> yet is it a reality in git.o.o right now
19:51:43 <jeblair> clarkb: oh, er, that should probably be the first thing we talk about in this meeting :(
19:52:01 <jeblair> clarkb: let's defer that discussion though
19:52:25 <anteaya> does anyone have any other thoughts?
19:52:47 <clarkb> anteaya: my concern is if we make this so complicated reviewers and users won't want to touch it
19:52:55 <anteaya> clarkb: fair enough
19:53:04 <anteaya> I can't bring myself to review project-config right now
19:53:06 <clarkb> traditionally we haven't been stackforge police
19:53:10 <fungi> if we go with governance patch being a prereq, then using depends-on crd in the commit message could ensure that even if we approve the new project it won't get created until the tc approves the governance change (for official projects)
19:53:15 <anteaya> since my concerns don't seem to be incorporated
19:53:37 <AJaeger> fungi, I would be fine with that one
19:53:59 <clarkb> but using CRD for the governance change dep for openstack/ projects seems reasonable
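The cross-repo dependency fungi describes is just a Depends-On footer in the project-config commit message pointing at the governance change's Change-Id, e.g. (the project name and Change-Id below are made up):

```
Add openstack/example-widget to the project list

Depends-On: I0123456789abcdef0123456789abcdef01234567
```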
19:54:11 <jeblair> anteaya: your first point is regarding stackforge; i agree with clark, i'm not certain we should care
19:54:27 <anteaya> well actually it is the use of openstack names in stackforge
19:54:42 <anteaya> since once they are used there, openstack loses the ability to say how they get used
19:54:44 <jeblair> anteaya: the second point, yes, things that go into the openstack namespace should be part of an openstack project (for the moment at least)
19:54:48 <anteaya> since as you say, we don't care
19:55:09 <fungi> as i pointed out in the earlier discussion, there are lots of existing cases of project names in stackforge repo names
19:55:18 <jeblair> anteaya: i feel like in many cases those may be nominative... like "neutron-foo" is the foo for neutron; hard to describe it otherwise...
19:55:28 <fungi> especially for the config management projects in stackforge
19:55:35 <anteaya> networking-foo is what neutron has been using
19:55:49 <jeblair> anteaya: at your suggestion?
19:55:52 <anteaya> when they have reviewed patches that concern the name use
19:55:56 <anteaya> yes
19:56:02 <anteaya> since they agreed with my point
19:56:14 <anteaya> about losing the ability to use the name for themselves
19:56:20 <jeblair> i'm not certain that i agree that all uses of the word neutron in a project name are incorrect
19:56:23 <anteaya> and determine its value by their actions
19:56:28 <fungi> when they go from stackforge to openstack as big-tent projects/teams should they also rename themselves from stackforge/networking-foo to openstack/neutron-foo?
19:56:37 <mordred> yeah - but I tend to agree with jeblair - stackforge/puppet-neutron, for instance, I do not think needs any blessing from anyone - it is descriptive, it is puppet modules to deploy neutron
19:56:52 <mordred> I don't think we need to police that
19:56:59 <anteaya> config repos are different from driver repos
19:57:02 <anteaya> to me
19:57:13 <nibalizer> also the amount of work to change puppet-neutron to anything else is nontrivial
19:57:14 <fungi> that's hard to write a policy around though
19:57:23 <anteaya> but all I am asking for is for someone from the project to weigh in
19:57:25 <jeblair> it seems even weirder that a neutron driver wouldn't use neutron in the name
19:57:31 <mordred> I can see that - but I don't think it's an area where we want to have an opinion, and should really only have one if we need to
19:57:33 <fungi> "this is bad, except sometimes it's not" is a reviewing policy nightmare
19:58:12 <fungi> i feel like policing stackforge project names is deep, deep bikeshed
19:58:23 <anteaya> but all I am asking for is for someone from the project to weigh in
19:58:32 <jeblair> anteaya: you have been asking them to change names
19:58:40 <anteaya> yes I have been
19:58:50 <mordred> anteaya: I think that implies that someone from the project has the right to an opinion
19:58:53 <jeblair> you even suggest that in your point in the etherpad
19:59:01 <anteaya> and at this point I am asking for someone from the project to weigh in on a patch that uses the project name
19:59:02 <mordred> that is somehow more valid than someone else's
19:59:19 <anteaya> is that not the point?
19:59:25 <anteaya> what else is the point of the name
19:59:29 <mordred> and I'm not sure I agree that is the case - as it puts people into a position to make a value judgement potentially unnecessarily
19:59:40 <nibalizer> the ptl of the project probably has plenty of things to worry about, and so when we say 'hey need input' they're gonna say 'link to policy doc plz?' and then the problem is right back on us
20:00:09 <aplawson> Hi folks, just watching the flow of things, first time IRC to anything OpenStack. :)
20:00:20 <mordred> specifically, it's a potential point for corruption and abuse - if someone doesn't like ewindisch, they might be more inclined to say "I don't think nova-docker should get to use nova in its name" - when at the end of the day, it is a docker driver for nova
20:00:43 <anteaya> I see it as abuse the other way
20:00:48 <mordred> and that is merely a factual statement. now - if it was a request for openstack/nova-docker
20:00:50 <nibalizer> anteaya: also once you publish software, I'm basically entitled to write puppet-<softwarename>, that's kind of not your call if i do it or not
20:01:03 <nibalizer> does that make sense?
20:01:12 <mordred> then it implies a level of credibility related to openstack/nova and I would then think that the nova-ptl should be involved
20:01:28 <anteaya> how do we do that after the fact
20:01:42 <anteaya> as we have agreed stackforge can do whatever they want
20:02:10 <fungi> keep in mind that stackforge is just one of many, many hosting options for a free software project. if projectname-foo wants to register its name on github, pypi, rtfd, et cetera we have no control there
20:02:13 <mordred> yup
20:02:19 <anteaya> correct
20:02:39 <anteaya> yet there is a feeling of attachment to openstack via stackforge
20:02:46 <fungi> so exercising that control over stackforge seems counter-productive
20:03:04 <AJaeger> It would be nice to have some guidelines, e.g. on how to name drivers for nova like nova-docker or plugins for neutron like networking-midonet
20:03:23 <AJaeger> to have similarities
20:03:26 <anteaya> I think naming things only has value if the name means something
20:03:36 <anteaya> and the value can easily degrade
20:03:46 <mordred> yeah - I hear that - but stackforge is very explicitly not the place for that
20:03:46 <fungi> for example, turbo-hipster?
20:03:53 <anteaya> right
20:04:02 <mordred> like, it exists to be a place that does not have a governance opinion
20:04:13 <anteaya> right
20:04:26 <anteaya> which is why I feel the way I do
20:04:26 <mordred> we don't want to make a secondary policy board for stackforge that governs choices people make there
20:04:29 <nibalizer> there will always be exceptions to any naming pedigree, places where something doesn't fit or is covered by two rules
20:04:30 <AJaeger> let me take the action to add information about CRD on governance patch to the infra-manual. ok?
20:04:32 <anteaya> if I am the only one, that is fine
20:04:35 <anteaya> move ahead
20:04:36 <fungi> oh, we're way over time too. sdague wasn't going to do a tc meeting today, correct?
20:04:40 <anteaya> I said what I feel
20:04:41 <sdague> fyi, for anyone looking for the TC meeting, it is cancelled today.
20:04:59 <sdague> fungi: correct, cancelled, just wanted to make sure no one was hanging out waiting for it
20:05:09 <fungi> sdague: well, i was, but...
20:05:10 <sdague> you guys can keep the channel
20:05:13 <clarkb> #action AJaeger update infra manual with info on setting up depends on for CRD between new openstack/* project changes and the governance change accepting that project
20:05:18 <jeblair> and yeah, i thought a few extra minutes would be helpful here
20:05:24 <fungi> agreed
20:05:37 <clarkb> sorry if I jumped the gun on the CRD thing but it seemed like we had consensus on that point
20:05:42 <jeblair> i think so
20:05:49 <fungi> i think we're in a good place to continue this in a change review
20:06:00 <fungi> thanks AJaeger
20:06:18 <jeblair> sorry we didn't get to the other items on the agenda
20:06:32 <jeblair> they'll be up first after priority efforts next time
20:06:51 <jeblair> but if you're curious, there are some links in today's agenda if you want to explore on your own
20:06:55 <jeblair> thanks everyone!
20:06:58 <jeblair> #endmeeting