19:03:35 <fungi> #startmeeting infra
19:03:36 <openstack> Meeting started Tue Apr  4 19:03:35 2017 UTC and is due to finish in 60 minutes.  The chair is fungi. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:03:37 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:03:39 <openstack> The meeting name has been set to 'infra'
19:03:43 <fungi> #link https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting#Agenda_for_next_meeting
19:03:50 <fungi> #topic Announcements
19:04:07 <fungi> semi-announcement, i'll be travelling and mostly unavailable next week
19:04:25 <fungi> pabelanger has graciously volunteered to chair the meeting on the 11th
19:04:33 <fungi> thanks pabelanger!
19:04:41 <pabelanger> np
19:04:43 <fungi> as always, feel free to hit me up with announcements you want included in future meetings
19:04:51 <fungi> #topic Actions from last meeting
19:05:01 <jlvillal> o/
19:05:05 <fungi> #action fungi put forth proposal to flatten git namespaces
19:05:14 <fungi> still haven't gotten to that
19:05:25 <fungi> nothing else from last week
19:05:33 <fungi> #topic Specs approval
19:05:59 <fungi> i thought we might have SpamapS's zuul v3 sandboxing approach up in time
19:06:12 <fungi> should it be a last-minute addition or hold off?
19:07:34 <jeblair> it just got another revision, so maybe hold?
19:07:45 <fungi> okay, fair enough
19:08:00 <fungi> #topic Priority Efforts
19:08:25 <fungi> nothing called out specifically here this week, though i know zaro and clarkb have picked up steam again on the gerrit upgrade work
19:09:10 <fungi> and zuul v3 and storyboard activity are ongoing and gaining traction
19:09:20 <clarkb> yes, I'm tired of restarting Gerrit
19:09:29 <fungi> you and the rest of us ;)
19:09:51 <fungi> #topic Ipsilon integration for OpenStack ID (pabelanger)
19:10:01 <pabelanger> Ohai!
19:10:03 <fungi> i guess jeblair may have some input on this one
19:10:39 <pabelanger> I keep getting pinged by trystack folks wanting to know how they can drop facebook as their auth backend and use the openstack foundation database. As I understand it, we need ipsilon for this
19:10:47 <fungi> is ipsilon integration preventing trystack.org from switching off facebook group auth?
19:11:08 <pabelanger> maybe? is there another option for them?
19:11:20 <fungi> is this oidc feature stuff missing from the current openstackid implementation, or something else?
19:11:52 <kambiz> we just need to know what options we have.  old facebook code that doesn't work with newton and ocata.
19:12:04 <pabelanger> I am not sure honestly, I thought we in -infra wanted to move away from the current openstackid code base, to ipsilon
19:13:03 <fungi> well, not all of openstackid, just the openid/oidc provider interface... the backend would presumably remain as it is now
19:13:32 <clarkb> and I think the plan was to run them side by side, and as long as they use the same backend you should be able to migrate from one to the other trivially
19:13:41 <clarkb> at least initially
19:13:49 <fungi> that's at least how the idp.openstackid.org poc was set up
19:14:24 <pabelanger> so, I am not sure how to proceed with trystack.org and their openstackid integration. It is something they have been wanting to do for a while
19:14:33 <pabelanger> I'd like to help, but need some direction
19:14:34 <clarkb> (migration may involve updating root url of ids in a database like what we did with ubuntu one and gerrit recently)
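For context, a minimal sketch of the kind of id-root rewrite clarkb is describing, assuming Gerrit 2.x's account_external_ids schema; the database name and provider URLs are placeholder assumptions:

    # rewrite stored openid urls when the identity provider's root changes
    # (table/column names match gerrit 2.x; database name and urls are hypothetical)
    mysql reviewdb <<'EOF'
    UPDATE account_external_ids
       SET external_id = REPLACE(external_id,
                                 'https://login.example.old/',
                                 'https://idp.openstackid.org/')
     WHERE external_id LIKE 'https://login.example.old/%';
    EOF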
19:14:44 <kambiz> are there docs available to do auth for an ocata install against idp.openstack.org ?
19:14:51 <fungi> have they tried using openstackid.org directly yet?
19:15:03 <fungi> curious what keystone needs which it isn't providing
19:15:15 <kambiz> also we need auto-provisioning of tenants / users based on whether they are members of e.g. a trystack group.
19:15:34 <mordred> kambiz: ah - so you need group membership from the idp?
19:15:45 <fungi> kambiz: at best, there may be documentation for configuring keystone to authenticate via openid in the general sense, and openstackid.org attempts to be a generic openid provider
19:15:49 <kambiz> mordred: correct.
19:16:36 <mordred> fungi: I think the gotcha will be openstackid not implementing (to my knowledge) the group-membership extension
19:16:50 <clarkb> why is group membership necessary?
19:17:01 <kambiz> fungi: want to minimize the administration. so basically users with openstackid creds would request access to a "trystack" group, and once they are granted access, when they log in to x86.trystack.org they would be directed to the IDP and sent back to trystack (which would also auto-provision their tenant / user, but only if they are members of the group)
19:17:09 <fungi> group-based authorization isn't part of the openid 1.x spec as far as i understand, and is instead openid-connect (oidc) and openid 2.x (which ipsilon supposedly has already and openstackid has been working toward implementing but i don't know whether they finished)
19:17:46 <fungi> group administration would be yet another layer there, it sounds like
19:17:48 <notmorgan> uhm. yeah and is that an issue? wouldn't the group be something that would be added on the SP (service provider/consumer)?
19:17:57 <clarkb> I've worked around similar by maintaining the group membership on the application side fwiw
19:18:10 <kambiz> clarkb: *shrug* I guess it's not strictly necessary.  Presumably someone with an openstackid account is already interested in openstack, so they get trystack for free, vs. the current setup which is facebook auth and requires membership in a trystack group on facebook.
19:18:15 <fungi> because you want some software to provide an interface for some unnamed admins to decide who can be in or out of the authorization group?
19:18:16 <notmorgan> clarkb: that is typically the way it works, the SP is usually responsible
19:18:40 <notmorgan> since the IDP otherwise might be leaking that data outside to another source (excluding Microsoft ecosystem and ADFS for various reasons)
19:19:07 <notmorgan> i would hesitate to lump that into the IDP itself.
19:19:19 <fungi> yeah, so trystack would need to grow a group authorization management interface
19:19:56 <notmorgan> fungi: depending on what trystack does, it may have that already, not sure if it's just keystone under the hood or something else
19:20:02 <fungi> this is where my lack of understanding of keystone options rules me out of making good suggestions
19:20:29 <notmorgan> it might require a *very* new keystone. but it should work. (new as in Ocata or post Ocata)
19:20:39 <notmorgan> lbragstad: ^ cc for federated identity tech stuff.
19:21:26 <fungi> i think ultimately the desire is to allow people with an openstack foundation account (e.g. via openstackid.org openid login) to use per-account tenants autoprovisioned in trystack, which is a bog-standard rdo deployment?
19:21:46 <kambiz> fungi: gonna paste link to the auth stuff we added to liberty release.
19:21:49 <kambiz> (looking for it)
19:22:07 <wfoster> https://github.com/trystack/python-django-horizon-facebook
19:22:21 <kambiz> it's keystone insofar that once the user is authed, the user and tenant are created (with a dummy/random password)
19:22:48 <kambiz> users can then download the keystonerc_<user> from horizon and never look back to the facebook auth stuff.
19:22:58 <kambiz> (so they can run openstack cli commands)
19:23:00 <wfoster> fungi: that seems about right, more or less.
19:23:22 <fungi> okay, so really the openid auth is just needed far enough to get an api access account created and then they use the autogenerated keystone credentials moving forward?
19:23:43 <fungi> not actually doing keystone federation to an openid provider?
19:24:04 <fungi> or is the desire to move from the former to the latter?
19:24:11 <kambiz> fungi: yes, and just generally a replacement for facebook, and to use an IDP with a deployment in a manner that is prescribed / has wider adoption.
19:24:52 <kambiz> basically, is anyone doing federated authn/authz with auto-provisioning of tenants/users?
19:24:56 <fungi> okay, and so with facebook you got a group administrator role who could add/remove accounts in some group?
19:25:30 <kambiz> fungi: yep.  users ask to join, and we take a quick 5 second glance at the request, and make a judgement call that they are real humans, and not spammers.
19:25:32 <fungi> well, ids rather than accounts from trystack's perspective
19:25:39 <wfoster> trystack needs OAuth or a similar identity provider to plug into keystone to auto-create accounts after they've been approved. I think we'd defer to people just having an openstack.org account (or LP and signing the CLA?) so long as we have the ability to revoke access in abuse scenarios; the facebook auth piece just fits in so we can somewhat determine it's indeed a human and we get out of the business of
19:25:40 <wfoster> manually managing accounts (even with tools it's not scalable).
19:25:41 <kambiz> and once we allow them into the group, they get access to trystack.
19:26:18 <fungi> contributor license agreement sounds completely out of scope for using trystack, fwiw
19:27:13 <fungi> but anyway, i don't know that with either the current openstackid codebase nor switching to an externally (from trystack's perspective) managed ipsilon instance gets you any sort of group management interface
19:28:17 <fungi> it's possible the foundation site maintainers would be able to delegate access to the group management interface they use for www.openstack.org properties, but i also don't know how hierarchical that interface is
19:28:51 <notmorgan> fungi: CLA seems a bit heavy handed to require (so ++)
19:28:57 <fungi> as notmorgan and clarkb pointed out, you can always do group management on the trystack side of things
19:29:08 <notmorgan> and i highly recommend that model if possible
19:29:29 <notmorgan> it doesnt rely on the IDP to grow smarts or have someone manage internally
19:29:35 <jeblair> i keep seeing suggestions that maybe just having a foundation membership account is sufficient.  if we can agree that's the case, then it doesn't seem like there should be any obstacles.
19:29:37 <wfoster> that would be ideal, using delegated group mgmt from openstack.org - we had looked at freeipa for this but i don't think it's ready with the self-service options currently, e.g. http://www.freeipa.org/page/V4/Community_Portal#Self-Service_Registration.
19:29:58 <fungi> from something as basic as an htpasswd callout to a list of openid urls, to as featureful as a custom-written web management interface or some off-the-shelf groupware
19:30:17 <wfoster> somewhere mod_auth_openidc fits in; that's where my understanding of ocata/keystone falls short here.
19:30:28 <fungi> yeah, if just having an openstackid.org id is sufficient, then sounds like it might just work already?
19:31:21 <wfoster> our main thing is to just be sure it's indeed a human and to make sure we can take action in abuse scenarios if needed.  it seems having an openstack.org id would be enough in my opinion, do you concur kambiz?
19:31:39 <fungi> clarkb: was mod_auth_openid the one you used in your etherpad authentication experiments?
19:31:49 <clarkb> fungi: yes, it's not perfect but mostly worked
19:31:57 <fungi> so not mod_auth_openidc
19:32:18 <clarkb> some of the issues were on openstackid's side, not exactly following the spec, and others were bugs in mod_auth_openid
19:32:19 <fungi> but you managed (after some fixes) to get apache mod_auth_openid working with current openstackid.org
19:32:26 <clarkb> the issues on openstackid side got fixed
19:32:30 <clarkb> yup
19:32:47 <kambiz> wfoster: *nod* that's what I said earlier as well, to clarkb's point
19:33:24 <kambiz> just not sure if mod_auth_openid is enough, since from what pabelanger and ayoung have mentioned in the past, there is no auto-provisioning of the tenant/user
19:33:35 <fungi> okay, so seems to me it's worth testing now to just authenticate against openstackid.org without any group membership requirements and see if it works
19:33:59 <fungi> whichever module you use at the client side
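As a minimal sketch of that test, assuming the bmuller mod_auth_openid Apache module clarkb used for the etherpad experiments; the directive names are from memory and should be checked against the module's README, and the paths are assumptions:

    # protect a location with openid logins pinned to openstackid.org
    sudo tee /etc/apache2/conf-available/openid-test.conf <<'EOF'
    <Location /auth/openid>
      AuthOpenIDEnabled On
      # only offer and trust the foundation provider
      AuthOpenIDSingleIdP https://openstackid.org/
      AuthOpenIDTrusted ^https://openstackid\.org/
      # module state database (path is an assumption)
      AuthOpenIDDBLocation /var/cache/apache2/mod_auth_openid.db
    </Location>
    EOF
    sudo a2enconf openid-test && sudo service apache2 reload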
19:34:45 <fungi> i mainly wanted to make sure this isn't blocked on a production rollout of ipsilon (unless it comes with a set of developers eager to hack on getting that going infra-side)
19:35:01 <pabelanger> ya, I wasn't 100% sure of that either
19:35:56 <fungi> because movement there has been slow, so we'd either need more people dedicated to working on that or alternatives which can be used in the interim
19:36:54 <fungi> okay, any objections to testing using trystack.org with openstackid.org as described above, and then circling back around if there are issues identified?
19:37:22 <pabelanger> I'm happy people talked
19:37:24 <pabelanger> :)
19:37:40 * wfoster hands pabelanger a food truck coupon
19:37:51 <fungi> the openstackid.org devs are generally attentive and willing to fix implementation bugs when identified
19:38:16 <fungi> though at this particular moment they're busy flinging load tests at the openstackid-dev server i think
19:38:50 <ayoung> mod_auth_* is not enough for provisioning.  Provisioning is outside the auth standards
19:39:34 <fungi> #agreed The trystack.org maintainers should test integration with current https://openstackid.org/ openid implementation (without oidc group membership lookups), and identify any shortcomings
19:40:00 <fungi> ^ that cover it for now?
19:40:24 <fungi> ayoung: thanks, and yes i'm guessing you need some automation for keystone account/tenant autoprovisioning regardless?
19:40:42 <fungi> ayoung: or is that a keystone feature?
19:40:47 <clarkb> right, it doesn't sound like using ipsilon or openstackid or any other openid backend is going to change that
19:41:00 <clarkb> something needs to do it and those tools don't solve that problem
19:41:15 <fungi> presumably something is already doing it for people logging in via facebook
19:41:39 <fungi> so i would guess that automation remains mostly the same anyway
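For the auto-provisioning gap ayoung points out, keystone's Ocata mapping engine (the "*very* new keystone" notmorgan mentioned) can create the project at federated login time. A minimal sketch, where the provider/mapping names, the _member_ role, and the REMOTE_USER claim are illustrative assumptions:

    # mapping rules: create the user and a per-user project on first login
    cat > trystack-mapping.json <<'EOF'
    [
      {
        "local": [
          {"user": {"name": "{0}"}},
          {"projects": [
            {"name": "trystack-{0}",
             "roles": [{"name": "_member_"}]}
          ]}
        ],
        "remote": [
          {"type": "REMOTE_USER"}
        ]
      }
    ]
    EOF
    # register the provider, mapping, and protocol with keystone
    openstack identity provider create --remote-id https://openstackid.org/ openstackid
    openstack mapping create --rules trystack-mapping.json trystack
    openstack federation protocol create openid \
        --identity-provider openstackid --mapping trystack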
19:42:09 <fungi> okay, 18 minutes remaining, so moving on
19:42:13 <wfoster> kambiz: fungi: that sounds great to me, thank you.  we'll also work on patching the fb auth bits in the meantime until we can cut over.  we have a parallel newton environment we can upgrade to ocata to start testing openstackid.  who should we ping with implementation questions after we've read up?
19:42:40 <fungi> wfoster: us but also smarcet and jpmaxman in #openstack-infra are the main openstackid maintainers
19:42:46 <fungi> #topic Stop enabling EPEL mirror by default (pabelanger/mordred)
19:42:55 <pabelanger> mordred proposed 453222 today, and would like to see if anybody objects. If not, I can help keep an eye on jobs failures for EPEL dependencies.
19:43:05 <pabelanger> https://review.openstack.org/#/c/453222/
19:43:37 <pabelanger> today both puppet and tripleo have moved away from EPEL, since it was unstable and often broke
19:43:38 <fungi> opposite of objecting, i'm wholeheartedly in favor
19:43:39 <clarkb> someone should check if devstack needs it for anything
19:43:41 * mordred is in favor of it
19:43:49 <clarkb> ianw: ^ do you know?
19:43:49 <mnaser> just an fyi, kolla depends on it
19:43:51 <mordred> clarkb: we'll find that out very quickly :)
19:43:51 <mnaser> so we'll have to check
19:44:02 <mnaser> https://github.com/openstack/kolla/blob/master/tools/setup_gate.sh#L55-L57
19:44:03 * fungi realizes it's not a given that mordred is necessarily always in favor of changes he proposes
19:44:07 <jpmaxman> Happy to help wfoster; tied up in a meeting but feel free to reach out
19:44:08 <ianw> i don't believe so
19:44:08 <mordred> mnaser: I believe kfox111 said he had a patch queued up for kolla-k8s
19:44:16 <pabelanger> mnaser:  sudo yum-config-manager --enable epel
19:44:20 <pabelanger> is how to enable it
19:44:29 <mordred> fungi: I am happy to propose the patch so it can be debated - but in this case, I also like it
19:44:37 <mnaser> this was just gate code to use the mirrors
19:44:43 <mnaser> so we'll just have to drop the mirrors, i'll double-check this on the kolla side
19:44:58 <pabelanger> we'll keep mirroring EPEL, but this means jobs now opt-in
19:45:04 <pabelanger> and understand, it might not be the most stable
19:45:13 <mnaser> gotcha
19:45:21 <fungi> also the enablement command should be idempotent, so you could start running it before we drop the default configuration presumably
19:45:40 <mordred> well - so mnaser/kolla have to do their own config since it's being done during the container build, inside the containers
19:45:46 <mnaser> we already do yum install epel-release in kolla
19:45:51 <pabelanger> okay, good! This is something I've hoped to do for a long time :)
19:45:53 <mnaser> but it just seds things around in the gate to point towards the mirror
19:45:54 <mordred> which means nothing should change for that kolla job
19:46:16 <mordred> this is only about the epel repo being enabled on the vm itself
19:46:24 <ianw> on CI images, epel-release should already be installed
19:46:25 <pabelanger> fungi: I'm happy to leave it with you to approve when you think we are ready
19:46:55 <mordred> ianw: yes. but not inside of containers being built on CI images
19:46:57 <mnaser> inside the docker images, we do yum install epel-release, but we also do a sed in there to point towards mirrors, which means if the epel mirrors stop the builds will fail (unless i'm missing something obvious *shrug*)
19:47:09 <mordred> mnaser: yah - we're not stopping the mirrors
19:47:18 <mordred> mnaser: we're only stopping from enabling them by default in the CI images
19:47:21 <mnaser> oh
19:47:23 <pabelanger> ya
19:47:40 <mnaser> sorry
19:47:41 <mnaser> gotcha now
19:47:42 <fungi> sounds like kolla wasn't relying on that part regardless
19:47:46 <mordred> we should likely be extra clear with folks on this point :)
19:47:49 <mnaser> nope it wasn't
19:48:04 <pabelanger> I think puppet and tripleo were the biggest, and they were not affected by today's outage
19:48:27 <fungi> okay, sounds safe to roll forward in that case
19:48:50 <fungi> i guess let's give the ml thread a little more time to collect feedback unless this is urgent enough we need to move forward asap
19:49:06 <fungi> but i don't see any obvious blockers
19:49:08 <pabelanger> WFM
19:49:40 <fungi> mainly want to make sure nobody can say they were paying attention to the ml but were caught by surprise that we changed this
19:49:54 <fungi> or weren't given time to add that command to their jobs
19:50:27 <fungi> anything else on this? otherwise i can try to cram in a last-minute topic or two
19:50:54 <pabelanger> none, thank you
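To recap the opt-in pabelanger describes, a job on a CentOS node would re-enable the still-mirrored repo along these lines; a sketch, where yum-utils providing yum-config-manager is the only assumption beyond the command quoted above:

    # opt back in to EPEL once 453222 disables it by default on CI images
    sudo yum install -y yum-utils          # provides yum-config-manager
    sudo yum-config-manager --enable epel
    # idempotent, per fungi, so jobs can add this before the default flips
    sudo yum makecache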
19:51:27 <fungi> #topic Mailman 2.1.16 fork or Xenial upgrade soon?
19:51:42 <fungi> i think jeblair asked this over the weekend
19:52:07 <clarkb> I think we should xenial soon. Upgrade is fresh and process should be very similar
19:52:16 <fungi> apparently there's some content encoding nastiness in the trusty mailman packages, fixed subsequently upstream and never backported
19:52:29 <jeblair> there was talk about looking into whether a backport was feasible (it's a one line patch).  i don't know if anyone has done that.
19:52:36 <jeblair> but yeah, a further upgrade seems doable.
19:52:50 <pabelanger> I can look into the backport, if we want to stay. But also happy to xenial
19:52:58 <fungi> the version in xenial is apparently new enough not to suffer, but of course comes with its own new and as of yet unidentified bugs most likely ;)
19:53:31 <fungi> so sort of a roll of the dice, but on the other hand we'll need to upgrade it to xenial (or something else) eventually too
19:54:15 <fungi> i'm leaning toward going ahead and doing a second upgrade maintenance, but i don't have time to drive it so would need a volunteer regardless
19:54:35 <pabelanger> we know a few ubuntu packagers, we should at least see if we can get the patches into -updates
19:54:45 <fungi> and i'll admit if ubuntu can sru that patch into trusty-updates, that's a lot less work for us
19:55:02 <jeblair> i'd prefer not to have to upgrade until after the summit.
19:55:10 <fungi> plus, that benefits people running trusty mailman packages besides us
19:55:22 <fungi> yeah, we're definitely getting close to the summit now
19:55:36 <pabelanger> I'll get an SRU request in place
19:55:46 <fungi> pabelanger: great, thanks!
19:55:53 <clarkb> ya, I'm just not very hopeful those go through quickly (considering the python3 experience on a current LTS)
19:55:58 <fungi> jeblair: have a link to the defect report or upstream patch?
19:56:20 <fungi> well, mailman is much more of a "leaf" package than an interpreter would be
19:56:36 <fungi> a lot fewer reverse-depends to consider
19:57:20 <jeblair> fungi: i'll look it up
19:57:50 <jeblair> #link mailman bug https://bugs.launchpad.net/mailman/+bug/1251495
19:57:50 <openstack> Launchpad bug 1251495 in mailman (Ubuntu Trusty) "Lists with topics enabled can throw unexpected keyword argument 'Delete' exception." [High,Triaged]
19:57:59 <fungi> thanks!
19:58:18 <fungi> #action pabelanger Open an Ubuntu SRU for bug 1251495
19:58:38 <fungi> should just be able to add the ubuntu package as an affected project on that mailman bug and go from there
19:58:51 <pabelanger> sure
19:59:07 <fungi> #topic Open discussion
19:59:11 <fungi> one minute remaining
19:59:17 <mtreinish> firehose!
19:59:23 * cmurphy plugs topic:unpuppet-zuul-workers
19:59:28 <jeblair> i will be unavailable next week
19:59:34 <fungi> i was going to also try and work in a topic for requested repo renaming maintenance scheduling
19:59:39 <fungi> jeblair: me too!
19:59:45 <mtreinish> #link https://review.openstack.org/#/q/status:open++topic:more-firehose
19:59:50 <jeblair> fungi: see you there!  ;)
19:59:54 <fungi> sounds like it'll be a (quiet|busy) week for infra
20:00:07 <fungi> oh, and we're out of time
20:00:11 <fungi> thanks all!
20:00:16 <fungi> tc meeting, up next. stay tuned!
20:00:19 <mtreinish> also if an infra-root is available to help me push things into prod this week, that'd be nice :)
20:00:19 <fungi> #endmeeting