18:00:31 #startmeeting keystone
18:00:32 Meeting started Tue Jun 28 18:00:31 2016 UTC and is due to finish in 60 minutes. The chair is stevemar. Information about MeetBot at http://wiki.debian.org/MeetBot.
18:00:33 o/
18:00:34 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
18:00:34 howdy
18:00:36 hiiii o/
18:00:37 The meeting name has been set to 'keystone'
18:00:43 o/
18:00:44 o/
18:00:44 o/
18:00:46 hi all o/
18:00:49 o/
18:00:52 o/
18:00:58 \(~_~)/
18:01:12 o/
18:01:30 i wonder if dolphm is back from kung fu noodle
18:01:39 no shaleh?
18:01:41 agenda is here https://etherpad.openstack.org/p/keystone-weekly-meeting
18:01:54 o/
18:02:04 o/
18:02:44 alrighty, let's get this show on the road
18:02:50 #topic Make oslo.messaging and caching required for keystonemiddleware
18:03:05 #link https://bugs.launchpad.net/keystonemiddleware/+bug/1540115
18:03:05 Launchpad bug 1540115 in keystonemiddleware "use extras to install optional dependencies" [Medium,In progress] - Assigned to Steve Martinelli (stevemar)
18:03:44 o/
18:03:44 there was a bug to use "extras" - like how we do it in keystoneauth - for the audit and caching support in keystonemiddleware
18:03:54 but jamielennox brought up a good point
18:04:11 we highly recommend caching, so why not make it required
18:04:35 why do we recommend caching? Have we done performance measurements?
18:04:37 and audit support only needs oslo.messaging, which most services will already depend on
18:04:54 bknudson_: of course not, but caching fixes everything (tm)
18:05:39 bknudson_: i believe we have caching enabled in the gate
18:05:43 and breaks everything else :-)
18:06:07 and even if you don't have something set up, it'll use an in-memory cache
18:06:26 the memory cache is terrible
18:06:29 and needs to die.
18:06:29 so we can never require that a deployment turn on caching or messaging
18:06:37 die cache die
18:06:54 however for this i think it's safe to say that we recommend it all so heavily that we should just make the dependencies required to make it work always installed
18:07:17 messaging used for audit only, right?
18:07:21 rather than have to do the pip install keystonemiddleware['cache,audit_messaging_notifications']
18:07:24 ayoung: correct
18:07:36 stevemar: using an in-memory cache in production would be terrible if it doesn't clean up after itself; and i'm assuming it doesn't
18:07:37 I thought the point of oslo cache was that you could have different implementations (not just memcached)
18:08:19 dstanek: it does. the issue with the in-memory cache is that multiple processes don't share
18:08:37 and the in-memory cache cleans up [runs a for loop on the dict] on every get.
18:08:40 notmorgan: least common algorithm?
18:09:03 dstanek: tokens can be validated on one process and fail on another
18:09:07 dstanek: for example
18:09:16 (eek, sorry to be late)
18:09:27 notmorgan: i'm more worried about absorbing all memory for the cache
18:09:29 dstanek: so subsequent requests succeed/fail/succeed because the processes cached data
18:09:30 henrynash: no worries :)
18:09:49 unless it is bounded; only keeps last X entries
18:09:54 basically don't enable cache with in-memory caching as a default ANYwhere it isn't already
18:09:57 unfortunately we tried to remove that and devstack suffered and we got slapped for it so we're stuck with that for a little bit
18:10:02 and we need to deprecate the current one
18:10:09 jamielennox: yep
18:10:24 if someone advocates adding in-memory cache anywhere else, i'm sorry, but no.
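The per-process problem described just above is easy to picture with a toy sketch; the dicts and helper below are purely illustrative and are not the keystonemiddleware cache:

    # A toy sketch (not keystonemiddleware code): each worker process owns its
    # own dict, so a token cached as valid in one worker is a miss, or a stale
    # entry, in another.
    worker_a_cache = {}   # hypothetical cache owned by worker process A
    worker_b_cache = {}   # hypothetical cache owned by worker process B

    def validate_token(cache, token_id, keystone_says_valid):
        """Return the cached answer if present, otherwise 'ask keystone'."""
        if token_id in cache:
            return cache[token_id]
        cache[token_id] = keystone_says_valid
        return keystone_says_valid

    # Worker A validates the token while it is valid and caches the result.
    print(validate_token(worker_a_cache, "abc123", keystone_says_valid=True))   # True
    # The token is then revoked, but worker A keeps serving its stale answer...
    print(validate_token(worker_a_cache, "abc123", keystone_says_valid=False))  # True (stale)
    # ...while worker B, which never cached it, sees the revocation right away.
    print(validate_token(worker_b_cache, "abc123", keystone_says_valid=False))  # False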
18:10:38 and that being slapped for it was a massive overreaction
18:10:43 revert as jamie mentioned: https://github.com/openstack/keystonemiddleware/commit/70a9754ae68c3eb4d5645c76f3e61052486e9054
18:10:50 how about messaging?
18:10:53 bknudson_: so yeah, it does actually work :P
18:11:12 bknudson_: cause the neutron gates kept timing out iirc
18:11:15 ayoung: so if you install audit middleware but don't install oslo.messaging it just logs everything
18:11:21 ayoung: what about it?
18:11:24 stevemar: the fact is the gate doesn't need the in-memory cache.
18:11:47 which is not particularly useful - and can actually be accomplished by configuring oslo.messaging that way if you're really interested
18:12:13 but i see no reason to do that, so if you want to use audit middleware you have to install oslo.messaging separately
18:12:18 so, if this throws things into the in-memory cache by default with audit middleware
18:12:22 stevemar, should we require it
18:12:24 i'm against this.
18:12:26 None of this is going to affect me since we've got deployment tools.
18:12:38 should we require oslo.messaging, caching go bye bye
18:12:41 ayoung: doesn't hurt it
18:12:41 the question is - is it just going to be easier to make oslo.messaging a direct dependency
18:13:00 ayoung: we will want caching, we just want to use oslo.cache instead of the current thing
18:13:18 notmorgan: against because of extra deps?
18:13:32 jamielennox: so maybe wait until we move to oslo.cache, i don't have a timeline for that
18:13:38 swift has their own cache implementation, just fyi
18:13:47 moving to oslo.cache first makes more sense.
18:13:49 oslo.messaging is a big dependency and audit is not that widely used
18:14:01 i think the oslo.cache one is an easy yes
18:14:29 * stevemar pokes notmorgan
18:14:38 jamielennox: i think notmorgan is against using the in-memory cache
18:14:43 basically
18:14:50 notmorgan: would you still be against it if we used oslo.cache
18:15:00 i am against defaulting to the in-memory cache
18:15:03 -2 for in-memory cache from me - i'm fine with adding the deps though
18:15:11 jamielennox, messaging is not a dependency of audit middleware
18:15:14 not a hard dependency
18:15:19 if we add the dep we need to make sure the in-memory cache is NOT used anywhere.
18:15:24 oslo.cache should be a hard-dep
18:15:25 can we have an in-memory cache that only caches a few tokens?
18:15:27 audit middleware can send audit to the local logs
18:15:34 bknudson_: no. lets not even bother.
18:15:44 OK..yes to deps on oslo.cache and oslo.messaging. No to in memory.
18:15:49 Next topic
18:15:56 ayoung: not quite
18:16:08 yes to depends on oslo.cache, use extras for oslo.messaging
18:16:08 the extra deps don't really matter imo.
18:16:31 most everything uses oslo.messaging anyway.
18:16:39 stevemar, I have to figure out how to map extras to RPM dependencies
18:16:46 Does swift use oslo.messaging?
18:16:48 notmorgan: oh, yea, dump that asap
18:16:49 I'd rather not use extras and just force the depends
18:16:55 ayoung: "recommended"
18:17:00 ayoung: or "suggested"
18:17:05 gyee: it's not a dep, but it's pretty useless without it
18:17:07 ayoung: or in RPM "required"
18:17:08 nope, swift does not use oslo.messaging
18:17:18 ayoung: extras is really only a pip thing, map it as a hard dep in the rpm
18:17:26 gyee: swift does everything differently, quit looking at them for a comparison :)
18:17:28 jamielennox, no, you can still send audits to a log
18:17:36 notmorgan, ok. That works for this case
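A minimal sketch of the optional-dependency trade-off being debated here, assuming the audit middleware falls back to local logging when oslo.messaging is absent; the function and variable names are illustrative, not the actual keystonemiddleware audit code:

    # A hedged sketch of the optional-import pattern that extras imply: audit
    # events fall back to the local log when oslo.messaging is not installed.
    # Names are illustrative; see keystonemiddleware.audit for the real code.
    import logging

    logging.basicConfig(level=logging.INFO)
    LOG = logging.getLogger(__name__)

    try:
        import oslo_messaging  # provided by the 'oslo.messaging' package
    except ImportError:
        oslo_messaging = None

    def emit_audit_event(event):
        """Send a CADF audit event on the message bus if possible, else log it."""
        if oslo_messaging is not None:
            # The real middleware would build a transport/notifier from
            # oslo.config options here and publish the event on it.
            LOG.info("would notify via oslo.messaging: %s", event)
        else:
            LOG.info("oslo.messaging not installed; audit event: %s", event)

    emit_audit_event({"action": "authenticate", "outcome": "success"})

Making the libraries hard dependencies removes the fallback branch entirely; keeping them as extras means packagers have to map the pip install keystonemiddleware['cache,audit_messaging_notifications'] form mentioned above onto hard RPM requirements themselves.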
18:17:40 I would guess swift wouldn't like us adding a dependency on oslo.messaging
18:17:40 gyee: why - auth_token already logs requests
18:17:50 jamielennox: for the cadf event
18:17:56 gyee: also oslo.messaging has a log driver you can use to do that
18:17:59 jamielennox, CADF formatted logs, to a separate log file
18:18:04 bknudson_: if swift uses ksm in a deployment, installing oslo.* is already happening
18:18:23 jamielennox, oslo.messaging's log driver does not work for swift
18:18:32 alright, next topic, i wrote the outcome in the etherpad
18:18:38 gyee: why?
18:18:48 gyee: i mean - didn't i just fix that?
18:19:22 jamielennox: moving on, and yeah, you just did
18:19:26 #topic Should we add a "migration completion" command to keystone-manage?
18:19:27 jamielennox, swift uses proxy logger, which uses systemd underneath
18:19:31 ok
18:19:34 henrynash: ^
18:19:40 There are some migrations in Newton that could (if operations continue during a rolling upgrade) leave the DB in an incorrect state during a multi-node migration
18:19:48 E.g.: https://bugs.launchpad.net/keystone/+bug/1596500
18:19:48 Launchpad bug 1596500 in OpenStack Identity (keystone) "Passwords created_at attribute could remain unset during rolling upgrade" [Undecided,New] - Assigned to Henry Nash (henry-nash)
18:19:51 making keystone-manage more user friendly is a good thing
18:19:57 I was thinking of annotating the migrations that have to be run as part of the "migrate completion" command with a decorator that would be used in sqlalchemy migrate scripts
18:20:06 if you do a normal keystone-manage db_sync, everything should work like today
18:20:15 but if you want to do a rolling upgrade, you would change this behavior by using new commands
18:20:42 xek: so, what is the order of ops
18:20:45 oh, this is rolling upgrades
18:20:49 xek: in fact there is even a bit more than that
18:20:52 xek: upgrade schema, upgrade code?
18:20:59 upgrade code, upgrade schema?
18:21:09 what is the required order to handle rolling upgrades
18:21:18 this still isn't clear to me, before we add new things
18:21:28 please set the order that is expected
18:21:33 i feel like we tried hashing that out at the midcycle for like 5 hours last time
18:21:35 i am against rolling upgrades without a clear target
18:21:43 there is at least one migration that wants to have a non-nullable column, but leaves it nullable to ensure we don't get other errors during a rolling upgrade
18:21:47 and remain against it.
18:21:50 yea, i think we would need to do a fair bit of work code side to make keystone handle different running database schemas to get this to work
18:22:12 we support rolling upgrade today?
18:22:17 gyee: no
18:22:21 i don't think we do
18:22:24 yes we do
18:22:30 henrynash: no we don't
18:22:35 absolutely do not
18:22:40 gyee: if you cross your fingers it may work
18:22:41 code and schema must be in sync
18:22:44 notmorgan, well, you need to upgrade the code first to get the upgrade scripts for upgrading your schema
18:22:46 if we have cores saying we don't then we don't, since stuff is going to be approved that breaks it.
18:22:53 gyee: we support upgrades, but not rolling ones :)
18:23:02 if you're super lucky and do limited things it may work with old schemas
18:23:04 right, not rolling
18:23:18 xek: right i mean -- are you upgrading code everywhere, then schema?
18:23:35 xek: or is it "we make the schemas work in n-1 (or n-2) configurations"?
18:23:42 not for sql driver anyway
18:23:44 I am stunned…we agreed we WOULD be doing this at the summit…and I thought, Steve, you confirmed that on IRC last week?
18:23:53 you could upgrade one system to the new code just to do the schema upgrade.
18:24:14 bknudson_: that is why i want to know the planned order - so we can ensure we're developing for the right cases.
18:24:20 i would say flow is (additive schema change; update code everywhere; contracting schema change)
18:24:22 henrynash: that was the goal, but it looks like we shot ourselves in the foot with migration 105
18:24:23 bknudson_: i'm not against rolling upgrades.
18:24:26 it's not well tested yet, and I agree that there could be some breaking changes, but I think it's beneficial to keep rolling upgrades in mind while reviewing, to bootstrap the process
18:24:26 We're going to need rolling upgrade support otherwise our cloud is going to be useless.
18:24:31 so...rolling upgrade is a really really hard thing
18:24:31 just against "adding things without the plan"
18:24:38 o/
18:24:40 I think before we hit it, we would need a test that does this:
18:24:47 look at the current version of the code and database
18:24:55 dstanek: ++ you have to do code first otherwise you can end up with schemas newer than your code understands
18:24:56 having a migration-completion task would always be required to ensure rolling upgrades
18:24:57 then...upgrade one but leave the other at the old version
18:25:01 and make sure things still run
18:25:07 probably "upgrade the database"
18:25:16 and that...I think...means no data migrations.
18:25:25 ayoung: partially agree, i just want a test to make sure the model matches the migration
18:25:39 I am with notmorgan, we need a clear high-level goal on rolling upgrade first
18:25:42 jamielennox, so we always write code that *can* work with an older schema?
18:26:02 so..what would we upgrade from and to?
18:26:07 mitaka to newton?
18:26:12 ayoung: i'm pretty sure that's how this normally works, when you do a migration you need to ensure in code that there is still a path forward if the old schema is found in the database
18:26:13 n1 to n2?
18:26:20 ayoung, I'm working on grenade CI, which would downgrade the code to the last stable version and check if the schema is compatible and if resources written by the newer version don't cause problems
18:26:48 ayoung: I thought we agreed we would upgrade the DB (maybe in conjunction with one node's code), then upgrade the other nodes to newton
18:26:49 xek, so...what does Grenade consider as "last stable?"
18:26:50 xek: ++ good - so the plan is "upgrade schema" and roll code?
18:27:02 can we hash this out at the mid-cycle?
18:27:05 ayoung: last release
18:27:07 dstanek: ++
18:27:08 dstanek ++
18:27:09 dstanek: ++
18:27:12 +++++++=
18:27:14 dstanek ++
18:27:16 notmorgan, so major releases only?
18:27:20 what do we do until the mid-cycle? no migrations?
18:27:20 dstanek: we can argue for another 5 hours, sure :)
18:27:22 mitaka to newton?
18:27:28 we had a long discussion about this at the last cycle
18:27:36 bknudson_: no migrations that clearly break rolling upgrade strategies?
18:27:40 stevemar: i hope not. i'll have some thoughts prepared already
18:27:41 bknudson_: additive should still be fine
18:27:43 lbragstad: but I don't know what the plan was
18:27:50 ok, additive works for me.
18:27:57 ayoung: i would never advocate otherwise. doing between other things is crazy
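A hedged sketch of the decorator idea floated above, assuming the expand/roll-code/contract order proposed in the discussion; none of these names exist in keystone-manage, they only illustrate how completion-phase migrations could be tagged:

    # A hypothetical sketch, not actual keystone-manage code. Migrations tagged
    # as "completion" changes are the contracting ones that are only safe once
    # every node runs new code; a plain db_sync would still run everything.
    COMPLETION_MIGRATIONS = []

    def migration_completion(upgrade_func):
        """Mark an upgrade() function as part of the completion phase."""
        COMPLETION_MIGRATIONS.append(upgrade_func)
        return upgrade_func

    @migration_completion
    def upgrade(migrate_engine):
        """Example: backfill password.created_at, then make it non-nullable.

        Safe only once no node running old code can still insert rows with a
        NULL created_at -- hence the completion phase.
        """
        pass  # the ALTER TABLE / UPDATE statements would go here

    # Assumed rolling-upgrade order, matching the flow proposed above:
    #   1. apply additive (expand) schema changes
    #   2. upgrade the code on every node
    #   3. run the completion (contract) migrations tagged above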
18:28:05 so we need a test that runs master against Mitaka's migrations
18:28:07 we pretty much came up with a list of "disruptive" schema changes
18:28:12 xek: are you going to be at the midcycle?
18:28:17 dolphm: bknudson_ we still have henrynash's bug he reported
18:28:28 That needs to be part of "check"
18:28:34 so, we are in good shape (with a simple complete-migration step) to support mitaka->newton rolling upgrades with DB first with one node, and then upgrade the other nodes
18:28:34 dolphm, I'll talk with my manager
18:28:57 henrynash: i believe so
18:29:06 stevemar: henrynash: the bug does not need to be resolved immediately
18:29:15 dolphm: ++
18:29:17 stevemar: henrynash: it's just a red flag, there's nothing broken
18:29:24 dolphm: nope, but it just leaves a hole in the plan
18:29:38 dolphm: anyway, i get what you mean
18:29:42 dolphm: and we are about to create another hole with https://review.openstack.org/#/c/328447/
18:29:58 it can wait for the midcycle, at least. we need a whiteboard for this again :(
18:30:09 similar thing, where we need a migration-complete task to tidy things up
18:30:10 henrynash: ready for your second topic?
18:30:11 stop creating them holes!
18:30:13 yep
18:30:31 #topic Domain hierarchies as an alternative to relaxing project name constraints or project hierarchical naming
18:30:41 ok, last attempt!
18:30:57 henrynash: domains must remain globally unique or we break the api contract.
18:31:02 so I took dolphm/notmorgan's suggestions about domains
18:31:12 notmorgan: they do stay unique
18:31:18 henrynash: name wise. but i'll leave off hierarchy complaints
18:31:27 henrynash: and let it happen if other folks support it
18:31:32 https://review.openstack.org/#/c/332940/
18:31:49 as long as they stay at the top. i will still say i dislike it and it opens a LOT of security concerns
18:32:05 and a dependent spec of: https://review.openstack.org/#/c/334364/ (for simplicity)
18:32:14 notmorgan: what do you mean “on top”
18:32:20 domains cannot be under projects
18:32:26 domain->domain ok
18:32:27 notmorgan: ++
18:32:30 domain->project->domain no
18:32:36 agreed
18:32:43 notmorgan: agreed. spec confirms that
18:32:55 and i'm going to ask for a threat analysis on adding this + UX review.
18:33:01 so string naming on domains means we can compose domain names out of path segments?
18:33:08 ayoung: yes
18:33:18 i don't think i like this - it may just be my model but this is never how i thought domains should be used
18:33:24 henrynash: i've wanted this since we were discussing HMT in paris
18:33:25 hell, I think we can get rid of projects then
18:33:30 jamielennox: i agree with you.
18:33:39 jamielennox, we should never have had domains
18:33:48 but lets kill projects and just use domains
18:33:48 ayoung: uhm.. except projects are not named globally unique
18:33:49 i mean we're going heavily down the projects everywhere path and now we kick up domain usage
18:33:51 projects are legacy
18:33:58 ayoung: and domains MUST be globally uniquely named
18:34:02 regardless of hierarchy
18:34:12 lets just rename them back to tenants
18:34:13 notmorgan, that is not what I just asked
18:34:14 jamielennox: problem is that we pushed down the wrong path for too long
18:34:15 and from a UX perspective it will result in a LOT of cases where the USER_DOMAIN_ID and PROJECT_DOMAIN_ID are different and that will just confuse everyone
18:34:17 what I just asked is
18:34:18 jamielennox: well, they're all just projects
18:34:20 gyee: lol
18:34:21 so string naming on domains means we can compose domain names out of path segments?
18:34:36 so if D1->D2 then the name is D1/D2
18:34:44 name is unique, but composed out of sections
18:34:50 jamielennox: ugh, that would not be nice if USER_DOMAIN_ID was different than PROJECT_DOMAIN_ID
18:34:51 jamielennox: we allow that today, you can have assignments across domains
18:34:53 so we keep the uniqueness, but add the ability to compose
18:35:10 henrynash: the number of times that is used approaches 0
18:35:10 ayoung: ++
18:35:22 henrynash: you can do it today yes, but most people i talk to are surprised by that and no-one really does it
18:35:33 jamielennox: ++
18:35:36 Don't you have users in ldap and projects in SQL?
18:35:40 jamielennox: I suspect that is true
18:35:49 partially because it's a difficult workflow to even find projects in other domains to assign permissions on
18:35:54 stevemar: i think multi-domain backends are going to get weird.
18:35:59 jamielennox, that is because we've made a god-awful mess out of it
18:36:00 but *shrug*
18:36:13 jamielennox, I've been asked about it 2-3 times in the last couple of weeks
18:36:54 also for every argument that domains can just be COMPANY-DIVISION-PROJECT to keep them unique - the exact same argument applies to project names
18:37:04 i still think just simply letting other services (asking other services) to be domain aware is better than wedging in more HMT things
18:37:05 tempest needs more multi-domain testing, and also testing with domain_specific_backends=True
18:37:15 jamielennox, I agree, but there is the whole "you promised not to do that"
18:37:25 we didn't promise, however, anything about hierarchical domains
18:37:32 so they are still fair play
18:37:33 oh god, yea, and domain specific backends would have to play into this
18:37:48 so I guess my ask here is that provided I can work out the corner cases, are we ok conceptually and compatibility-wise with this (something which I don't think is true for either of the other two proposals)
18:38:26 henrynash, you know you have my support. I think we need an affirmation from dolphm and notmorgan.
18:38:29 but I don't want to keep working through this if we are going to blow it out of the water on a conceptual basis
18:38:41 i will not block this, i would still rather not do hierarchical domains and just ask other projects to allow grouping of domains etc for quota
18:39:00 this seems like we're adding a LOT of extra cruft for supporting quota groupings etc.
18:39:03 notmorgan, but you agree it does not break any of our current promises?
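A minimal sketch of the naming rule under discussion, assuming a child domain's user-facing name is composed from its hierarchy path so global uniqueness is preserved; this is illustrative only, not the spec's implementation:

    # Compose a globally unique domain name from the hierarchy path, so each
    # segment only has to be unique under its parent (illustrative only).
    def compose_domain_name(path_segments):
        """['D1', 'D2'] -> 'D1/D2'."""
        return "/".join(path_segments)

    # Two customers can both own a child domain called "dev" without clashing,
    # because the composed, globally unique, names differ:
    print(compose_domain_name(["CustomerA", "dev"]))  # CustomerA/dev
    print(compose_domain_name(["CustomerB", "dev"]))  # CustomerB/dev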
18:39:24 notmorgan, I see it as us finally doing what we should have done in the first place
18:39:27 henrynash: i'm -1 just from a conceptual level, i don't think this is doing things the right way
18:39:40 ayoung: if domains remain globally unique name wise -- and all current APIs return the same data
18:39:43 we should not have two abstractions (domains and projects) and projects should have been hierarchical from the get go
18:39:44 we aren't breaking promises
18:39:49 my thoughts echo notmorgan's
18:39:55 we got some use cases in the last summit, like getting ceilometer metrics in the hierarchy... this is something where just adding the domain concept in other services will not be enough... ++ for henrynash's idea
18:39:59 domains also should not be containers for VMs.
18:40:03 jamielennox what are the things you don't think are right?
18:40:10 that is specific to project(s)
18:40:14 jamielennox: conceptually a domain is a name-space…why shouldn't a customer have more than one?
18:40:19 notmorgan, I can accept that for now
18:40:35 henrynash: they can have more than one, just another top level one
18:40:42 henrynash: why does it need to be wedged under another domain?
18:40:46 henrynash: that is my question
18:40:56 notmorgan: so they get one bill?
18:40:56 henrynash: rather than just giving them another top-level domain
18:41:04 stevemar: -2 for that reasoning
18:41:08 stevemar: :P
18:41:12 lbragstad, henrynash: we've had people now for a while using things like the v3policy sample thing to do things like domain administration - i think that it is too big a conceptual leap to tell people now that domains can be used everywhere
18:41:14 (i was just joking)
18:41:16 notmorgan: since otherwise it would have to be unique across all other top level domains…that just won't work in public cloud
18:41:34 henrynash: why can't it be unique?
18:41:57 notmorgan: plus, we need to allow a customer's admin to crud them without reference to the cloud admin
18:42:03 i think it will confuse everyone, and there are things like domain specific backends and users owned by domains which make this too hard to sell
18:42:10 notmorgan: I don't know how to do that at the top level
18:42:17 henrynash: so assign a customer "admin" that has access across domains
18:42:49 notmorgan: but the cloud admin would have to create each domain…that won't scale in public cloud
18:42:49 henrynash: and crm solutions can map multiple domains to a single account
18:43:05 henrynash: we're adding tons of cruft in keystone that is a CRM issue.
18:43:27 how does this work with domain-specific backends?
18:43:38 notmorgan: how would you avoid the cloud admin being involved?
18:43:50 I haven't read the spec
18:43:52 wouldn't you ask for another domain via the CRM solution?
18:44:01 I could see a pattern, though, where people used a common name for projects and used domain nesting for everything namespacey
18:44:04 bknudson_: don't know, haven't thought that bit through yet
18:44:16 like /d1/d2/d3/pro and /d4/d5/d6/pro
18:45:00 * notmorgan shrugs.
18:45:05 i think this is an example of putting the cart before the horse, we are making a lot of assumptions about how things are deployed and how operators do things. i'd prefer to wait and get feedback, like, will they actually use this?
18:45:12 i think dolphm and I are mostly on the same point for this.
18:46:30 next topic?
18:46:40 the conversation has slowed down
18:46:44 not sure where that leaves this…you saying punt?
18:46:46 is it me?
18:46:57 i think from a business logic standpoint, this could be automated as is. public clouds have extra mechanisms and toolchains that private clouds don't
18:46:57 samueldmq_, you are up
18:47:01 and need it.
18:47:20 ayoung: just waiting for the topic to change
18:47:24 henrynash: i think you need more advocates :P
18:47:31 but like i said, i won't block this - i am voicing a general dislike of it, but will support as long as our APIs and promises remain
18:47:37 the last 2 topics are short
18:47:48 notmorgan: understand
18:47:59 ok
18:48:01 i'm still -1 but won't fight it if overruled
18:48:25 my head still hurts
18:48:28 i think the conceptual model is wrong
18:48:31 stevemar, change the topic.
18:48:36 gyee: stay off the babysham
18:48:43 hah
18:48:48 and every time we hack around something to get it to work we end up supporting it forever
18:48:50 i think we need buy-in from ops, i want to make sure it'll be useful for someone before adding a metric ton of code
18:49:01 stevemar: ok, fair point
18:49:16 stevemar: ++
18:49:19 handwavey usecases are OK for smaller features :P
18:49:27 #topic Migrate API Ref Docs
18:49:32 samueldmq_: ^
18:49:49 samueldmq, nice work there
18:49:52 so...
18:49:59 you just looking for reviews?
18:50:00 as everyone knows, we are migrating api-ref docs to our repo
18:50:13 I've put this up to gather some love on those patches
18:50:18 ayoung: yes
18:50:22 ++
18:50:26 I'll start looking
18:50:26 it's basically migrating the docs from the api-ref repo
18:50:40 makes total sense. shoulda happened years ago
18:50:43 and then followup work will include updating them/making them better
18:51:11 samueldmq: it's been a couple weeks, but the patches i've looked at have been a little challenging to review because there's far more going on than just a copy paste from one repo to another
18:51:16 it'd be nice to get it merged, so we keep working on improving them
18:51:27 the gate-keystone-api-ref link should work, but it's not working on https://review.openstack.org/#/c/322131/ for some reason
18:51:40 and it wasn't obvious wherever i stepped in how to review the (published / built) result of the change overall
18:51:48 bknudson_: it's old
18:51:53 dolphm: it's basically copy&paste and minor formatting (such as removing lots of blank lines)
18:51:54 bknudson_: 3 weeks now
18:52:10 stevemar: can we just recheck it then?
18:52:34 dolphm: just rebase
18:52:36 dolphm, can we accept the changes in with some sort of caveat that they might have drifted from the implementation? that is probably true now anyway of the old docs
18:53:03 bknudson_: I will check on that with infra if there's something wrong in our gate
18:53:12 ayoung: would be nicer for the first commit to just be a direct copy and then we can look at cleanups individually
18:53:16 dolphm: rebased
18:53:32 samueldmq: it's just an old build, they remove the results after a few days
18:53:43 Also, I posted comments on https://review.openstack.org/#/c/322131/ and they weren't addressed... not sure why I +1 rather than +2.
18:53:44 bknudson_: the generated results do work
18:53:49 * samueldmq nods
18:53:51 This commit polish the results from the conversion and migrate the v2 docs to our repository under 'api-ref/source' directory. Missing parameters definitions were added. Polishing the generated RST files include: - Removing unnecessary blank lines; - Removing empty references. Polishing the generated RST files do not include: - Modifying their content; - Modifying file names; - Wrapping lines at the maximum of 79 chars. Updating the documentation will be done after this migration step.
18:53:58 I think that is acceptable
18:54:05 need that to pass the parser, right?
18:54:31 bknudson_: I will look at it, sorry I had missed your comments
18:54:31 would the team be up for a 2-day sprint to get the API ref matching what we have in keystone-specs?
18:54:56 stevemar, um...no
18:55:04 i think the nova team did something similar
18:55:07 ayoung: :(
18:55:09 2 days polishing all of the v2 docs?
18:55:12 the nova team did a sprint and it works pretty well
18:55:13 stevemar: I would :)
18:55:13 I can't justify that
18:55:26 ayoung: it would be the v3 docs and v3 "extensions" too
18:55:41 ayoung: most are v3 docs
18:55:57 any other folks interested?
18:55:58 stevemar: that'd be awesome to have
18:56:00 guess what -- everyone still uses v2.
18:56:09 is there really 2 days' worth of work there for the whole team?
18:56:11 bknudson_: it's true
18:56:29 I can split the migration in 2: i) move as it is (only fixing things that break) then ii) polish it (remove blank lines, etc)
18:56:33 we should spend time on this - it is important.
18:56:36 i'm happy to do reviews
18:56:42 bknudson_, not based on RH products. We are pushing v3 due to Federation needs
18:56:44 and domains
18:56:57 as long as it's not next week :P
18:56:58 * samueldmq nods
18:57:10 3 minutes...and a related topic...
18:57:19 dolphm: okay i'll propose one for the week after next
18:57:30 #topic Troubleshooting Doc
18:57:30 ayoung: we can't only be concerned with single vendors involved in openstack, we have to support all users [and this impacts end users, who are more important than specific vendors]
18:57:30 what about the week before the midcycle?
18:57:39 ayoung: ^
18:57:46 lbragstad: i'll send something out to the mailing list
18:57:50 lbragstad: that's the week steve is referring to
18:57:51 stevemar ++
18:57:52 notmorgan, just saying that "not everyone uses v2.0"
18:57:53 i'll propose a day
18:58:05 dolphm: indeed it is!
18:58:07 stevemar, I'll be on vacation that week
18:58:15 ayoung: perfect timing! :)
18:58:18 heh
18:58:27 lets get started on dolphm's keystone doctor already! forget troubleshooting docs
18:58:30 time to automate
18:58:32 OK...last topic
18:58:34 gyee: lol
18:58:36 no need to change the topic
18:58:39 i did
18:58:40 Troubleshooting docs
18:58:40 gyee ++
18:58:41 ayoung: 2 minutes on your topic
18:58:53 ayoung: you're up bud
18:58:54 We have an etherpad....what should we do next
18:59:00 #link https://etherpad.openstack.org/p/keystone-troubleshooting
18:59:11 let's populate it and move it to docs
18:59:12 I think that Keystone troubleshooting is the #1 interrupt I receive
18:59:13 ayoung: get an officially published doc
18:59:16 can we make it a list of things we should check with keystone-doctor?
18:59:22 ayoung: ++
18:59:27 lbragstad: both can co-exist
18:59:30 do we want that in the Keystone tree? And as a single doc?
18:59:32 ayoung: "why 401?"
18:59:33 convert to rst and get it on docs :)
18:59:34 stevemar cool
18:59:39 It seems like a very fluid
18:59:44 ayoung: or we get another repo
18:59:48 and constantly changing doc
18:59:49 which would make a lot of sense
18:59:57 keeping it in docs is OK with me
18:59:57 the troubleshooting doc uses the keystone cli.
19:00:01 could it ... go in the specs repo?
19:00:11 a lot of them can be automated
19:00:18 I know that sounds weird, but specs always seemed like "the pre-docs" repo
19:00:30 lbragstad: maybe file wishlist bugs tagged with "doctor" or something?
19:00:39 we're out of time!
19:00:41 (time)
19:00:42 I wanted them on the Wiki originally, as that is the right degree of responsiveness, but the Wiki is dead
19:00:43 dolphm yeah - that would be a good idea too
19:00:43 #endmeeting
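A hedged sketch of the kind of automated check the proposed "keystone doctor" could run alongside a static troubleshooting entry, for example a first pass at the "why 401?" question raised above; the specific checks are assumptions for illustration, not an existing keystone-manage command:

    # Hypothetical "doctor" style check; the rules below are illustrative
    # guesses, not an existing keystone-manage command.
    import configparser
    import os

    def doctor_checks(conf_path="/etc/keystone/keystone.conf"):
        """Return human-readable warnings about likely causes of 401s."""
        warnings = []
        cfg = configparser.ConfigParser(interpolation=None)
        if not cfg.read(conf_path):
            return ["could not read %s" % conf_path]
        provider = cfg.get("token", "provider", fallback="uuid")
        key_repo = cfg.get("fernet_tokens", "key_repository",
                           fallback="/etc/keystone/fernet-keys/")
        if "fernet" in provider and not os.path.isdir(key_repo):
            warnings.append("fernet tokens configured but key repository %s "
                            "does not exist (run keystone-manage fernet_setup?)"
                            % key_repo)
        return warnings

    if __name__ == "__main__":
        for warning in doctor_checks():
            print("WARNING:", warning)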