*** zhurong has joined #openstack-meeting-cp | 01:14 | |
*** tovin07_ has quit IRC | 01:16 | |
*** tovin07 has joined #openstack-meeting-cp | 02:16 | |
*** tovin07_ has joined #openstack-meeting-cp | 02:18 | |
*** topol has quit IRC | 02:46 | |
*** zhurong_ has joined #openstack-meeting-cp | 02:53 | |
*** zhurong has quit IRC | 02:55 | |
*** topol has joined #openstack-meeting-cp | 03:14 | |
*** beekhof has quit IRC | 03:47 | |
*** prateek has joined #openstack-meeting-cp | 04:31 | |
*** prateek has quit IRC | 04:53 | |
*** prateek has joined #openstack-meeting-cp | 04:54 | |
*** jgriffith is now known as jgriffith_away | 04:54 | |
*** gouthamr has joined #openstack-meeting-cp | 06:06 | |
*** zhurong_ has quit IRC | 08:04 | |
*** zhurong has joined #openstack-meeting-cp | 08:04 | |
*** notmyname has quit IRC | 09:38 | |
*** notmyname has joined #openstack-meeting-cp | 09:40 | |
*** tovin07_ has quit IRC | 10:02 | |
*** zhurong has quit IRC | 10:02 | |
*** brault has joined #openstack-meeting-cp | 10:17 | |
*** lyarwood_ has joined #openstack-meeting-cp | 10:18 | |
*** lyarwood has quit IRC | 10:19 | |
*** anteaya has quit IRC | 10:19 | |
*** notmyname has quit IRC | 10:19 | |
*** brault_ has quit IRC | 10:19 | |
*** olaph has quit IRC | 10:19 | |
*** homerp has quit IRC | 10:19 | |
*** bswartz has quit IRC | 10:19 | |
*** dhellmann has quit IRC | 10:19 | |
*** dhellmann has joined #openstack-meeting-cp | 10:23 | |
*** homerp has joined #openstack-meeting-cp | 10:25 | |
*** vkmc has quit IRC | 10:25 | |
*** Daviey has quit IRC | 10:25 | |
*** notmyname has joined #openstack-meeting-cp | 10:25 | |
*** vkmc has joined #openstack-meeting-cp | 10:27 | |
*** anteaya has joined #openstack-meeting-cp | 10:31 | |
*** Daviey has joined #openstack-meeting-cp | 10:36 | |
*** olaph has joined #openstack-meeting-cp | 10:36 | |
*** lyarwood_ is now known as lyarwood | 11:20 | |
*** sdague has joined #openstack-meeting-cp | 11:27 | |
*** lyarwood is now known as lyarwood_ | 12:08 | |
*** prateek has quit IRC | 12:28 | |
*** lyarwood_ is now known as lyarwood | 13:03 | |
*** lyarwood is now known as lyarwood_ | 13:08 | |
*** gouthamr has quit IRC | 13:17 | |
*** xyang1 has joined #openstack-meeting-cp | 13:30 | |
*** lamt has joined #openstack-meeting-cp | 13:41 | |
*** tongli has joined #openstack-meeting-cp | 14:07 | |
*** bswartz has joined #openstack-meeting-cp | 14:20 | |
*** JASON___ has joined #openstack-meeting-cp | 14:54 | |
*** MarkBaker has joined #openstack-meeting-cp | 14:57 | |
*** prateek has joined #openstack-meeting-cp | 14:59 | |
*** Rockyg has joined #openstack-meeting-cp | 14:59 | |
*** gema has joined #openstack-meeting-cp | 15:00 | |
topol | #startmeeting interop_challenge | 15:00 |
openstack | Meeting started Wed Nov 30 15:00:41 2016 UTC and is due to finish in 60 minutes. The chair is topol. Information about MeetBot at http://wiki.debian.org/MeetBot. | 15:00 |
openstack | Useful Commands: #action #agreed #help #info #idea #link #topic #startvote. | 15:00 |
*** openstack changes topic to " (Meeting topic: interop_challenge)" | 15:00 | |
openstack | The meeting name has been set to 'interop_challenge' | 15:00 |
markvoelker | o/ | 15:00 |
gema | o/ | 15:00 |
Rockyg | o/ | 15:00 |
topol | Hi everyone, who is here for the interop challenge meeting today? | 15:00 |
topol | The agenda for today can be found at: | 15:00 |
topol | #link https://etherpad.openstack.org/p/interop-challenge-meeting-2016-11-30 | 15:00 |
topol | We can use this same etherpad to take notes | 15:01 |
*** skazi_ has joined #openstack-meeting-cp | 15:01 | |
MarkBaker | o/ | 15:01 |
skazi_ | o/ | 15:01 |
JASON___ | Hi, Jason from Huawei | 15:01 |
tongli | o/ | 15:01 |
dmellado | o/ hi guys | 15:01 |
topol | #topic review action items from previous meeting | 15:02 |
topol | #link http://eavesdrop.openstack.org/meetings/interop_challenge/2016/interop_challenge.2016-11-16-15.00.html | 15:02 |
*** openstack changes topic to "review action items from previous meeting (Meeting topic: interop_challenge)" | 15:02 | |
topol | all, please use #link https://etherpad.openstack.org/p/interop-challenge-postmortem for lessons learned doc | 15:02 |
topol | all, please add all you can to it | 15:02 |
topol | all, sections include tooling, networking, provisioning, metadata, etc. | 15:02 |
*** rarcea_ has joined #openstack-meeting-cp | 15:02 | |
topol | I went and looked at the document; I don't believe many of the sections we were thinking about have been added | 15:03 |
topol | so if you have content to contribute for this doc please do | 15:03 |
topol | so let's keep this action running | 15:04 |
tongli | should that be part of the new repository and get reviewed , then merge? | 15:04 |
tongli | I mean even the document. | 15:04 |
topol | #action all, please use #link https://etherpad.openstack.org/p/interop-challenge-postmortem for lessons learned doc and add content | 15:04 |
topol | tongli yes, we will add it to the repo when repo is ready | 15:05 |
topol | #action tongli to migrate doc to repo when ready | 15:05 |
luzC | o/ | 15:05 |
dmellado | hiya luzC ;) | 15:05 |
topol | tongli when you have successfully moved it, mark it at the top as having been moved | 15:06 |
topol | we'll freeze it then | 15:06 |
*** garloff has joined #openstack-meeting-cp | 15:06 | |
topol | next item: | 15:06 |
tongli | @topol, got it. | 15:06 |
topol | topol to add an elevator pitch to #link | 15:06 |
topol | this was done | 15:06 |
topol | next item | 15:07 |
topol | all add to etherpad suggestions for work items. After one week, topol will create a doodle poll and send it out to the defcore list | 15:07 |
tongli | the repository is there. | 15:08 |
topol | So I checked and did not see anything added besides Cloud Foundry and NFV. Did I miss any of these? Did I look in the wrong place? I held off on the doodle poll | 15:08 |
tongli | we just need to have an agreement on the structure, then we can start doing it. | 15:08 |
*** MarkBaker has quit IRC | 15:08 | |
topol | tongli, excellent. But let's cover that at the end of the agenda | 15:08 |
topol | But back to the suggested work items. Did I miss any? | 15:09 |
markvoelker | Looks like we said we were going to add them to https://etherpad.openstack.org/p/interop-challenge-meeting-2016-11-16 | 15:09 |
markvoelker | And you got those | 15:09 |
topol | markvoelker yeah, that's what I thought but I needed a sanity check :-) | 15:10 |
topol | if we only have 2 do we need a doodle poll? | 15:10 |
markvoelker | One thing I see in the minutes from last time that doesn't look like it's in the list yet: | 15:10 |
tongli | there are three | 15:10 |
*** prateek has quit IRC | 15:10 | |
markvoelker | "it would be good to review the user survey for what people are interested in, and what they're running on top of OS today" | 15:10 |
tongli | NFV, Kubernetes, CF | 15:10 |
markvoelker | Not sure we've actually done that? | 15:10 |
luzC | topol quick question... are we still joining app catalog? | 15:10 |
topol | luzC good question | 15:11 |
Rockyg | ++ markvoelker | 15:11 |
topol | markvoelker I think you are correct that that step was not done | 15:11 |
topol | Anyone have time to do that? | 15:12 |
markvoelker | I can probably do that...shouldn't take more than 30 minutes or so | 15:12 |
topol | markvoelker ok great. So how bout you do that and see if anything should be added to our list of 3 items and then we send out the doodle poll? | 15:13 |
markvoelker | Sure | 15:13 |
topol | great. Thanks! | 15:13 |
topol | #action markvoelker to review user survey to look for more possible work items to add to doodle poll | 15:14 |
topol | luzC on your question let's hold that until we talk about our new repo | 15:14 |
luzC | topol ok, I was asking because I noticed app-catalog only has heat, tosca and murano templates, and glance images, not ansible/terraform | 15:14 |
luzC | topol thats ok, let's talk about it after repo is in place | 15:15 |
topol | Ok let's jump to one of today's agenda items | 15:15 |
dmellado | yeah | 15:15 |
topol | luzC +++ | 15:15 |
topol | #topic New Meeting time and IRC channel | 15:16 |
*** openstack changes topic to "New Meeting time and IRC channel (Meeting topic: interop_challenge)" | 15:16 | |
topol | So I was informed that #openstack-meeting-cp, where we hold this meeting, cannot be a long-term home for us :-( | 15:16 |
dmellado | :\ | 15:16 |
topol | So we have to move to one of the other channels. I did not see a channel available at this timeslot. And frankly, determining what channels are available at what timeslots is not my strong skill | 15:17 |
gema | topol: create one | 15:17 |
gema | #openstack-forall | 15:17 |
gema | x) | 15:17 |
topol | gema can we??? | 15:18 |
gema | of course | 15:18 |
gema | and we can put the bot on it | 15:18 |
gema | and be done with it | 15:18 |
topol | My irc skills are weak | 15:18 |
gema | choose a name and join the channel | 15:18 |
gema | that just creates it | 15:18 |
gema | then we have to put a couple of patches in infra to make the bot enter | 15:18 |
gema | and we can hold the meetings there | 15:18 |
topol | I would love our own channel at this timeslot. I vaguely recall some issues with doing that, like losing the meeting bot??? | 15:18 |
gema | then let's choose a name | 15:18 |
gema | and we can join the channel and get the bot in for next week | 15:19 |
topol | gema I really like that plan | 15:19 |
dmellado | +1 | 15:19 |
tongli | @gema, if that is the case, can we keep the same time but a new channel? | 15:19 |
dmellado | #openstack-interop | 15:19 |
dmellado | ? | 15:19 |
topol | Can you create the channel and configure the bot thingy :-) | 15:19 |
gema | tongli: sure, it'd be our channel | 15:19 |
gema | tongli: I can carve some time for that, no problem | 15:20 |
dmellado | gema: a gerritbot would also be awesome xD | 15:20 |
gema | tongli: but tell me the name | 15:20 |
gema | dmellado: do you know how to do that? | 15:20 |
gema | xD | 15:20 |
gema | dmellado: we'll figure it out | 15:20 |
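For reference, getting gerritbot into a channel is a small patch to openstack-infra/project-config. A rough sketch of what such an entry might look like, assuming the channel and repo names from this conversation (illustrative only, not a submitted change):

```yaml
# Hypothetical entry for gerritbot/channels.yaml in project-config;
# channel and project names are assumptions from this discussion.
openstack-workloads:
  events:
    - patchset-created
    - change-merged
  projects:
    - openstack/interop-workloads
  branches:
    - master
```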
tongli | @gema, sounds like a plan and an action item for @gema. | 15:20 |
topol | #openstack-interop sounds like a great name to me | 15:20 |
dmellado | heh, well, I did at some point | 15:20 |
dmellado | so I can lend a hand | 15:20 |
dmellado | xD | 15:20 |
gema | topol: isn't that the new defcore channel? | 15:20 |
dmellado | so | 15:20 |
gema | that's taken | 15:20 |
Rockyg | openstack-interop is already there. defcore is transitioning to it | 15:20 |
luzC | gema I think so | 15:20 |
dmellado | oh, true! | 15:20 |
gema | topol: or the other thing we could do is ask defcore if they let us hold the meetings on their channel | 15:20 |
topol | can we reuse or should we be slightly different | 15:21 |
gema | and use that one | 15:21 |
gema | topol: up to us, reusing sounds good, we can ask later today if using that channel is ok | 15:21 |
topol | k, so lets have a backup name just in case | 15:21 |
gema | markvoelker: what do you think? | 15:21 |
tongli | if openstack-interop is already there, should we just take the time slot? seems a bit easier? | 15:21 |
topol | tongli I agree if we can get that approved | 15:21 |
gema | topol: it is up to the entire group, not just up to me | 15:21 |
tongli | our repo is named interop-workloads. | 15:21 |
gema | so we need to ask | 15:21 |
tongli | we can use that as the channel? | 15:22 |
gema | tongli: we could | 15:22 |
tongli | too long maybe | 15:22 |
tongli | or just openstack-workload | 15:22 |
tongli | openstack-workloads | 15:22 |
dmellado | +1 on openstack-workloads | 15:22 |
markvoelker | I think it's probably fine to use the #openstack-interop channel from a DefCore perspective, but note that infra has frowned on holding meetings in non-meeting channels in the past | 15:22 |
luzC | +1 openstack-workloads | 15:22 |
topol | gema I was thinking we reuse openstack-interop and if not openstack-interop-workloads | 15:22 |
topol | I kinda liked the interop in the name | 15:23 |
gema | topol: ok | 15:23 |
Rockyg | meeting is right after this one if you want to get a quick answer | 15:23 |
gema | topol: so I will ask the interop folks in the meeting later today if they are ok with it | 15:23 |
gema | and then I will also check with infra | 15:23 |
gema | to make sure we are legal | 15:23 |
topol | gema, that would be great | 15:23 |
gema | topol: we are already using the mailing from defcore | 15:23 |
tongli | @topol, I do like that in the name as well. so channel openstack-interop as the first option, if no good, we go with openstack-workloads | 15:23 |
tongli | ? | 15:23 |
topol | do we go with openstack-workloads or openstack-interop-workloads | 15:24 |
topol | as the backup | 15:24 |
gema | topol: do we have a grace period, i.e. whilst we figure it out, can we stay here? | 15:24 |
topol | gema, YES we have a grace period | 15:25 |
gema | topol: ok | 15:25 |
topol | Our landlord is a benevolent one. We won't be evicted immediately :-) | 15:25 |
gema | :) | 15:25 |
topol | does anyone hate the backup name of openstack-interop-workloads? | 15:26 |
topol | if everyone hates that for being too long openstack-workloads is fine | 15:26 |
topol | gema If no opinions from anyone you get to choose since you are doing the hard work of setting it up | 15:27 |
tongli | @topol, I have no problems with the name, but it is very long. | 15:27 |
gema | topol: the hardest part is talking to people, the rest is easy | 15:27 |
Rockyg | infra folks are awful nice | 15:28 |
*** MarkBaker has joined #openstack-meeting-cp | 15:28 | |
tongli | @gema, @topol, go with openstack-workloads, that is what our repo name is. | 15:28 |
gema | tongli: ok | 15:28 |
topol | tongli makes sense | 15:28 |
topol | and tongli now gets the work item... JUST KIDDING | 15:29 |
gema | so the plan is asking openstack-interop if we can use the channel and check with infra if it is ok to hold meetings in a non-meeting channel | 15:29 |
gema | if any of those two is a no, go for our own channel | 15:29 |
topol | gema +++ | 15:29 |
luzC | +1 | 15:30 |
topol | #action gema ask openstack-interop if we can use the channel and check with infra if it is ok to hold meetings in a non-meeting channel, otherwise go for own channel openstack-workloads | 15:30 |
persia | One advantage to having meetings in meeting channels is that there is a large passive audience, which can be beneficial when one encounters cross-project activities. -meeting is especially popular, making it one of the preferred venues. | 15:30 |
gema | persia: the problem is timeslots | 15:30 |
persia | yes :( | 15:30 |
Rockyg | also, ttx recently reviewed open slots on meeting channels. He could give us the open options | 15:30 |
gema | Rockyg: wanna try to get us this slot on one of the meeting channels? | 15:30 |
gema | maybe that should be option 1 | 15:31 |
persia | http://git.openstack.org/cgit/openstack-infra/irc-meetings/tree/meetings also has all the meetings, if you want to do your own review | 15:31 |
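For reference, each slot in that repo is a small YAML file consumed by yaml2ical, so grabbing a timeslot really is just a patch. A rough sketch of what an entry for this meeting could look like; every value below is assumed from this discussion rather than taken from a merged file:

```yaml
# Hypothetical meeting definition for openstack-infra/irc-meetings.
project: Interop Challenge
meeting_id: interop_challenge
agenda_url: https://etherpad.openstack.org/p/interop-challenge-meeting-2016-11-30
chair: topol
description: Weekly meeting of the interop challenge working group.
schedule:
  - time: '1500'
    day: Wednesday
    irc: openstack-meeting
    frequency: weekly
```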
topol | gema, I believe hogepoge and I did that. I don't think option 1 was possible | 15:31 |
Rockyg | this slot and 1400 UTC seem *really* popular | 15:31 |
topol | Rockyg +++ | 15:31 |
gema | ok | 15:31 |
persia | #openstack-meeting appears empty currently | 15:31 |
topol | persia check the tree just in case. some meetings are every other week | 15:32 |
topol | that's what makes it messy | 15:32 |
tongli | I would like to have meeting in openstack-meeting channel if that is also possible. | 15:32 |
gema | agreed | 15:32 |
topol | who is good at determining options? | 15:33 |
Rockyg | No one's on it.... lemme check calendar | 15:33 |
topol | risk is we move to a time that is horrible for folks at some location in the world | 15:33 |
tongli | @persia,@gema, if a channel is crowded, we can easily get kicked out if we run a bit long. | 15:33 |
topol | need to be careful if we move to a new timeslot | 15:33 |
gema | tongli: of course, if we go for a meeting channel we have to be on time | 15:33 |
Rockyg | yup. openstack-meeting is available now. It would go an hour later at daylight saving time | 15:34 |
tongli | @topol, @gema, yeah, that is probably a good thing so that we do not run over. | 15:34 |
topol | Rockyg is it avail every week at this time? | 15:34 |
Rockyg | lemme double check that | 15:34 |
topol | I think someone just has to submit a patch to grab the timeslot | 15:34 |
*** gouthamr has joined #openstack-meeting-cp | 15:35 | |
persia | topol: Yes. | 15:35 |
Rockyg | nope. neutron dvr has it on the other week. | 15:35 |
topol | Rockyg :-( | 15:35 |
Rockyg | But, lemme ask ttx.... | 15:36 |
gema | ok, so Rockyg checks the meeting channels see if there is a slot | 15:36 |
Rockyg | I've got a ping out to him | 15:36 |
gema | if not, we continue down the options | 15:36 |
gema | Rockyg: let me know the outcome | 15:36 |
topol | #action Rockyg to ask ttx if we can grab #openstack-meeting at this timeslot | 15:37 |
Rockyg | I'll put the channels available in today's etherpad | 15:37 |
gema | Rockyg: thanks! | 15:37 |
tongli | looks like we are not changing meeting time, right? | 15:37 |
tongli | all the options are to keep the current time? | 15:37 |
gema | yep | 15:37 |
topol | my pref is to not change the meeting time. It's really hard to get all of you folks a timeslot that works cuz we all have day jobs :-) | 15:37 |
gema | agreed | 15:38 |
topol | #agree whatever option we choose we keep the same timeslot | 15:38 |
tongli | ok. Wednesday 1500 UTC, | 15:38 |
topol | Ok, let's see what else is on the agenda | 15:38 |
tongli | when daylight saving time changes, we change that as well. | 15:38 |
topol | #topic new repo | 15:38 |
*** openstack changes topic to "new repo (Meeting topic: interop_challenge)" | 15:38 | |
topol | tongli did this merge? | 15:38 |
topol | do we have a new repo? | 15:39 |
tongli | @topol, I think so. | 15:40 |
tongli | the patch was merged. | 15:40 |
garloff | where? | 15:40 |
tongli | thanks to Chris. | 15:40 |
persia | Rockyg: Hrm? I thought neutron-dvr was in #openstack-meeting-alt | 15:40 |
dmellado | did we get to the new repo structure? | 15:40 |
tongli | https://github.com/openstack/interop-workloads | 15:41 |
Rockyg | persia, you're right. massively distributed clouds has it | 15:41 |
garloff | tongli: thx | 15:41 |
tongli | @dmellado, we have not discussed that item yet | 15:41 |
tongli | I put up a structure in last meeting etherpad. | 15:41 |
tongli | we need to confirm that is what we want. | 15:41 |
topol | Yay #link https://github.com/openstack/interop-workloads is our new repo | 15:41 |
topol | yes, #topic repo structure | 15:41 |
Rockyg | and just an fyi, you can link the meetings file to google calendar and it will display properly. | 15:42 |
topol | suggested strawman is found at #link https://etherpad.openstack.org/p/interop-challenge-meeting-2016-11-16 | 15:42 |
topol | so a suggestion was | 15:43 |
topol | Can't we just create the repo using cookiecutter, just as in http://docs.openstack.org/infra/manual/creators.html#preparing-a-new-git-repository-using-cookiecutter | 15:43 |
topol | and adapt as needed? | 15:43 |
tongli | use the generic tool name at the top, then api tool, second, then OS | 15:44 |
tongli | is it too deep? | 15:44 |
topol | tongli use cloudfoundry as an example | 15:44 |
tongli | or should the platform (debian, redhat) be part of the scripts, with a lot of checks? I personally do not really like that. | 15:44 |
topol | what would it look like | 15:44 |
topol | Ideally at the top level are workloads: cloudfoundry, k8s, etc | 15:45 |
Rockyg | yeah, we *don't* want platform. That's what we are removing from the equation ;-) | 15:45 |
topol | Rockyg +++ | 15:45 |
topol | and you go into cloudfoundry and a readme says how to run the workload | 15:46 |
topol | same for K8s | 15:46 |
tongli | should be like this /ansible/shade/cloudfoundry? | 15:46 |
topol | same for NFV app1, NFV app2 | 15:46 |
dmellado | one thing that we should totally get right from the start this time | 15:46 |
tongli | if we do OS API, then /ansible/osapi/cloudfoundry? | 15:46 |
dmellado | is the creation of tox and requirements | 15:46 |
dmellado | I wouldn't like what happened last time with the package version muddle to be repeated | 15:47 |
Rockyg | ++ dmellado | 15:47 |
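A minimal sketch of the tox/requirements setup dmellado is asking for, assuming the usual requirements.txt/test-requirements.txt pair that the OpenStack cookiecutter template lays down (contents are illustrative, not from the actual repo):

```ini
# Hypothetical tox.ini for interop-workloads; environments and commands
# are illustrative only.
[tox]
minversion = 2.0
envlist = pep8,docs

[testenv]
usedevelop = True
deps =
  -r{toxinidir}/requirements.txt
  -r{toxinidir}/test-requirements.txt

[testenv:pep8]
commands = flake8

[testenv:docs]
commands = python setup.py build_sphinx
```

Pinning the workload dependencies in requirements.txt is what would avoid the package-version surprises mentioned above.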
topol | I was thinking cloudfoundry/ansible and cloudfoundry/otherautomationtool | 15:47 |
tongli | @dmellado, that will be nice, not necessarily required, but we can add it without affecting the structure, right? | 15:47 |
dmellado | that's why I do suggest to adapt directly from the standard openstack project | 15:47 |
topol | is that horrible? | 15:47 |
dmellado | tongli: well, it will be, kinda | 15:48 |
Rockyg | I agree with dmellado | 15:48 |
tongli | @topol, I am thinking if a company runs these workloads, most likely they will prefer a tool. | 15:48 |
topol | dmellado can you give an example | 15:48 |
tongli | for example, I will prefer ansible, then I can grab ansible at the top level and not care too much about the other top directories. | 15:48 |
dmellado | topol: sure, so if you use cookiecutter, it'll create a predefined structure | 15:48 |
Rockyg | but perhaps we could use cookiecutter with a revised template? Change some names but keep the structure? | 15:48 |
dmellado | that we can adapt later | 15:49 |
tongli | if we do what you suggested, then I will have to dig a bit deeper, just my experience. | 15:49 |
dmellado | Rockyg: yeah, that was my idea | 15:49 |
topol | what about tongli's idea of having the automation tool at the top level? | 15:49 |
Rockyg | tongli, just one or two levels at most. | 15:49 |
tongli | @dmellado, why would adding tox change the structure? I do not get that. | 15:50 |
Rockyg | And that's a variable name | 15:50 |
dmellado | tongli: not that it will change the structure | 15:50 |
dmellado | using cookiecutter to generate the main template structure | 15:50 |
dmellado | will make it easier to use tox | 15:50 |
topol | dmellado do you know how to use cookiecutter? | 15:51 |
dmellado | tongli: http://paste.openstack.org/show/590977/ | 15:51 |
dmellado | for example | 15:51 |
dmellado | this will be the default cookiecutter template | 15:51 |
dmellado | just using a demo name | 15:51 |
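For anyone not familiar with it, the flow from the project-creators guide linked earlier boils down to roughly the following (a sketch; exact prompts and steps may differ, and the repo name is simply the one discussed here):

```shell
# Rough sketch of generating the skeleton with cookiecutter and copying it
# into the existing interop-workloads checkout; steps are illustrative.
pip install cookiecutter
cookiecutter https://git.openstack.org/openstack-dev/cookiecutter
# answer the prompts (repo name, project description, ...)
cp -r interop-workloads/* /path/to/git/interop-workloads/
cd /path/to/git/interop-workloads
git add . && git commit -m "Add cookiecutter-generated project skeleton"
```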
tongli | @dmellado, ok, so you are ok with /<toolname>/<os_client_library_name>/<workloadname> | 15:51 |
tongli | that is a clear pattern, | 15:51 |
tongli | if that helps. | 15:51 |
dmellado | I'd be fine with that, but inside the default openstack project template ;) | 15:51 |
Rockyg | ++ | 15:52 |
tongli | <toolname> will be whatever tool we feel good about. | 15:52 |
tongli | <os_client_library_name> , I can think of is shade, and OS Restful API. | 15:52 |
topol | dmellado so the tricky part is what the structure inside interop_workloads is, correct? | 15:52 |
dmellado | topol: yeah, on that I'm fine with tongli's approach | 15:53 |
Rockyg | maybe oaktree in future or heat or murano | 15:53 |
topol | dmellado your proposal to have the standard structure to make tox easy makes sense to me | 15:53 |
tongli | @Rockyg, certainly if we have people wanting to create heat and murano, then there will be new top-level directories. | 15:53 |
topol | tongli you mean top levels under interop_workloads ? | 15:54 |
tongli | @topol, if heat, or murano, it will be like this /heat/cloudfoundry /heat/nfv. | 15:55 |
tongli | I assume heat will always use OS apis. | 15:55 |
*** edtubill has joined #openstack-meeting-cp | 15:55 | |
tongli | @topol, yes to your question. | 15:56 |
topol | dmellado would /heat be a top level or under the directory interop_workloads? | 15:56 |
dmellado | topol: I'd be fine with that | 15:56 |
topol | tongli ok good | 15:56 |
tongli | @topol, @dmellado, should be top level under interop_workloads. | 15:56 |
dmellado | basically we can have whatever internal structure we want, as long as we keep that under interop_workloads | 15:57 |
Rockyg | topol, everything will be under interop_workloads | 15:57 |
dmellado | yeah | 15:57 |
Rockyg | That's the root. | 15:57 |
tongli | so we should have these under interop_workloads /ansible, /terraform, /heat, /murano | 15:57 |
Rockyg | the repo name. | 15:57 |
dmellado | exactly as Rockyg says ;) | 15:57 |
topol | so we are looking at heat, murano, ansible etc under the root (interop_workloads) | 15:57 |
*** ruan_09 has joined #openstack-meeting-cp | 15:57 | |
Rockyg | yup | 15:57 |
topol | I like that | 15:57 |
tongli | and /doc to hold all the documents in rst format | 15:57 |
*** samueldmq has joined #openstack-meeting-cp | 15:58 | |
topol | any concerns with that structure? | 15:58 |
dmellado | tongli: cookiecutter already provides you with a root doc | 15:58 |
Rockyg | yup. and doc would use the doc tox rules | 15:58 |
dmellado | as well as releasenotes with reno | 15:58 |
* topol always good to stay in the groove when using tools like tox | 15:58 | |
tongli | @dmellado, ok, not familiar with the cookiecutter thing, we can work together on that if it helps. | 15:58 |
dmellado | tongli: totally | 15:59 |
Rockyg | cookiecutter does all the base installs if we follow the structure | 15:59 |
dmellado | tongli: in any case it's quite straightforward | 15:59 |
topol | #action dmellado, tongli use cookiecutter to create the agreed-to structure | 15:59 |
dmellado | think of it as an openstack project template system | 15:59 |
dmellado | http://docs.openstack.org/infra/manual/creators.html#preparing-a-new-git-repository-using-cookiecutter | 15:59 |
tongli | @Rockyg, it's just a tool to create an initial project? | 15:59 |
dmellado | tongli: it is | 15:59 |
topol | #agree have these under interop_workloads /ansible, /terraform, /heat, /murano | 16:00 |
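Putting the agreement together with the cookiecutter discussion, the resulting tree would look roughly like this (a sketch only; the generated boilerplate is abbreviated and the workload directories are just the examples mentioned above):

```
interop-workloads/
├── README.rst            # from the cookiecutter template
├── setup.cfg
├── setup.py
├── requirements.txt
├── test-requirements.txt
├── tox.ini
├── doc/source/           # project docs in rst
├── ansible/
│   ├── cloudfoundry/
│   ├── k8s/
│   └── nfv/
├── terraform/
├── heat/
└── murano/
```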
Rockyg | yup. and populate the top level | 16:00 |
tongli | @dmellado, ok, I will take a look. | 16:00 |
topol | I think we are out of time. But made great progress here | 16:00 |
dmellado | tongli: in any case totally up for preparing that together | 16:00 |
tongli | a patch to setup the structure will be submitted soon. | 16:00 |
Rockyg | dmellado posted the doc link for it earlier | 16:00 |
*** spilla has joined #openstack-meeting-cp | 16:00 | |
topol | dmellado,tongli thanks for the helpful suggestions here | 16:01 |
* markvoelker notes that we're out of time and the defcore meeting is starting over on #openstack-meeting-3 | 16:01 | |
tongli | @Rockyg, I will dig. thanks. | 16:01 |
topol | #endmeeting | 16:01 |
*** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings" | 16:01 | |
openstack | Meeting ended Wed Nov 30 16:01:12 2016 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) | 16:01 |
openstack | Minutes: http://eavesdrop.openstack.org/meetings/interop_challenge/2016/interop_challenge.2016-11-30-15.00.html | 16:01 |
openstack | Minutes (text): http://eavesdrop.openstack.org/meetings/interop_challenge/2016/interop_challenge.2016-11-30-15.00.txt | 16:01 |
openstack | Log: http://eavesdrop.openstack.org/meetings/interop_challenge/2016/interop_challenge.2016-11-30-15.00.log.html | 16:01 |
lbragstad | #startmeeting policy | 16:01 |
openstack | Meeting started Wed Nov 30 16:01:20 2016 UTC and is due to finish in 60 minutes. The chair is lbragstad. Information about MeetBot at http://wiki.debian.org/MeetBot. | 16:01 |
openstack | Useful Commands: #action #agreed #help #info #idea #link #topic #startvote. | 16:01 |
*** openstack changes topic to " (Meeting topic: policy)" | 16:01 | |
openstack | The meeting name has been set to 'policy' | 16:01 |
lbragstad | raildo, ktychkova, dolphm, dstanek, rderose, htruta, atrmr, gagehugo, lamt, thinrichs, edmondsw, ruan_09 | 16:01 |
topol | THANKS everyone | 16:01 |
dolphm | \o/ | 16:01 |
rderose | o/ | 16:01 |
dstanek | o/ | 16:01 |
lamt | o/ | 16:01 |
ruan_09 | o/ | 16:01 |
*** skazi_ has quit IRC | 16:01 | |
*** thinrichs has joined #openstack-meeting-cp | 16:02 | |
*** gouthamr has quit IRC | 16:02 | |
ktychkova | o/ | 16:03 |
*** edmondsw has joined #openstack-meeting-cp | 16:03 | |
*** gagehugo has joined #openstack-meeting-cp | 16:03 | |
gagehugo | o/ | 16:03 |
samueldmq | hi all | 16:03 |
lbragstad | hello, good morning, good afternoon | 16:04 |
*** gema has left #openstack-meeting-cp | 16:04 | |
lbragstad | #topic action items from last week | 16:04 |
*** openstack changes topic to "action items from last week (Meeting topic: policy)" | 16:04 | |
*** JASON___ has quit IRC | 16:04 | |
lbragstad | #link https://etherpad.openstack.org/p/keystone-policy-meeting | 16:04 |
lbragstad | agenda in case anyone doesn't have the link ^ | 16:04 |
lbragstad | for those who missed it - we spent last week discussing ayoung's proposal and going through ktychkova's Apache Fortress example | 16:05 |
*** piet_ has joined #openstack-meeting-cp | 16:05 | |
lbragstad | #link https://review.openstack.org/#/c/391624/ to ayoung's RBAC spec | 16:05 |
*** jaugustine has joined #openstack-meeting-cp | 16:06 | |
lbragstad | I, personally, have a bunch of questions left on that spec, but ayoung isn't here so we can circle back if he shows up | 16:06 |
lbragstad | did anyone have a chance to look at #link https://review.openstack.org/#/c/237521/ ? | 16:07 |
ruan_09 | yes, I've studied it | 16:07 |
lbragstad | ^ which was ktychkova's PoC on apache fortress | 16:07 |
lbragstad | i think the big hurdle we uncovered with that last week was that AF doesn't really allow scope - right? | 16:07 |
lbragstad | all role assignments are global | 16:07 |
dstanek | i thought it was interesting, but should be more configurable | 16:08 |
ruan_09 | we should distingush 2 things: the approach to externalize PDP and AF's capacity to modelize policies | 16:08 |
dstanek | i started reviewing but never finished | 16:08 |
ktychkova | yes, it's right | 16:08 |
ktychkova | But I think, it is AF problem anf AF's users :) | 16:08 |
lbragstad | (and I think one of the workarounds was to duplicate role assignments) | 16:08 |
edmondsw | I didn't get a chance to dive into it, but just going off the commit message it doesn't seem to address checks that have more than 2 levels | 16:08 |
ruan_09 | whether scoped or not scoped, it's up to each PDP | 16:09 |
samueldmq | edmondsw: what's 2+ level checks ? | 16:09 |
edmondsw | looking for an example... | 16:09 |
*** jgriffith_away is now known as jgriffith | 16:09 | |
edmondsw | e.g. from neutron: create_network:provider:network_type | 16:10 |
samueldmq | edmondsw: ok so those are checks on resources that come from database | 16:10 |
samueldmq | afaik AF only takes care of RBAC | 16:10 |
edmondsw | well, from code... may or may not be db code | 16:11 |
edmondsw | this is RBAC | 16:11 |
samueldmq | no this is not | 16:11 |
edmondsw | ? | 16:11 |
samueldmq | RBAC is purely authz based on roles | 16:11 |
edmondsw | and this isn't why? | 16:11 |
samueldmq | this is something more than RBAC, because this is not a role | 16:11 |
edmondsw | it is a role check | 16:11 |
ruan_09 | we've slightly modified the 237521, and it works for another external PDP | 16:11 |
samueldmq | edmondsw: ok. neutron: create_network:provider:network_type is a role check | 16:12 |
lbragstad | ruan_09 which one? | 16:12 |
samueldmq | I thought it was not. anyways this is not the point here, sorry (don't want to discuss naming) | 16:12 |
edmondsw | samueldmq, let's say you want to allow someone to create some kinds of networks but not others, based on their role... you define multiple policy checks, so you can check different roles depending on what the request is asking to create | 16:13 |
ruan_09 | the one we used in OPNFV: https://wiki.opnfv.org/display/moon/Moon | 16:13 |
*** ravelar has joined #openstack-meeting-cp | 16:14 | |
ruan_09 | I suggest making 237521 a generic hook to an external PDP | 16:15 |
thinrichs | Can we finish the example? create_network:provider:network_type is a role check on the user asking to create a network? Or it's a role check on the network they're trying to create? | 16:15 |
edmondsw | thinrichs a check of the user's role to make sure they are allowed to create the type of network they're trying to create | 16:16 |
lbragstad | https://github.com/openstack/neutron/blob/master/etc/policy.json#L50-L56 | 16:17 |
edmondsw | so I think only the service (in this case neutron) can do that, because only it is parsing the request and will know which of these kinds of checks it needs to test and which it does not | 16:17 |
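For readers without the linked file handy, the neutron policy.json entries being discussed look roughly like this (reproduced from memory, so the exact lines may differ slightly); the provider checks default to admin_only, which ultimately resolves to a role check:

```json
{
    "context_is_admin": "role:admin",
    "admin_only": "rule:context_is_admin",
    "create_network": "",
    "create_network:shared": "rule:admin_only",
    "create_network:provider:network_type": "rule:admin_only",
    "create_network:provider:physical_network": "rule:admin_only",
    "create_network:provider:segmentation_id": "rule:admin_only"
}
```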
ktychkova | I'm not ready to show how (if) AF can work in such an example. I'm sure I can answer next week. I need to run some tests | 16:18 |
lbragstad | edmondsw means it's up to the service to check ownership | 16:19 |
lbragstad | meaning* | 16:19 |
edmondsw | lbragstad, yeah, I don't know much about AF but I can't see how something else would easily do that | 16:19 |
edmondsw | lbragstad, though I'm not sure ownership is the right word | 16:20 |
lbragstad | edmondsw right - not AF specific, but just the way policy currently is in openstack | 16:20 |
*** reed_ has joined #openstack-meeting-cp | 16:20 | |
lbragstad | samueldmq does that answer your question? | 16:21 |
samueldmq | lbragstad: edmondsw yes if the only check behind it is a role check that's pure rbac | 16:21 |
edmondsw | samueldmq of course policy.json today would let you check other things besides the roles, but usually it's just checking role, as the default does in that example | 16:22 |
ruan_09 | if you want to know more about RBAC and its future, I suggest you to read http://www.profsandhu.com/miscppt/kth_abac_141029.pdf | 16:23 |
ruan_09 | a recent presentation by the father of RBAC | 16:23 |
samueldmq | edmondsw: yes, you're right ++ | 16:23 |
ruan_09 | rbac can't solve all the problems | 16:24 |
dstanek | ruan_09: thx for the link | 16:25 |
dstanek | more background is always helpful to folks for these types of asks | 16:26 |
lbragstad | dstanek ruan_09 ++ | 16:26 |
lbragstad | so we had another action item to document what we expect from policy at this level | 16:27 |
ruan_09 | maybe we can first enable an external PDP, then take a look at implementations to decide which one to use? | 16:28 |
lbragstad | do we expect it to be flexible enough for applications running on openstack? | 16:28 |
lbragstad | do we expect it to only work across services? | 16:28 |
thinrichs | I would think apps are out of scope and we should focus on ops. | 16:28 |
lbragstad | thinrichs i think i would agree with that | 16:29 |
thinrichs | What do you mean "work across services"? Do you mean decisions can be based on data/conditions from multiple services? | 16:29 |
lbragstad | thinrichs kind of like what it does today where you have policy that protects operations across openstack services | 16:30 |
thinrichs | Or do you mean you want access control to apply to *sequences* of API calls that span services? | 16:30 |
thinrichs | lbragstad: got it. Do we want a logically centralized way of expressing policy that applies to all services? | 16:31 |
lbragstad | thinrichs centralized in what sense? | 16:31 |
lbragstad | as in owned by a single service? | 16:31 |
ruan_09 | policy defined in one place and enforced by all services? | 16:32 |
thinrichs | Not sure what ownership means here. I'd say users want a single place to go see/write policy... | 16:32 |
thinrichs | and that implementationally we'd want each service to enforce that policy independently. | 16:32 |
dstanek | does any of this account for having policy created by someone other than cloud admins? domain admins for example? | 16:33 |
*** ayoung has joined #openstack-meeting-cp | 16:33 | |
ayoung | Sorry I'm late. | 16:33 |
thinrichs | dstanek: I'd think the multiple-author issue is one of how do we express policy--how do we make it easy for multiple people to collaboratively define policy? | 16:33 |
lbragstad | thinrichs does having it in a single place imply being easy to manage? | 16:34 |
thinrichs | I've definitely heard from folks saying that having N places to edit/maintain policy makes it hard to manage. Not saying 1 place necessarily makes it easy, but certainly easier. | 16:34 |
lbragstad | thinrichs or does ease of management mean more than whether it is centralized or not? | 16:34 |
ayoung | So, we have 3 levels of policy mechanisms that we have discussed. | 16:34 |
ayoung | Aside from the existing "edit it everywhere" | 16:35 |
thinrichs | Definitely want more than just centralization to make it easy to manage. | 16:35 |
lbragstad | thinrichs i would agree with that | 16:35 |
ayoung | 1. Is external, like the Fortress case, where we make the policy decisions on a remote system | 16:35 |
ayoung | 2. is the dynamic policy approach we pursued until last year | 16:35 |
ayoung | 3. is leave the existing policy alone and add an additional layer on top | 16:35 |
thinrichs | What's the difference between (1) and (2)? | 16:36 |
*** reed_ has quit IRC | 16:36 | |
ayoung | 3 Only works if we are OK with the existing checks, but want to perform more filtering, which is the only thing I think will actually make it through | 16:36 |
ayoung | thinrichs: in 2 the policy decision is still made inside the service; | 16:36 |
ayoung | we just fetch the remote policy rules from a central repo...Keystone policy | 16:37 |
ayoung | it had some real shortcomings | 16:37 |
thinrichs | I'd want to separate out PAP (administering/authoring policy) from PEP (enforcing). | 16:37 |
ayoung | the first was that the nodes did not know their own identity to fetch the proper policy file | 16:37 |
thinrichs | 1. is then PAP and PEP are the same (external) | 16:37 |
lbragstad | does anyone here talk to their operators, or know what kind of changes are made to policy in the real world? When folks change it, are they changing the whole thing or just making small tweaks? (edmondsw) | 16:37 |
thinrichs | 2. is then PAP is external and PEP is local | 16:37 |
ayoung | to, in that approach, the PAP was Keystone, the PEP/PDP was oslo-policy call inside the service | 16:37 |
ayoung | thinrichs: yep | 16:38 |
ruan_09 | thinrichs: no, PAP/PDP is external, PEP is inside each service | 16:38 |
ayoung | So, we do have a way we can continue on there. | 16:38 |
lbragstad | currently the PAP is whatever configuration management system you use for your deployment | 16:38 |
thinrichs | 3. Not sure what this one is...probably PEP is local and PAP is external | 16:38 |
ayoung | The issue was that we were trying to name the policy files by Endpoint. There was a process problem there: | 16:39 |
edmondsw | lbragstad, I think the feedback at the summits has been that folks are increasingly changing more in policy. in Tokyo was the folks were and Austin (I missed Barca) was that | 16:39 |
edmondsw | ignore the end of that... | 16:39 |
thinrichs | lbragstad: that's interesting--using CAPS as your PAP | 16:39 |
edmondsw | as for me, I'm overriding lots of policy.json defaults | 16:39 |
edmondsw | maybe 90% | 16:39 |
lbragstad | O.O | 16:40 |
ayoung | endpoints are essentially only named by their URLs in the service catalog. The install tools balked at the idea that you would: register the endpoint, get an endpoint ID from Keystone, add that to the config mgmt, update the config and then use that ID to fetch policy | 16:40 |
ayoung | but...it turns out we really don't need to do that. | 16:40 |
ayoung | we just need a name to fetch policy by. Does not need to be anything other than human readable, and pre-calculatable | 16:40 |
ayoung | and that is the approach I am using in the RBAC-Middleware approach | 16:40 |
lbragstad | thinrichs well - partially, because you typically have a CMS to lay down config, and policy.json is currently considered configuration... but the other part would be the keystone role API. | 16:40 |
ayoung | we could do the same for Dynamic policy | 16:41 |
ayoung | lbragstad: and that was a huge driving factor; people were more comfortable with policy in CMS than in a dynamic store like Keystone. | 16:41 |
ayoung | Which is another reason I pursued the RBAC in Middleware approach instead | 16:41 |
ayoung | RBAC is "on top" of existing policy | 16:42 |
ayoung | as none of the existing policy files check roles on most calls | 16:42 |
edmondsw | ayoung ? | 16:42 |
lbragstad | they check *a* role | 16:43 |
ayoung | the shortcoming, of course, is that it does only RBAC, not the full ABAC, but then again, oslo-policy can only do ABAC if the services provide the attributes | 16:43 |
lbragstad | right | 16:43 |
ayoung | and we don't know what attributes they are going to provide...it is all over the place, as jamielennox found | 16:43 |
lbragstad | which i don't necessarily consider a bad thing | 16:43 |
ayoung | lbragstad: not even A role...just that the token has a project | 16:43 |
edmondsw | default policy.json can't check roles, plural, today because there is only one default role :) | 16:43 |
ayoung | it implies a role, but a weird token with 0 roles but a project ID would pass the policy check. | 16:44 |
edmondsw | ayoung, a large number of the default checks look for the admin role | 16:45 |
lbragstad | edmondsw from your perspective, what would assist you in making fewer policy overrides? | 16:45 |
ayoung | So...those are some of the driving reasons I posted this https://review.openstack.org/#/c/391624/ | 16:45 |
ayoung | edmondsw: yep, and we still need to complete the 968696 work around that | 16:45 |
edmondsw | lbragstad, fixing each service individually... this is a nova problem, a neutron problem, a glance problem, etc... not a keystone problem | 16:45 |
ayoung | an admin api is an admin API. | 16:46 |
ayoung | Opening it up will still required a policy.json change | 16:46 |
edmondsw | making each service (nova, etc.) consistent with the others would be a big help | 16:46 |
lbragstad | edmondsw would you say nova's approach to putting policy in code (oslo) moves in the right direction? | 16:46 |
edmondsw | I think what nova did, and cinder is now doing, to move policy defaults into code instead of policy.json is a good step as well... makes the files much more readable | 16:46 |
lbragstad | edmondsw or is that moot to you? | 16:46 |
edmondsw | and the yaml support | 16:47 |
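As a rough illustration of the policy-in-code pattern being praised here, using oslo.policy (the rule shown is illustrative and not copied from nova or cinder):

```python
# Sketch of registering policy defaults in code with oslo.policy.
from oslo_config import cfg
from oslo_policy import policy

# Defaults live in code; deployers then only override what they need in a
# small policy.json/policy.yaml instead of maintaining the whole file.
rules = [
    policy.RuleDefault(
        name='os_compute_api:servers:create',
        check_str='rule:admin_or_owner',
        description='Create a server.'),
]

enforcer = policy.Enforcer(cfg.CONF)
enforcer.register_defaults(rules)
```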
ayoung | The other thing that is lacking is the ability to map from the policy rules to the APIs | 16:47 |
edmondsw | ayoung the admin_api rule you're referring to checks for the admin role... | 16:47 |
ayoung | with the RBAC in Middleware, we enforce policy on the URL pattern, instead of some cutpoint inside the code | 16:47 |
ayoung | and I have an analogy. | 16:47 |
ayoung | RBAC is like file permissions, oslo-policy is like SELinux | 16:48 |
lbragstad | ayoung have you seen dstanek's example on your review? | 16:48 |
lbragstad | regarding the URL pattern? | 16:48 |
edmondsw | ayoung, you can enforce a certain level of RBAC on the URL pattern, but there are RBAC cases that won't have the information you need in the URL | 16:48 |
edmondsw | we were discussing one before you joined | 16:48 |
ayoung | edmondsw: yep...anything on the payload or even query parameters are not covered yet | 16:49 |
ayoung | and anything on the object fetched from the database is out, too | 16:49 |
edmondsw | e.g. you ask to create a network, and neutron checks not only that you can create a network (one policy check) but also that you can specify what you did in the request body (potentially multiple additional policy checks) | 16:49 |
edmondsw | good, we're on the same page :) | 16:50 |
ayoung | edmondsw: so nothing prevents that from happening now, but my understanding is that the code that performs things like that is very much hard coded, and should be left to the neutron team to affect. | 16:50 |
edmondsw | agreed | 16:50 |
ayoung | if there are cases where they need a specific role to perform something, that is going to be new effort as well | 16:51 |
ayoung | There is nothing keeping the Neutron team from using the RBAC mechanism later on if they want to, just that I am not driving the implementation off those use cases | 16:51 |
dstanek | ayoung: i'd love to get all of the use cases documented so that we can apply the solutions to them and see how they compare | 16:52 |
lbragstad | dstanek ++ | 16:52 |
thinrichs | +1 | 16:52 |
ayoung | It is a free form pattern. They could bastardize it to do anything they wanted, so long as they can map a pattern to a role | 16:52 |
ayoung | neutron policy is not something that the end deployer should be breaking.... | 16:52 |
edmondsw | breaking? | 16:52 |
ayoung | editing the policy.json file and changing it in such a way that it triggers a 500 error | 16:53 |
lbragstad | 8 minute warning | 16:53 |
edmondsw | if you add roles, then you have no choice but to edit policy | 16:53 |
ayoung | edmondsw: hence my spec and approach using RBAC in middleware | 16:54 |
dstanek | ayoung: i still think there are cases where the role must exist in the policy file | 16:54 |
ayoung | that is what https://review.openstack.org/#/c/391624/ is proposing | 16:54 |
edmondsw | ayoung sorry, I haven't read the latest version yet... I had I think 50+ comments on the version I did read, but I think you were going to rewrite after that | 16:54 |
ayoung | dstanek: heh...probably, but we can defer that until we can do roles in general | 16:54 |
edmondsw | dstanek definitely | 16:55 |
ayoung | I would suggest we resurrect dynamic policy once we get that one in | 16:55 |
ayoung | the idea of fetching policy by 'tag' as opposed to service name to do endpoint specific policy will work | 16:55 |
edmondsw | we're not suggesting that the ability for neutron, etc. to make checks against the role would be removed, are we? Because that is a non-starter | 16:55 |
dstanek | ayoung: my problem with what we have been talking about so far is that i don't see the vision. so i don't know if the steps get us there | 16:55 |
ayoung | the default is that nova would fetch the "compute" policy | 16:55 |
ayoung | dstanek: I would say that the vision is 3 things: | 16:56 |
ayoung | 1. perform the RBAC check based on the URL, not a random string, so that people can figure out what they need to delegate | 16:56 |
lbragstad | edmondsw the spec pulls the role check into keystonemiddleware | 16:56 |
edmondsw | s/the/an/ | 16:56 |
ayoung | 2. make it possible for people to create more fine-grained roles for a set of tasks so they can delegate a subset of them | 16:57 |
ayoung | 3. make it possible for people to change the RBAC for operations without breaking the parts of policy that require engineering knowledge of the remote systems | 16:57 |
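To make point 1 concrete: the idea is that the middleware matches the request's URL pattern and method against a role mapping before the service's own policy runs. A purely hypothetical mapping, not the actual format from the spec, might look like:

```yaml
# Hypothetical URL-pattern-to-role mapping enforced in keystonemiddleware;
# paths, methods, and roles here are invented for illustration.
compute:
  - path: /servers
    method: POST
    roles: [Member]
  - path: /servers/{server_id}
    method: DELETE
    roles: [Member]
  - path: /os-hypervisors
    method: GET
    roles: [admin]
```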
edmondsw | you can pull A role check into keystonemiddleware, but you can't prevent the service from doing additional checks also based on role | 16:58 |
lbragstad | edmondsw that's kinda what dstanek left as a comment | 16:58 |
ayoung | I did my best to make it as clear as possible in that spec. | 16:58 |
lbragstad | alright - two minutes left, but it sounds like we still have a lot to discuss on the spec | 16:59 |
lbragstad | I'd like to have folks bring some more ideas to the meeting next week | 16:59 |
ayoung | can we agree to all read and comment on the spec for next week? And maybe have a vote on it to pursue or not? | 16:59 |
lbragstad | so if you have any - please add them as agenda items | 16:59 |
thinrichs | ayoung: I'm still missing the big picture. Are you aiming for a single PAP for all services? Are you envisioning the possibility of an external PDP? Are you committed to local PEPs? | 16:59 |
lbragstad | ayoung i have been | 16:59 |
ayoung | thinrichs: single PAP. local PEPs | 17:00 |
ayoung | thinrichs: Only RBAC | 17:00 |
edmondsw | I think #3 touches on another pain point I have changing policy... that lots of times one check leads to another down the line in unexpected ways... e.g. creating a new server/vm is going to end up causing checks in neutron, cinder, glance, etc. | 17:00 |
lbragstad | ruan_09 your proposal for external PDP would be good for that | 17:00 |
ruan_09 | ok, | 17:00 |
lbragstad | alright - let's continue in #openstack-keystone if needed | 17:00 |
lbragstad | #endmeeting | 17:00 |
*** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings" | 17:00 | |
openstack | Meeting ended Wed Nov 30 17:00:48 2016 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) | 17:00 |
openstack | Minutes: http://eavesdrop.openstack.org/meetings/policy/2016/policy.2016-11-30-16.01.html | 17:00 |
openstack | Minutes (text): http://eavesdrop.openstack.org/meetings/policy/2016/policy.2016-11-30-16.01.txt | 17:00 |
openstack | Log: http://eavesdrop.openstack.org/meetings/policy/2016/policy.2016-11-30-16.01.log.html | 17:00 |
*** edmondsw has left #openstack-meeting-cp | 17:01 | |
*** thinrichs has quit IRC | 17:02 | |
*** gagehugo has left #openstack-meeting-cp | 17:04 | |
*** spilla has left #openstack-meeting-cp | 17:06 | |
*** ayoung has quit IRC | 17:16 | |
*** topol has quit IRC | 17:17 | |
*** MarkBaker has quit IRC | 17:18 | |
*** MarkBaker has joined #openstack-meeting-cp | 17:19 | |
*** jgriffith is now known as jgriffith_away | 17:19 | |
*** tongli has quit IRC | 17:30 | |
*** ruan_09 has quit IRC | 17:45 | |
*** MarkBaker has quit IRC | 17:52 | |
*** MarkBaker has joined #openstack-meeting-cp | 17:53 | |
*** jgriffith_away is now known as jgriffith | 18:00 | |
*** MarkBaker has quit IRC | 18:06 | |
*** topol has joined #openstack-meeting-cp | 18:23 | |
*** edtubill has quit IRC | 18:26 | |
*** lyarwood_ is now known as lyarwood | 18:30 | |
*** kbyrne has quit IRC | 18:35 | |
*** kbyrne has joined #openstack-meeting-cp | 18:38 | |
*** MarkBaker has joined #openstack-meeting-cp | 19:02 | |
*** diablo_rojo_phon has joined #openstack-meeting-cp | 19:12 | |
*** MarkBaker has quit IRC | 20:07 | |
*** gouthamr has joined #openstack-meeting-cp | 20:09 | |
*** edtubill has joined #openstack-meeting-cp | 20:18 | |
*** lyarwood is now known as lyarwood_ | 20:19 | |
*** lyarwood_ is now known as lyarwood | 20:19 | |
*** MarkBaker has joined #openstack-meeting-cp | 21:14 | |
*** diablo_rojo_phon has quit IRC | 21:28 | |
*** gouthamr has quit IRC | 21:49 | |
*** edtubill has quit IRC | 22:40 | |
*** rarcea_ has quit IRC | 23:06 | |
*** ravelar has quit IRC | 23:14 | |
*** topol has quit IRC | 23:20 | |
*** ravelar has joined #openstack-meeting-cp | 23:26 | |
*** beekhof has joined #openstack-meeting-cp | 23:29 | |
*** ravelar has quit IRC | 23:33 | |
*** sdague has quit IRC | 23:38 | |
*** lamt has quit IRC | 23:44 | |
*** xyang1 has quit IRC | 23:50 |