*** JaiRoc_ has joined #openstack-meeting-cp | 01:28 | |
*** JaiRoc_ has left #openstack-meeting-cp | 01:29 | |
*** sdake has quit IRC | 02:29 | |
*** sdake has joined #openstack-meeting-cp | 03:21 | |
*** sdake_ has joined #openstack-meeting-cp | 03:38 | |
*** sdake has quit IRC | 03:40 | |
*** coolsvap has joined #openstack-meeting-cp | 03:41 | |
*** gouthamr has quit IRC | 04:56 | |
*** prateek_ has joined #openstack-meeting-cp | 05:26 | |
*** prateek_ has quit IRC | 05:31 | |
*** mugsie has quit IRC | 05:50 | |
*** mugsie has joined #openstack-meeting-cp | 05:51 | |
*** sdake has joined #openstack-meeting-cp | 06:22 | |
*** sdake_ has quit IRC | 06:23 | |
*** sdake_ has joined #openstack-meeting-cp | 07:14 | |
*** sdake_ has quit IRC | 07:14 | |
*** sdake has quit IRC | 07:15 | |
*** sdake has joined #openstack-meeting-cp | 07:26 | |
*** zigo has quit IRC | 07:39 | |
*** zigo has joined #openstack-meeting-cp | 07:42 | |
*** zigo is now known as Guest24756 | 07:42 | |
*** sdake has quit IRC | 07:58 | |
*** Guest24756 is now known as zigo | 08:54 | |
*** prateek has joined #openstack-meeting-cp | 09:26 | |
*** sdake has joined #openstack-meeting-cp | 09:44 | |
*** rarcea has joined #openstack-meeting-cp | 10:50 | |
*** rarcea has quit IRC | 10:51 | |
*** rarcea has joined #openstack-meeting-cp | 10:51 | |
*** sdake_ has joined #openstack-meeting-cp | 11:12 | |
*** sdake has quit IRC | 11:14 | |
*** jrollen has joined #openstack-meeting-cp | 11:36 | |
*** gouthamr has joined #openstack-meeting-cp | 11:41 | |
*** sdake_ is now known as sdake | 11:52 | |
*** jrollen has quit IRC | 12:18 | |
*** jrollen has joined #openstack-meeting-cp | 12:41 | |
*** jrollen has quit IRC | 12:45 | |
*** jrollen has joined #openstack-meeting-cp | 12:45 | |
*** jroll has quit IRC | 12:53 | |
*** jrollen is now known as jroll | 12:55 | |
*** prateek has quit IRC | 13:03 | |
*** xyang1 has joined #openstack-meeting-cp | 14:39 | |
*** rhe00 has left #openstack-meeting-cp | 15:37 | |
*** sdake has quit IRC | 16:30 | |
*** sdake has joined #openstack-meeting-cp | 16:34 | |
*** piet_ has joined #openstack-meeting-cp | 16:54 | |
*** mriedem has joined #openstack-meeting-cp | 16:58 | |
ildikov | #startmeeting cinder-nova-api-changes | 17:00 |
openstack | Meeting started Mon Oct 3 17:00:00 2016 UTC and is due to finish in 60 minutes. The chair is ildikov. Information about MeetBot at http://wiki.debian.org/MeetBot. | 17:00 |
openstack | Useful Commands: #action #agreed #help #info #idea #link #topic #startvote. | 17:00 |
*** openstack changes topic to " (Meeting topic: cinder-nova-api-changes)" | 17:00 | |
openstack | The meeting name has been set to 'cinder_nova_api_changes' | 17:00 |
ildikov | scottda ildikov DuncanT ameade cFouts johnthetubaguy jaypipes takashin alaski e0ne jgriffith tbarron andrearosa hemna erlon mriedem gouthamr ebalduf patrickeast smcginnis diablo_rojo gsilvis xyang1 raj_singh | 17:00 |
mriedem | o/ | 17:00 |
cFouts | hi | 17:00 |
scottda | hi | 17:00 |
jgriffith | present | 17:00 |
ildikov | hi All :) | 17:01 |
mriedem | i have a quick update once we're done waiting for people | 17:01 |
ildikov | should I do a quick ping on the Cinder channel? | 17:02 |
mriedem | shrug | 17:02 |
ildikov | ok, pinged a few guys | 17:03 |
ildikov | mriedem: I think we can move to your update | 17:03 |
mriedem | ok, we have a working nova-initiated swap volume test in tempest now http://logs.openstack.org/73/374373/2/check/gate-tempest-dsvm-neutron-full-ubuntu-xenial/cb37dbc/console.html#_2016-10-03_16_12_03_518757 | 17:03 |
mriedem | not merged yet | 17:04 |
mriedem | stack starts here: https://review.openstack.org/#/c/374373/ | 17:04 |
ildikov | mriedem: wow, that looks great!!! | 17:04 |
hemna | doink | 17:04 |
jgriffith | sweet | 17:04 |
hemna | ooh nice | 17:05 |
mriedem | it's gotten decent review in tempest though so i expect that to be merged this week | 17:05 |
ildikov | mriedem: that sounds pretty good | 17:06 |
ildikov | scottda: mriedem: did we have other tests in mind that we would need? | 17:06 |
mriedem | the only other thing i had was i did the thorough review on removing volume_api.check_attach in nova https://review.openstack.org/#/c/335358/ | 17:06 |
mriedem | ildikov: scottda's tempest change for retype doesn't trigger volume swap in nova | 17:06 |
ildikov | I mean in general, not for the new API | 17:06 |
mriedem | i'm not really sure why | 17:06 |
mriedem | but i don't know much about retype | 17:07 |
scottda | ildikov: Yes, we need to run tempest test on 2-node cinder to call swap_volume | 17:07 |
scottda | retype from lvm->lvm on the same node won't call swap_volume | 17:07 |
scottda | Just some "smarts" in lvm driver code. | 17:07 |
ildikov | scottda: ok, I'll add notes to the etherpad | 17:07 |
mriedem | scottda: will it be a retype test? | 17:08 |
mriedem | or volume migration? | 17:08 |
scottda | But I *think* we can make it work for 2-node. Sounds like it would duplicate coverage of swap_volume. So useful, but not imperative | 17:08 |
scottda | mriedem: Yes, it would be cinder retype-with-migration | 17:08 |
scottda | so both | 17:08 |
mriedem | ok | 17:08 |
mriedem | not a total swap_volume duplicate, | 17:08 |
mriedem | because the test above that i'm working on is nova-api initiated, not cinder | 17:09 |
mriedem | so there is less back and forth | 17:09 |
scottda | ok, cool. So still useful for call to novaclient, etc. | 17:09 |
mriedem | yes | 17:09 |
ildikov | mriedem: I've just gotten there to check your comments | 17:09 |
ildikov | I don't know the answer to all questions you raised | 17:10 |
*** piet_ has quit IRC | 17:10 | |
ildikov | like what happens with unreserve_volume if we never called reserve, but end up with the cleanup path | 17:10 |
*** piet_ has joined #openstack-meeting-cp | 17:11 | |
ildikov | does unreserve do volume status check? | 17:11 |
ildikov | scottda: hemna: ^^ | 17:11 |
mriedem | i checked | 17:11 |
mriedem | and it does | 17:11 |
mriedem | so i think that would fail | 17:11 |
mriedem | with a conflict | 17:11 |
ildikov | :( | 17:12 |
hemna | cinder volume has to be in-use | 17:12 |
hemna | https://github.com/openstack/cinder/blob/master/cinder/volume/api.py#L635-L638 | 17:12 |
hemna | in-use or available I think by the looks of that | 17:12 |
ildikov | hemna: what if something goes wrong after calling reserve and then we call unreserve to clean things up? | 17:13 |
ildikov | as reserve moves it to 'attaching' IIRC | 17:13 |
mriedem | if it allows available, | 17:13 |
hemna | meaning the volume is in error state before calling unreserve ? | 17:13 |
mriedem | then it seems unreserve would be fine if you never called reserve | 17:13 |
scottda | expected status must be "attaching" | 17:13 |
hemna | if an error happens between reserve and unreserve....then...it's in error state and it's borked. | 17:13 |
hemna | scottda, oh yah you are right. I didn't see line 632 | 17:14 |
ildikov | hemna: we are talking more about issues on Nova side because of which we want to undo the steps like volume attach | 17:14 |
mriedem | yeah https://github.com/openstack/cinder/blob/master/cinder/volume/api.py#L632 | 17:14 |
hemna | my bad | 17:14 |
scottda | And that conditional update stuff is specifically to eliminate races. So it cannot change on the cinder side. | 17:15 |
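[Editor's note: a minimal Python sketch of the conditional update just discussed. The names here are illustrative, not the actual cinder.volume.api code; the point is that unreserve only succeeds from the 'attaching' state, so calling it when reserve was never called fails with a conflict.]

```python
# Hypothetical sketch of Cinder's unreserve conditional update: the
# status may only move back from 'attaching'; any other current state
# fails, which is why cleanup after a never-called reserve conflicts.

class ConflictError(Exception):
    """Stands in for the 400/409 the real API returns."""

def unreserve_volume(volume):
    # Conditional update: succeed only if status is 'attaching',
    # flipping it back to 'available' (race-free by design).
    if volume.get('status') != 'attaching':
        raise ConflictError(
            "volume %s is %s, expected 'attaching'"
            % (volume['id'], volume['status']))
    volume['status'] = 'available'
    return volume

vol = {'id': 'vol-1', 'status': 'attaching'}
unreserve_volume(vol)
assert vol['status'] == 'available'

# A second unreserve (no prior reserve) now fails with a conflict,
# matching the failure mode discussed above.
try:
    unreserve_volume(vol)
except ConflictError:
    pass
```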
mriedem | so the reason ildikov is asking, | 17:16 |
mriedem | is because in this change https://review.openstack.org/#/c/335358/17/nova/compute/api.py | 17:16 |
mriedem | based on the version of the compute, the nova-api may or may not reserve the volume | 17:16 |
mriedem | on the compute it's unreserving the volume https://review.openstack.org/#/c/335358/17/nova/compute/manager.py | 17:16 |
ildikov | the history is that we identified one place where reserve_volume was not called | 17:16 |
ildikov | and by removing check_attach it now matters | 17:17 |
mriedem | in case _prep_block_device fails | 17:17 |
ildikov | or matters more and we're now trying to cover the failure scenarios for cleanup, ie unreserve | 17:17 |
mriedem | my question is more clearly asked here https://review.openstack.org/#/c/335358/17/nova/conductor/manager.py | 17:17 |
hemna | so looking at the cinder api call | 17:18 |
hemna | if it's not in attaching, you'll get an error | 17:18 |
mriedem | btw, is that a 400 or 409? | 17:19 |
hemna | I'll try and track that down | 17:19 |
hemna | it's not obvious | 17:20 |
scottda | I'd like to make sure that with the new API we're not creating all these exceptional cases that are all shoved into one API call. | 17:20 |
ildikov | mriedem: also have you checked the other calls of _check_attach_and_reserve_volume regarding unreserve? I kinda thought those are covered, but surely haven't checked all scenarios | 17:20 |
mriedem | ildikov: no that's why i asked in https://review.openstack.org/#/c/335358/17/nova/compute/api.py@3394 | 17:21 |
ildikov | ok, I'll try to check then, I read your comment more like you're leaning towards it's not done | 17:22 |
mriedem | ildikov: i believe it's because with attach_volume in the compute api, | 17:22 |
*** tongli has joined #openstack-meeting-cp | 17:22 | |
mriedem | it reserves the volume in the api, then casts to the compute to attach, | 17:23 |
mriedem | and if the attach on the compute fails, the compute unreserves | 17:23 |
ildikov | I can fix additional occurrences if they're missing, but it would be pretty odd | 17:23 |
mriedem | however, if the cast to the compute failed for some reason, we'd leave the volume orphaned and you'd have to reset the state to get it back | 17:23 |
ildikov | yeap, that's right | 17:23 |
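[Editor's note: an illustrative sketch, not Nova's actual code, of the attach flow just described: nova-api reserves the volume and casts to the compute; the compute unreserves on attach failure, but a lost cast leaves the volume orphaned in 'attaching' until an admin resets its state.]

```python
# Three outcomes of the reserve -> cast -> attach flow described above.
def attach_flow(volume, cast_ok, attach_ok):
    volume['status'] = 'attaching'      # reserve_volume in nova-api
    if not cast_ok:
        return 'orphaned'               # cast lost: nobody unreserves
    if attach_ok:
        volume['status'] = 'in-use'     # compute attached successfully
        return 'attached'
    volume['status'] = 'available'      # compute unreserves on failure
    return 'cleaned-up'

vol = {'id': 'vol-1', 'status': 'available'}
assert attach_flow(vol, cast_ok=False, attach_ok=False) == 'orphaned'
assert vol['status'] == 'attaching'     # stuck: needs a state reset
```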
hemna | anyway, I can't make sense of the response to the conditional_update failure | 17:23 |
hemna | and how the wsgi converts that into a failure | 17:24 |
hemna | 400 vs 409 | 17:24 |
mriedem | ildikov: this attempts to fix that https://review.openstack.org/#/c/290793/ | 17:24 |
mriedem | hemna: yeah i couldn't either | 17:24 |
* hemna isn't happy about it | 17:24 | |
scottda | gorka would have the answer, I think. But it shouldn't require a genius to figure it out. | 17:24 |
hemna | scottda, +1 | 17:25 |
hemna | it's stupid that it's this hard to figure it out | 17:25 |
ildikov | mriedem: ah ok, I got it fully now what you mean, but that's kinda independent from the check_attach removal and it's a generic how to handle reserve/unreserve | 17:25 |
mriedem | ildikov: yes | 17:26 |
mriedem | just more garbage that needs cleaning up | 17:26 |
hemna | https://github.com/openstack/cinder/blob/master/cinder/api/openstack/wsgi.py#L600 | 17:27 |
hemna | maybe that ? | 17:27 |
hemna | dunno | 17:27 |
mriedem | ok, so anyway, ildikov's check_attach patch needs some work | 17:30 |
mriedem | jgriffith: updates on your changes? | 17:30 |
ildikov | mriedem: as I see that patch you sent is taken care of, but maybe needs more eyes to figure things out | 17:30 |
jgriffith | mriedem: yeah, I pushed another update yesterday... | 17:30 |
jgriffith | mriedem: so there's some things to be fixed with migrate and unshelve updating correctly | 17:31 |
ildikov | mriedem: yeap, tnx for the review, I'll answer/fix this week | 17:31 |
jgriffith | mriedem: I also started working on the idea of ack'ing the create/terminate connection phases as an option | 17:31 |
jgriffith | mriedem: that interestingly started to make things look a lot like today's initialize--->attach :( | 17:32 |
mriedem | ildikov: _attach_volume_shelved_offloaded is also busted, it doesn't unreserve the volume in case attach fails | 17:32 |
jgriffith | mriedem: so I started an experiment to just overhaul/improve that type of flow (better names, not so many optional args) that uses attachment_ids and stores the connectors | 17:32 |
ildikov | mriedem: is there a bug report for that? we might track these issues, just to not get lost in this | 17:33 |
scottda | maybe we'd be better off with a cleanup_failed_attach API to unreserve regardless of state. | 17:33 |
jgriffith | mriedem: scottda are we talking about the same thing still? | 17:34 |
jgriffith | mriedem: scottda or is this another topic that you're discussion with ildikov ? | 17:34 |
jgriffith | discussing | 17:34 |
scottda | We were earlier talking about cleaning up failed attaches... | 17:34 |
scottda | But the call to unreserve only works if state is "attaching". So there were issues with ildikov 's patches to remove calls to reserve | 17:35 |
jgriffith | ok, carry on then | 17:35 |
*** smcginnis has joined #openstack-meeting-cp | 17:35 | |
* jgriffith goes back to what he was doing | 17:35 | |
mriedem | ildikov: i'll open a bug | 17:36 |
jgriffith | scottda: FWIW you could just change that :) | 17:36 |
scottda | jgriffith: Yeah, but the conditional update stuff to eliminate races would get busted. | 17:36 |
ildikov | jgriffith: we finished with that, just mriedem brought up one more example of the issue :) | 17:36 |
scottda | jgriffith: Anyway, didn't want to drag us back. You should carry on with your update. | 17:36 |
jgriffith | scottda: by allowing it to unreserve an "error-attaching"? I don't see how | 17:36 |
mriedem | jgriffith: have you had a clean ci run yet? | 17:36 |
jgriffith | mriedem: nope, down to eight failures | 17:37 |
mriedem | at least for the attach flow | 17:37 |
mriedem | ok, shelve is all sorts of weird | 17:37 |
mriedem | and not well tested probably | 17:37 |
jgriffith | mriedem: I've discovered that :) | 17:37 |
jgriffith | mriedem: there's something kinda funky when it calls os-brick | 17:37 |
jgriffith | will get to that today | 17:37 |
mriedem | yeah, 'save all of my vm state in the db and take care of it, but destroy my guest and don't charge me until i want it back' | 17:37 |
mriedem | super pet scenario | 17:38 |
jgriffith | mriedem: yeah! | 17:38 |
jgriffith | mriedem: frankly I've come to hate the *feature* :) | 17:38 |
mriedem | as have several of us | 17:38 |
jgriffith | mriedem: I particularly hate the concept of attaching a volume to a shelved instnace | 17:38 |
jgriffith | instance even :) | 17:38 |
mriedem | yes, that's what i was just looking at | 17:38 |
mriedem | it's not attached until unshelve | 17:38 |
*** sdake has quit IRC | 17:39 | |
jgriffith | mriedem: well in reality correct, but from Cinder's point of view :) | 17:39 |
mriedem | yeah | 17:39 |
mriedem | i'm not sure how well we handle detaching those when we delete a shelved offloaded instance either | 17:39 |
jgriffith | mriedem: anyway, I got most of it all worked out, just something in brick maybe privsep was puking | 17:39 |
mriedem | because i think that assumes terminate_connection will work | 17:39 |
jgriffith | mriedem: so that part is actually covered which is nice | 17:39 |
jgriffith | mriedem: it does a cleanup of everything in the BDM talbe | 17:40 |
jgriffith | table | 17:40 |
jgriffith | mriedem: oh that part :) | 17:40 |
jgriffith | mriedem: so my patch added some things, because we were cheating | 17:40 |
jgriffith | mriedem: my patch has a no_connect flag in the connector | 17:40 |
jgriffith | mriedem: so now nova and cinder both know that it's not actually being used yet | 17:41 |
mriedem | ok yeah i think we might have fixed this in the api anyway | 17:41 |
mriedem | yeah _get_stashed_volume_connector should return None | 17:41 |
mriedem | so we don't attempt to terminate the connection | 17:41 |
jgriffith | mriedem: exactly | 17:41 |
jgriffith | mriedem: anyway, I'm working on an alternate proposal | 17:42 |
jgriffith | mriedem: I want to post this today or tomorrow and get some folks to weigh in on both of these options | 17:42 |
mriedem | ok | 17:42 |
jgriffith | mriedem: see what folks think is going to be best long term | 17:42 |
ildikov | jgriffith: that sounds good, thanks | 17:43 |
*** sdake has joined #openstack-meeting-cp | 17:44 | |
ildikov | jgriffith: when I checked your spec the last time I saw comments from johnthetubaguy | 17:44 |
jgriffith | ildikov: yes, the addition of a confirmation that nova finished the connection | 17:45 |
jgriffith | ildikov: My latest update added an acknowledge option to the create_attachment call | 17:45 |
ildikov | jgriffith: yeap, we touched on that briefly today too | 17:46 |
jgriffith | ildikov: so you'd leave the state in attaching until Nova comes back and says "finalize" it | 17:46 |
ildikov | jgriffith: and if Nova never comes back it remains in attaching state? | 17:47 |
mriedem | isn't that basically what we have today with os-reserve and os-attach? | 17:47 |
mriedem | os-reserve reserves the volume for attach, and os-attach changes to in-use when we're done | 17:47 |
smcginnis | mriedem: We kind of realized that this morning. | 17:48 |
smcginnis | Or at least I did. | 17:48 |
ildikov | smcginnis: kinda same here, I didn't put those together in a flow until now | 17:49 |
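[Editor's note: a hypothetical sketch comparing the two flows just described: today's os-reserve/os-attach pair, and the proposed create_attachment with an explicit finalize acknowledgement that leaves the volume 'attaching' until Nova comes back. All names are illustrative only.]

```python
# Both flows are two-phase: a reservation step, then a completion step.
class Volume:
    def __init__(self):
        self.status = 'available'
        self.attachments = {}

    # Today's flow: os-reserve moves to 'attaching', os-attach to 'in-use'.
    def reserve(self):
        self.status = 'attaching'

    def attach(self, host):
        self.attachments[host] = 'attached'
        self.status = 'in-use'

    # Proposed flow: create_attachment leaves 'attaching' until the
    # consumer (Nova) comes back and acknowledges with finalize.
    def create_attachment(self, host):
        self.status = 'attaching'
        self.attachments[host] = 'reserved'
        return host  # stands in for an attachment_id

    def finalize(self, attachment_id):
        self.attachments[attachment_id] = 'attached'
        self.status = 'in-use'

v = Volume()
aid = v.create_attachment('compute-1')
assert v.status == 'attaching'   # stays here if Nova never comes back
v.finalize(aid)
assert v.status == 'in-use'
```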
mriedem | you need my expert 3rd party input | 17:50 |
mriedem | :P | 17:50 |
scottda | I think johnthetubaguy had pointed out a race last week... | 17:50 |
smcginnis | But this would still clean up what we have now. | 17:50 |
scottda | where the physical attach to the nova host failed. | 17:50 |
scottda | And in the shared-attach situation the cleanup would kill a new attachment that was shared to the same host. | 17:50 |
scottda | And that's why he thought this Ack would be useful. | 17:51 |
ildikov | mriedem: lol :) that we always need :) | 17:51 |
ildikov | scottda: shared-attach as multi-attach? | 17:53 |
scottda | ildikov: no, for those drivers that share a connection for >1 attach to the same host | 17:53 |
scottda | so, multi-attach is required. | 17:54 |
ildikov | scottda: ah, right, got it | 17:54 |
scottda | But it's specific to drivers | 17:54 |
jgriffith | sorry, got pulled away :) | 17:54 |
jgriffith | but looks like you all have it | 17:54 |
scottda | If I had an intern, I would request that they write out the entire state machine for volume attach, including all the weird corner cases. | 17:54 |
ildikov | jgriffith: slowly getting to the same page :) | 17:55 |
jgriffith | scottda: I've already done that :) | 17:55 |
jgriffith | scottda: the only *weird* cases are customizations in the drivers | 17:55 |
jgriffith | this stuff really isn't that hard, until you have 70+ backends to figure out | 17:56 |
scottda | jgriffith: By weird I also mean shelve_offload and nova live_migration | 17:56 |
ildikov | scottda: there are internship programs in OpenStack, so you can have one :) | 17:56 |
jgriffith | scottda: those aren't really that weird in the old sense though, other than the concept | 17:56 |
scottda | and also drivers that share connections to the same compute host. And multi-attach | 17:56 |
jgriffith | scottda: right... and that's a driver/backend complication | 17:57 |
ildikov | smcginnis: mriedem: BTW before we run out of time completely, I saw on the cross-project etherpad that our topic is a bit too specific for that track | 17:58 |
*** docaedo has quit IRC | 17:58 | |
smcginnis | :[ | 17:58 |
scottda | ildikov: Yes. It seems in the past that they prefer we do a cinder-nova sync for stuff like this. | 17:59 |
scottda | Just 2 projects doesn't seem to count for cross-project | 17:59 |
smcginnis | mriedem: I need to finalize the Cinder sessions yet. Do you have enough to dedicate one to cinder-nova, or should I look at doing that? | 17:59 |
ildikov | smcginnis: mriedem: I saw overlaps between the Nova and Cinder sessions, so I would guess we can pick one | 17:59 |
mriedem | smcginnis: we have plenty of room | 18:00 |
mriedem | smcginnis: i already penciled in a nova/cinder session friday morning | 18:00 |
mriedem | in the nova room | 18:00 |
smcginnis | mriedem: Awesome, thanks. | 18:00 |
ildikov | jgriffith: I would guess that with a bit simpler and generic API the specifics could be handled by the drivers | 18:00 |
jgriffith | ildikov: +1 | 18:00 |
mriedem | smcginnis: https://etherpad.openstack.org/p/ocata-nova-summit-ideas | 18:01 |
ildikov | mriedem: sounds great | 18:01 |
*** sdake has quit IRC | 18:01 | |
ildikov | mriedem: tnx!!! | 18:01 |
scottda | smcginnis: I think we wanted a cinder session for the new API that went before syncing with Nova, didn't we? | 18:01 |
ildikov | scottda: I think that would be beneficial | 18:01 |
mriedem | we're over time, | 18:01 |
smcginnis | scottda: Yeah, we had discussed that. Since that is Friday morning, shouldn't be a problem to fit it in somewhere. | 18:01 |
mriedem | but i'd like to come into the summit with, this is the cinder api(s), and how nova is going to use them | 18:01 |
ildikov | scottda: I felt earlier that there are still Cinder specific items where we don't have an agreement | 18:01 |
mriedem | so we can get consensus in person that those are the right things | 18:02 |
ildikov | mriedem: huge +1 | 18:02 |
ildikov | mriedem: do we need any info sharing before the summit with at least a rough plan? | 18:03 |
ildikov | mriedem: just that people have a basic idea of what we're planning to do and we could focus a bit more on some of the details | 18:03 |
mriedem | umm | 18:03 |
ildikov | mriedem: | 18:03 |
mriedem | yeah probably | 18:04 |
ildikov | just an idea/question | 18:04 |
mriedem | i don't want to come into the summit blind | 18:04 |
mriedem | could we do a hangout next week? | 18:04 |
ildikov | mriedem: sure! | 18:04 |
mriedem | basically from a nova pov i'd like the cinder team saying, yup we're on board with the proposed api and think it will work and we understand why | 18:04 |
scottda | mriedem: yes to hangout | 18:05 |
mriedem | if there are warts or unknowns we should talk those through in a hangout | 18:05 |
smcginnis | jgriffith: Think you'd be in good shape to discuss patches in a hangout next week? | 18:05 |
scottda | mriedem: And we have been prodding the cinder team on this. We can put something on this week's agenda as well. | 18:05 |
jgriffith | sure | 18:05 |
ildikov | mriedem: would that be a Cinder-Nova team Hangout? or this team and we might stream it to a broader audience? | 18:06 |
smcginnis | I would think this team. | 18:06 |
smcginnis | To start | 18:06 |
ildikov | ok, we will figure out the info sharing later then | 18:06 |
ildikov | I can either put up a Doodle poll or we re-use the meeting slot for a Hangout call | 18:07 |
mriedem | let's just use this slot i think | 18:07 |
mriedem | if it works for people | 18:07 |
smcginnis | That should work for me. | 18:07 |
ildikov | I'm happy to do both, but reusing the slot is easier than finding another one that works for mostly everyone | 18:08 |
scottda | now is good | 18:08 |
ildikov | then I'll just post a Hangout link here at the meeting time next week | 18:08 |
scottda | or an hour ago, to be exact | 18:08 |
ildikov | scottda: 167 hours later to be super exact :) | 18:09 |
ildikov | is there anything else that we should discuss today? | 18:10 |
ildikov | ok, I take it as a no :) | 18:11 |
ildikov | scottda: let's try to have this topic on the Cinder team's meeting agenda to get the team to the same page and move towards the inter-project agreements | 18:12 |
ildikov | and next Monday 1700UTC we will have a Hangouts call as opposed to the regular IRC meeting | 18:12 |
ildikov | thanks all!!! | 18:12 |
ildikov | #endmeeting | 18:13 |
*** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings" | 18:13 | |
openstack | Meeting ended Mon Oct 3 18:13:11 2016 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) | 18:13 |
openstack | Minutes: http://eavesdrop.openstack.org/meetings/cinder_nova_api_changes/2016/cinder_nova_api_changes.2016-10-03-17.00.html | 18:13 |
openstack | Minutes (text): http://eavesdrop.openstack.org/meetings/cinder_nova_api_changes/2016/cinder_nova_api_changes.2016-10-03-17.00.txt | 18:13 |
openstack | Log: http://eavesdrop.openstack.org/meetings/cinder_nova_api_changes/2016/cinder_nova_api_changes.2016-10-03-17.00.log.html | 18:13 |
*** docaedo has joined #openstack-meeting-cp | 18:33 | |
*** harlowja has quit IRC | 18:38 | |
*** uxdanielle has quit IRC | 18:39 | |
*** harlowja has joined #openstack-meeting-cp | 18:39 | |
*** rarcea has quit IRC | 19:19 | |
*** sdake has joined #openstack-meeting-cp | 19:22 | |
*** david-lyle has joined #openstack-meeting-cp | 19:47 | |
*** piet_ has quit IRC | 19:53 | |
*** sdake has joined #openstack-meeting-cp | 20:05 | |
*** david-lyle has quit IRC | 20:30 | |
*** david-lyle has joined #openstack-meeting-cp | 20:35 | |
*** uxdanielle has joined #openstack-meeting-cp | 20:44 | |
*** david-lyle has quit IRC | 20:44 | |
*** Rockyg has joined #openstack-meeting-cp | 20:59 | |
*** piet_ has joined #openstack-meeting-cp | 21:17 | |
*** david-lyle has joined #openstack-meeting-cp | 21:19 | |
*** piet_ has quit IRC | 21:35 | |
*** hpe-hj has joined #openstack-meeting-cp | 21:40 | |
*** hj-hpe has quit IRC | 21:42 | |
*** piet_ has joined #openstack-meeting-cp | 21:45 | |
*** gouthamr has quit IRC | 21:49 | |
*** piet_ has quit IRC | 21:52 | |
*** xyang1 has quit IRC | 22:00 | |
*** piet_ has joined #openstack-meeting-cp | 22:01 | |
*** piet_ has quit IRC | 22:04 | |
*** piet_ has joined #openstack-meeting-cp | 22:04 | |
*** david-lyle has quit IRC | 22:09 | |
*** mriedem has quit IRC | 22:13 | |
*** piet_ has quit IRC | 22:15 | |
*** sdake has quit IRC | 22:16 | |
*** sdake has joined #openstack-meeting-cp | 22:33 | |
*** david-lyle has joined #openstack-meeting-cp | 22:36 | |
*** tongli has quit IRC | 22:57 | |
*** markvoelker has quit IRC | 23:29 | |
*** david-lyle has quit IRC | 23:44 |
Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!