*** gouthamr has joined #openstack-meeting-cp | 00:27 | |
*** knangia has quit IRC | 00:51 | |
*** lifeless_ is now known as lifeless | 01:00 | |
*** gouthamr has quit IRC | 01:16 | |
*** raj_sing- has joined #openstack-meeting-cp | 01:46 | |
*** edtubill has joined #openstack-meeting-cp | 02:33 | |
*** edtubill has quit IRC | 02:57 | |
*** gouthamr has joined #openstack-meeting-cp | 03:00 | |
*** gouthamr has quit IRC | 03:19 | |
*** knangia has joined #openstack-meeting-cp | 03:54 | |
*** rderose has quit IRC | 04:33 | |
*** markvoelker has quit IRC | 05:08 | |
*** markvoelker has joined #openstack-meeting-cp | 05:09 | |
*** markvoelker has quit IRC | 05:13 | |
*** reed has quit IRC | 05:30 | |
*** reed has joined #openstack-meeting-cp | 05:30 | |
*** aunnam has joined #openstack-meeting-cp | 05:42 | |
*** knangia has quit IRC | 06:01 | |
*** sheeprine has quit IRC | 06:04 | |
*** tonyb_ has joined #openstack-meeting-cp | 06:06 | |
*** sheeprine has joined #openstack-meeting-cp | 06:08 | |
*** kbyrne_ has joined #openstack-meeting-cp | 06:09 | |
*** markvoelker has joined #openstack-meeting-cp | 06:09 | |
*** hemna_ has joined #openstack-meeting-cp | 06:12 | |
*** tonyb has quit IRC | 06:13 | |
*** kbyrne has quit IRC | 06:13 | |
*** hemna has quit IRC | 06:13 | |
*** kbyrne_ is now known as kbyrne | 06:13 | |
*** markvoelker has quit IRC | 06:13 | |
*** sheeprine has quit IRC | 07:02 | |
*** reed has quit IRC | 07:03 | |
*** dmellado has quit IRC | 07:03 | |
*** reed has joined #openstack-meeting-cp | 07:04 | |
*** dmellado has joined #openstack-meeting-cp | 07:08 | |
*** sheeprine has joined #openstack-meeting-cp | 07:09 | |
*** reed has quit IRC | 07:17 | |
*** reed has joined #openstack-meeting-cp | 07:19 | |
*** markvoelker has joined #openstack-meeting-cp | 08:10 | |
*** rarcea has joined #openstack-meeting-cp | 08:12 | |
*** markvoelker has quit IRC | 08:14 | |
*** MarkBaker has quit IRC | 08:20 | |
*** david-lyle has quit IRC | 09:11 | |
*** david-lyle has joined #openstack-meeting-cp | 09:13 | |
*** david-lyle_ has joined #openstack-meeting-cp | 09:27 | |
*** david-lyle has quit IRC | 09:27 | |
*** MarkBaker has joined #openstack-meeting-cp | 10:04 | |
*** markvoelker has joined #openstack-meeting-cp | 10:11 | |
*** markvoelker has quit IRC | 10:16 | |
*** MarkBaker has quit IRC | 10:19 | |
*** reed has quit IRC | 11:26 | |
*** reed has joined #openstack-meeting-cp | 11:33 | |
*** raj_sing- has quit IRC | 11:36 | |
*** aunnam has quit IRC | 11:36 | |
*** MarkBaker has joined #openstack-meeting-cp | 11:40 | |
*** markvoelker has joined #openstack-meeting-cp | 12:13 | |
*** markvoelker has quit IRC | 12:17 | |
*** markvoelker has joined #openstack-meeting-cp | 12:46 | |
*** raj_singh has quit IRC | 12:56 | |
*** raj_singh has joined #openstack-meeting-cp | 12:57 | |
*** MarkBaker has quit IRC | 13:19 | |
*** MarkBaker has joined #openstack-meeting-cp | 13:20 | |
*** gouthamr has joined #openstack-meeting-cp | 13:21 | |
*** gouthamr has quit IRC | 14:44 | |
*** hemna_ is now known as hemna | 14:50 | |
*** MarkBaker has quit IRC | 15:07 | |
*** ttx has quit IRC | 15:44 | |
*** ttx has joined #openstack-meeting-cp | 15:46 | |
*** knangia has joined #openstack-meeting-cp | 15:46 | |
*** aunnam has joined #openstack-meeting-cp | 15:53 | |
*** lamt has joined #openstack-meeting-cp | 15:56 | |
*** david-lyle_ is now known as david-lyle | 15:58 | |
*** MarkBaker has joined #openstack-meeting-cp | 16:05 | |
*** mriedem has joined #openstack-meeting-cp | 16:55 | |
ildikov | #startmeeting cinder-nova-api-changes | 17:00 |
openstack | Meeting started Thu Mar 23 17:00:14 2017 UTC and is due to finish in 60 minutes. The chair is ildikov. Information about MeetBot at http://wiki.debian.org/MeetBot. | 17:00 |
openstack | Useful Commands: #action #agreed #help #info #idea #link #topic #startvote. | 17:00 |
*** openstack changes topic to " (Meeting topic: cinder-nova-api-changes)" | 17:00 | |
openstack | The meeting name has been set to 'cinder_nova_api_changes' | 17:00 |
ildikov | DuncanT ameade cFouts johnthetubaguy jaypipes takashin alaski e0ne jgriffith tbarron andrearosa hemna erlon mriedem gouthamr ebalduf patrickeast smcginnis diablo_rojo gsilvis xyang1 raj_singh lyarwood breitz | 17:00 |
jungleboyj | o/ | 17:00 |
smcginnis | o/ | 17:00 |
jungleboyj | smcginnis: Jinx | 17:00 |
mriedem | o/ | 17:00 |
jungleboyj | ildikov: Can you add me to your ping list please? | 17:01 |
ildikov | jungleboyj: sure | 17:01 |
johnthetubaguy | o/ | 17:01 |
jungleboyj | Thank you. | 17:01 |
ildikov | jungleboyj: done | 17:01 |
ildikov | hi all | 17:01 |
ildikov | let's start :) | 17:01 |
ildikov | so we have lyarwood's patches out | 17:02 |
ildikov | I saw debates on get_attachment_id | 17:02 |
ildikov | is there a chance to get to a conclusion on what to do with that? | 17:03 |
ildikov | johnthetubaguy: did you have a chance to catch mdbooth? | 17:03 |
mdbooth | He did | 17:03 |
johnthetubaguy | yeah, I think we are on the same page | 17:04 |
johnthetubaguy | it shouldn't block anything | 17:04 |
mriedem | i'm +2 on the bdm.attachment_id changes | 17:04 |
mriedem | or was | 17:04 |
johnthetubaguy | yeah, I am basically +2 as well, but mdbooth suggested waiting to see the first patch that reads the attachment_id, which I am fine with | 17:04 |
ildikov | mdbooth: johnthetubaguy: what's the conclusion? | 17:04 |
johnthetubaguy | basically waiting to get the patch on top of that that uses the attachment id | 17:05 |
johnthetubaguy | which is the detach volume patch, I think | 17:05 |
* mdbooth thought they looked fine, but saw no advantage merging them until they're used. Merging early risks a second db migration if we didn't get it right first time. | 17:05 | |
* mdbooth is uneasy about not having an index on attachment_id, but can't be specific about that without seeing a user. | 17:06 | |
*** MarkBaker has quit IRC | 17:06 | |
ildikov | is it a get_by_attachment_id that the detach patch uses? I think it's not | 17:06 |
ildikov | it seems like we're getting ourselves into a catch 22 with two cores on the +2 side | 17:07 |
mriedem | we need jgriffith's detach patch restored and rebased | 17:07 |
jgriffith | sorry... late | 17:07 |
mriedem | https://review.openstack.org/#/c/438750/ | 17:07 |
mriedem | https://review.openstack.org/#/c/438750/2/nova/compute/manager.py | 17:08 |
mriedem | that checks if the bdm.attachment_id is set and makes the detach call decision based on that | 17:08 |
mriedem | we list the BDMs by instance, not attachment id | 17:08 |
mriedem | but we check the attachment_id to see if we detach with the new api or old api | 17:08 |
johnthetubaguy | mriedem: +1 | 17:08 |
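For context, the branch being described above looks roughly like the following — a minimal sketch, assuming the names used in the reviews (attachment_id on the BDM, and attachment_delete / terminate_connection / detach on Nova's Cinder API wrapper); it is not the merged code:

```python
# Hedged sketch of the detach decision: BDMs are listed by instance,
# and the presence of bdm.attachment_id alone picks old vs. new flow.

def detach_volume(volume_api, context, bdm, connector):
    if bdm.attachment_id is not None:
        # New flow: the volume was attached through the new Cinder
        # attachment API, so one call tears the attachment down.
        volume_api.attachment_delete(context, bdm.attachment_id)
    else:
        # Old flow: legacy two-step detach for pre-existing attachments.
        volume_api.terminate_connection(context, bdm.volume_id, connector)
        volume_api.detach(context, bdm.volume_id)
```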
jgriffith | mriedem that's wrong :) | 17:09 |
ildikov | mriedem: but that will not get the get_by_attachment_id usage for us | 17:09 |
mriedem | ildikov: we don't need get_by_attachment_id | 17:09 |
mriedem | jgriffith: why is that wrong? | 17:09 |
jgriffith | mriedem never mind | 17:10 |
ildikov | mriedem: I thought that's why we hold the attachment_id patches | 17:10 |
jgriffith | mriedem I don't think I'm following your statement... going through it again | 17:10 |
mriedem | jgriffith: so are you planning to restore and rebase that on top of https://review.openstack.org/#/c/437665/ | 17:10 |
mriedem | ildikov: that's why mdbooth wanted to hold them, | 17:10 |
jgriffith | mriedem when it looks like it's going to land yes | 17:10 |
mriedem | which i don't agree with | 17:11 |
jgriffith | mriedem wait... don't agree with what? | 17:11 |
mriedem | jgriffith: i'm ready to go with those bdm.attachment_id changes | 17:11 |
mriedem | sorry, | 17:11 |
mriedem | i don't agree that we need to hold the bdm.attachment_id changes | 17:11 |
ildikov | mriedem: I'm on that page too | 17:11 |
mriedem | they are ready to ship | 17:11 |
mriedem | https://review.openstack.org/#/c/437665/ and below | 17:11 |
ildikov | johnthetubaguy: ^^? | 17:11 |
jgriffith | mriedem then yes, I'll do that today, I would LOVE to do it after it merges if possible :) | 17:11 |
mdbooth | mriedem: I don't necessarily disagree, btw. They're just not being used yet, so I thought we might as well merge them later. | 17:11 |
jgriffith | so in an hour or so :) | 17:11 |
mriedem | mdbooth: they aren't being used b/c jgriffith is waiting for us to merge them | 17:12 |
mriedem | hence the catch-22 | 17:12 |
mriedem | so yeah we agree, let's move forward | 17:12 |
mdbooth | mriedem: Does jgriffith have a nova patch which uses them? | 17:12 |
mdbooth | These aren't exposed externally yet. | 17:12 |
jgriffith | mdbooth I did yes | 17:12 |
ildikov | mriedem: thanks for the confirmation :) | 17:12 |
mriedem | mdbooth: https://review.openstack.org/#/c/438750/ will yes | 17:12 |
johnthetubaguy | I could go either way, merge now, or rebase old patch on top. Doing neither I can't subscribe to. | 17:12 |
ildikov | johnthetubaguy: mriedem: let's ship it | 17:13 |
ildikov | johnthetubaguy: mriedem: one item done, move to the next one | 17:13 |
* mdbooth is confused. That patch can't use attachment_id because it passed tests. | 17:13 | |
mdbooth | And it doesn't have attachment_id | 17:13 |
mriedem | mdbooth: jgriffith and lyarwood originally wrote duplicate changes | 17:14 |
mriedem | at the PTG | 17:14 |
mriedem | or shortly after | 17:14 |
mriedem | jgriffith's patch ^ relied on his version of bdm.attachment_uuid | 17:14 |
ildikov | mdbooth: to be honest the detach flow is currently built on the assumption that we don't have attachment_id set, and therefore it uses the old detach flow | 17:14 |
mriedem | we dropped those and went with lyarwood's | 17:14 |
mdbooth | Ok :) | 17:14 |
mriedem | so jgriffith will have to rebase and change attachment_uuid to attachment_id in his change | 17:14 |
mriedem | what's next? | 17:15 |
ildikov | mriedem: so we ship or hold now? | 17:15 |
mriedem | we merge lyarwood's bdm.attachment_id changes | 17:16 |
ildikov | mriedem: awesome, thanks | 17:16 |
hemna | sweet | 17:16 |
mriedem | i plan on going over johnthetubaguy's spec again this afternoon but honestly, | 17:16 |
mriedem | at this point, barring any calamity in there, | 17:16 |
mriedem | i'm going to just approve it | 17:16 |
jungleboyj | mriedem: +2 | 17:16 |
ildikov | mriedem: I would just ship that too | 17:16 |
ildikov | mriedem: looks sane to me and we cannot figure everything out on paper anyhow | 17:17 |
johnthetubaguy | we all agree the general direction now, so yeah, lets steam ahead and deal with the icebergs as they come | 17:17 |
ildikov | johnthetubaguy: +1 | 17:17 |
ildikov | we need to get the cinderclient version discovery patch back to life | 17:18 |
* ildikov looks at jgriffith :) | 17:18 | |
jgriffith | ildikov johnthetubaguy so my understanding is you guys wanted that baked in to the V3 detach patch | 17:18 |
jgriffith | so I plan on that being part of the refactor against lyarwood 's changes | 17:19 |
mriedem | i'm not sure we need that, | 17:19 |
mriedem | if the bdm has attachment_id set, | 17:19 |
mriedem | it means we attached with new enough cinder | 17:19 |
mriedem | so on detach, if we have bdm.attachment_id, i think we assume cinder v3 is there and just call the delete_attachment method or whatever | 17:19 |
johnthetubaguy | so I am almost +2 on that refactor patch, FWIW, there is just a tiny nit left | 17:19 |
ildikov | mriedem: we need that some time before implementing attach | 17:20 |
mriedem | the version checking happens in the attach method | 17:20 |
mriedem | s/method/patch/ | 17:20 |
jgriffith | mriedem that works | 17:20 |
mriedem | ildikov: | 17:20 |
mriedem | yes it comes before the attach patch | 17:20 |
mriedem | which is the end | 17:20 |
ildikov | mriedem: ok, then we're kind of on the same page I think | 17:20 |
johnthetubaguy | yeah, attach is version negotiation time, everyone else just checks attachment_id to see if they have to use the new version or not | 17:20 |
jgriffith | and just hope something with the stupid migrate or HA stuff didn't happen :) | 17:20 |
mriedem | well, i think we'd get a 400 back if the microversion isn't accepted | 17:21 |
mriedem | and we fail the detach | 17:21 |
jgriffith | mriedem correct | 17:21 |
mriedem | we can make the detach code smarter wrt version handling later if needed | 17:21 |
johnthetubaguy | thats all legit, seems fine | 17:21 |
jgriffith | mriedem and I can revert to the old call in the cinder method itself | 17:21 |
mriedem | i just wouldn't worry about that right now | 17:21 |
johnthetubaguy | no revert, just fail hard | 17:21 |
mriedem | right hard fail | 17:21 |
jgriffith | ummm... ok, you's the bosses | 17:21 |
mriedem | we want to eventually drop the old calls, | 17:22 |
johnthetubaguy | so we only get in that mess if you downgraded Cinder after creating the attachment | 17:22 |
mriedem | so once we attach new path, we detach new path | 17:22 |
mriedem | or fail | 17:22 |
mriedem | yeah if you downgrade cinder, well, | 17:22 |
mriedem | that's on you | 17:22 |
johnthetubaguy | right, dragons | 17:22 |
ildikov | johnthetubaguy: if someone does that then I would guess they might have other issues too anyhow... | 17:22 |
johnthetubaguy | ildikov: right, its just not... allowed | 17:23 |
jungleboyj | johnthetubaguy: I think that is a very unlikely edge case. | 17:23 |
jungleboyj | Something has really gone wrong in that case. | 17:23 |
mriedem | ok what's next? | 17:23 |
ildikov | certainly not something we would want to support or encourage people to do | 17:23 |
johnthetubaguy | ... so we fail hard in detach, if attachment_id is present, but the microversion request goes bad, simples | 17:23 |
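A sketch of that rule, under the same illustrative naming as above — the point is what is deliberately absent:

```python
def detach_new_flow(volume_api, context, bdm):
    # If Cinder rejects the request -- e.g. a 400 because the required
    # microversion is no longer offered after a Cinder downgrade -- the
    # exception simply propagates and the detach fails hard; there is
    # deliberately no fallback to the legacy terminate_connection /
    # detach calls once an attachment_id exists.
    volume_api.attachment_delete(context, bdm.attachment_id)
```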
mriedem | i think the detach patch is the goal for next week yes? | 17:23 |
ildikov | I think we need to get detach working | 17:24 |
mriedem | is there anything else? | 17:24 |
johnthetubaguy | the refactor is the question | 17:24 |
mriedem | this? https://review.openstack.org/#/c/439520/ | 17:24 |
johnthetubaguy | we could try to merge lyarwood's refactor first, I think it's crazy close to being ready after mdbooth's hard work reviewing it | 17:24 |
johnthetubaguy | thats the one | 17:25 |
ildikov | johnthetubaguy: ah, right, the detach refactor | 17:25 |
johnthetubaguy | I think it will make the detach patch much simpler if we get the refactor in first | 17:25 |
johnthetubaguy | mdbooth: whats your take on that ^ | 17:25 |
mriedem | ok i haven't looked at the refactor at all yet | 17:25 |
* mdbooth reads | 17:26 | |
johnthetubaguy | to be clear, its just the above patch, and not the follow on patch | 17:26 |
mriedem | jgriffith's detach patch is pretty simple, it's just a conditional on the bdm.attachment_id being set or not | 17:26 |
ildikov | johnthetubaguy: I think lyarwood is out of office this week, so if that's the only thing missing we can add that and get this one done too | 17:26 |
johnthetubaguy | mriedem: hmm, I should double check why I thought it would be a problem | 17:27 |
mdbooth | I haven't had a chance to look at lyarwood's updated refactor in detail yet (been buried downstream for a few days), but when I glanced at it, it looked like a very simple code motion which brings the code in line with attach. | 17:27 |
mdbooth | It looked low risk and obvious (barring bugs). Don't think I've seen the other patches. | 17:28 |
johnthetubaguy | so, I remember now... | 17:28 |
johnthetubaguy | https://review.openstack.org/#/c/438750/2/nova/compute/manager.py | 17:28 |
johnthetubaguy | we have many if statements | 17:28 |
johnthetubaguy | I think after lyarwood's patch we have many fewer, or at least, they are in a better place | 17:29 |
johnthetubaguy | if thats not the case, then I think the order doesn't matter | 17:29 |
mdbooth | johnthetubaguy: Right. All the mess is in one place. | 17:30 |
mdbooth | Centralised mess > distributed mess | 17:30 |
johnthetubaguy | yeah, mess outside the compute manager | 17:30 |
mriedem | johnthetubaguy: yeah that's true | 17:30 |
mriedem | ok so i'm cool with that | 17:30 |
mriedem | if you guys think the refactor is close | 17:30 |
ildikov | +1 | 17:31 |
johnthetubaguy | so in summary, I guess that means jgriffith to rebase on top of https://review.openstack.org/#/c/439520 | 17:32 |
johnthetubaguy | ? | 17:32 |
mriedem | johnthetubaguy: mdbooth: are one of you going to address the remaining nit? | 17:32 |
mriedem | it's just defining the method in the base class right? | 17:33 |
mriedem | we should be using ABCs there...but that's another story | 17:33 |
mdbooth | mriedem: It's open in my browser. I'll probably look in the morning. | 17:33 |
mriedem | mdbooth: ok | 17:33 |
mriedem | otherwise i'll address johnthetubaguy's comment | 17:33 |
mriedem | and he can +2 if he's happy | 17:33 |
johnthetubaguy | I was going to do that now | 17:33 |
mriedem | ok | 17:33 |
mriedem | yes so let's rebase https://review.openstack.org/#/c/438750/ on top of https://review.openstack.org/#/c/439520 | 17:34 |
mriedem | and the goal for next week is detach being done | 17:34 |
johnthetubaguy | #info let's rebase https://review.openstack.org/#/c/438750/ on top of https://review.openstack.org/#/c/439520 and the goal for next week is detach being done | 17:34 |
jgriffith | mriedem johnthetubaguy thanks, that's the one I pointed out this morning when I tried to hijack your other meeting :) | 17:34 |
ildikov | mriedem: johnthetubaguy: +1, thanks | 17:35 |
ildikov | ok, it seems we're done for today | 17:36 |
ildikov | anything else from anyone? | 17:36 |
mriedem | ok, so let's be done with multiattach talk for this meeting. | 17:36 |
mriedem | yes | 17:36 |
mriedem | at the ptg, we talked about a cinder imagebackend in nova | 17:36 |
mriedem | mdbooth is interested in that as well, | 17:36 |
mriedem | but we need a spec, | 17:36 |
mriedem | so i'm wondering if mdbooth or jgriffith would be writing a spec | 17:36 |
jgriffith | mriedem so I have some grievances with that approach :) | 17:37 |
smcginnis | Time for the airing of grievances. | 17:37 |
mriedem | air them out | 17:37 |
jgriffith | bring out the aluminum pole | 17:37 |
mriedem | i'll get the festivus pole | 17:37 |
smcginnis | ;) | 17:37 |
jgriffith | Here's the thing... I know you all hate flavor extra-specs, but lord it's simple | 17:38 |
mdbooth | jgriffith: I missed the PTG unfortunately. Do you want to have a detailed chat tomorrow? | 17:38 |
jgriffith | more importantly, there's a side effect of the cinder image-backend that I didn't think of | 17:38 |
jgriffith | it's all or nothing | 17:38 |
jgriffith | so you do that and you lose the ability to do ephemerals if you wanted to | 17:38 |
mriedem | i thought your whole use case was "i have cinder for storage, i want to use it for everything" | 17:38 |
mriedem | "i don't have any local storage on my computes" | 17:38 |
jgriffith | which meh... ok, I could work around that | 17:39 |
jgriffith | That's one use case yes | 17:39 |
johnthetubaguy | is this per compute node? I think thats OK? | 17:39 |
mriedem | yes it's per compute node | 17:39 |
mriedem | so i think that's fine | 17:39 |
jgriffith | so that's my question :) | 17:39 |
jgriffith | that addresses my customers that have ceph... | 17:39 |
mriedem | if you really want ephemeral, then you have hosts with ephemeral | 17:39 |
johnthetubaguy | thats totally fine right, some flavors go to a subset of the boxes | 17:39 |
mriedem | and put them in aggregates | 17:39 |
mriedem | right | 17:39 |
jgriffith | BUT, how do I schedule... because it seems you don't like scheduler hints either ;) | 17:40 |
mriedem | you have your cheap flavors go to the ephemeral aggregates | 17:40 |
mriedem | and your "enterprise" flavors go to the cinder-backend hosts | 17:40 |
jgriffith | mriedem so we use flavors? | 17:40 |
mriedem | jgriffith: the flavors tie to the aggregates | 17:40 |
mriedem | yes | 17:40 |
jgriffith | isn't that how I started this? | 17:40 |
mriedem | no | 17:40 |
johnthetubaguy | jgriffith: the approach we are talking about uses the normal APIs, no hints required | 17:40 |
mriedem | you started by baking the bdm into extra specs which we're not doing | 17:40 |
mriedem | right, this is all normal stuff | 17:40 |
jgriffith | mriedem flavor extra-specs TBC | 17:41 |
mdbooth | Incidentally, I've been trying to get the virtuozzo folks to implement this for their stuff: I'd like imagebackend to be stored in metadata attached to the bdm | 17:41 |
mdbooth | To me, this is the most sane way for this stuff to co-exist | 17:41 |
mriedem | jgriffith: but standard extra specs that are tied to aggregate metadata | 17:41 |
mdbooth | Having it hard-coded in nova.conf is extremely limiting | 17:41 |
mriedem | we also want the cinder imagebackend for other storage drivers, like scaleio | 17:41 |
mriedem | and i'm sure veritas at some point | 17:42 |
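For readers following along, the aggregate/flavor mechanism referenced here works conceptually like this — a simplified sketch, not Nova's actual AggregateInstanceExtraSpecsFilter implementation:

```python
# Conceptual sketch: a host passes the scheduler filter only if every
# scoped extra spec on the flavor matches the metadata of an aggregate
# the host belongs to.

def host_passes(host_aggregate_metadata, flavor_extra_specs):
    for key, wanted in flavor_extra_specs.items():
        scope, _, name = key.partition(':')
        if scope != 'aggregate_instance_extra_specs':
            continue  # not a key this filter owns
        if host_aggregate_metadata.get(name) != wanted:
            return False
    return True

# e.g. an "enterprise" flavor carrying
#     {'aggregate_instance_extra_specs:storage': 'cinder'}
# lands only on hosts in an aggregate with metadata storage=cinder,
# while cheap flavors point at the ephemeral aggregates.
```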
johnthetubaguy | mdbooth: so there is a thing where we should support multiple ephemeral backends on a single box at some point, and we totally need that, I just don't think thats this (I could be totally wrong) | 17:42 |
mdbooth | I believe there are going to be mixed use cases. For example, HPC where you want some of your storage to be local and really fast, even at the cost of persistence. | 17:43 |
mriedem | we talked about multiple ephemeral backends at the ptg didn't we? | 17:43 |
mriedem | the vz guys were talking about something like that, but i might be misremembering the use case | 17:43 |
mdbooth | Well the cinder imagebackend would be all or nothing as it stands | 17:43 |
mdbooth | Because it's always all or nothing right now | 17:44 |
johnthetubaguy | so the HPC folks care about resource usage being optimal, so they would segregate the workloads that want SSD, and just use volumes for the extra storage, I think... | 17:44 |
mriedem | right, | 17:44 |
johnthetubaguy | but thats probably a rat hole we might want to dig out of | 17:44 |
mdbooth | mriedem: Yes, the vz guys want to add a new hard-coded knob to allow multiple backends | 17:44 |
mriedem | i think if you have different classes of storage needs you have aggregates and flavors | 17:44 |
mdbooth | I think that's just adding mess to mess | 17:44 |
johnthetubaguy | jgriffith: do you still support local volumes? | 17:44 |
mriedem | johnthetubaguy: they don't | 17:44 |
mriedem | that came up again in the ML recently | 17:44 |
johnthetubaguy | ah, that got taken out | 17:44 |
mriedem | i can dig up the link | 17:45 |
jgriffith | johnthetubaguy I don't know what you mean | 17:45 |
mriedem | LocalBlockDevice | 17:45 |
mriedem | or whatever cinder called it | 17:45 |
johnthetubaguy | yeah, like a qcow2 on local filesystem | 17:45 |
jgriffith | Oh! | 17:45 |
jgriffith | No, we removed that | 17:45 |
jgriffith | I'm happy to entertain a better way to do it, but not for Pike | 17:45 |
smcginnis | Technically, it's still there and deprecated and will be removed in Q. | 17:45 |
johnthetubaguy | so... I kinda see a world where all storage is in Cinder only, once we add that back in (properly) | 17:45 |
jgriffith | when I say entertain, I mean implement | 17:45 |
jgriffith | I already know how to do it | 17:45 |
johnthetubaguy | but the cinder ephemeral backend seems a good starting place | 17:46 |
jgriffith | johnthetubaguy I agree with that | 17:46 |
smcginnis | johnthetubaguy: +1 | 17:46 |
mdbooth | +1 | 17:46 |
jgriffith | I'm just concerned WRT if we'll get there or not | 17:46 |
johnthetubaguy | so if we don't write the spec, we're certain not to get there | 17:46 |
mriedem | http://lists.openstack.org/pipermail/openstack-dev/2016-September/104479.html is the thread | 17:46 |
jgriffith | Looking at the *tricks* some backends have used to leverage Nova instances is disturbing | 17:46 |
jgriffith | johnthetubaguy touché | 17:47 |
johnthetubaguy | jgriffith: I had to say that, the urge was too strong :) | 17:47 |
jgriffith | I was hoping to make one last ditch effort at a plea for flavor extra-specs... but I see that ship has certainly sailed and I fell off the pier | 17:47 |
johnthetubaguy | mriedem: thanks thats the one | 17:47 |
mriedem | http://lists.openstack.org/pipermail/openstack-dev/2017-February/112847.html was picking it back up during the PTG | 17:47 |
mriedem | i've also asked jaypipes to chime in on that thread, | 17:47 |
jgriffith | johnthetubaguy I would've thought less of you if you hadn't said it ;) | 17:47 |
mriedem | because it came up in the nova channel again yesterday | 17:48 |
mriedem | and the answer was traits and placement, | 17:48 |
mriedem | but i'm not sure how that works yet in that scenario | 17:48 |
mriedem | i need the answer to this http://lists.openstack.org/pipermail/openstack-dev/2017-February/112903.html | 17:49 |
johnthetubaguy | mriedem: yeah, we get into ensemble scheduler that was proposed at one point, but lets not go there, but worth finding the notes on that | 17:49 |
mriedem | i think the answer during the PTG was, | 17:50 |
mriedem | you have a resource provider aggregate, | 17:50 |
mriedem | and the compute and storage pool are in the aggregate, | 17:50 |
johnthetubaguy | mriedem: I think we need the aggregate to have both resource providers in them | 17:50 |
johnthetubaguy | and a resource type that matches the volume provider | 17:50 |
mriedem | and there is a 'distance' attribute on the aggregate denoting the distance between providers, | 17:50 |
mriedem | and if you want compute and storage close, then you request the server with distance=0 | 17:50 |
johnthetubaguy | so there is a spec about mapping out the DC "nearness" in aggregates | 17:50 |
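To make that concrete, a purely hypothetical illustration of the idea as retold here — placement has no 'distance' attribute, so every key below labeled as such is invented for the example:

```python
# Hypothetical only: compute and storage resource providers share an
# aggregate, and the request asks for them to be co-located.
allocation_request = {
    'resources': {'VCPU': 4, 'MEMORY_MB': 8192, 'DISK_GB': 100},
    'member_of': 'rack-42-aggregate',  # hypothetical aggregate name
    'distance': 0,                     # hypothetical: 0 = co-located
}
```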
jgriffith | good lord | 17:51 |
johnthetubaguy | we can't escape that one forever | 17:51 |
jgriffith | Look.. there's another idea you might want to consider | 17:51 |
mriedem | johnthetubaguy: i think that's essentially what jay told me this would be | 17:51 |
jgriffith | We land the cinder image-backend | 17:51 |
johnthetubaguy | mriedem: yeah, sounds like the same plan | 17:51 |
jgriffith | Have a cinder-agent or full cinder-volume service on each compute node... | 17:51 |
jgriffith | If you want something fast, place on said node with cinder capacity, skip the iscsi target and use it | 17:52 |
jgriffith | but the fact is, all this talk about "iscsi isn't fast enough" has proven false in most cases | 17:52 |
johnthetubaguy | we are really just talking about how that gets modelled in the scheduler, we are kinda 60% there right now I think | 17:52 |
jgriffith | johnthetubaguy I know, but I'm suggesting before doing the other 40% we consider an alternative | 17:52 |
mriedem | should we suggest this as a forum topic? | 17:53 |
johnthetubaguy | jgriffith: depends on how much you spend on networking vs the speed you want, agreed its possible right now | 17:53 |
mriedem | the "i want to use cinder for everything" thing has come up a few different ways | 17:53 |
johnthetubaguy | mriedem: we might want to draw this up a bit more crisply first, but yeah, I think we should | 17:53 |
jgriffith | mriedem I'm willing to try and discuss it | 17:53 |
jgriffith | mriedem but honestly if it's like other topics that's already decided I don't want to waste anyone's time, including my own | 17:53 |
* johnthetubaguy sits on his hands so he doesn't say he will try to write this up | 17:53 | |
mriedem | well the point of the forum session is to get ops and users in the room too | 17:54 |
jungleboyj | mriedem: ++ | 17:54 |
mriedem | anyway, i can take a todo to try and flesh out something for that | 17:54 |
johnthetubaguy | mriedem: I am happy to review that | 17:54 |
johnthetubaguy | so the baby step towards that is Cinder backend ephemeral driver | 17:54 |
mriedem | yeah | 17:55 |
johnthetubaguy | which helps the "diskless" folks | 17:55 |
jgriffith | johnthetubaguy the good thing is that certainly does open up many more long term possibilities | 17:55 |
johnthetubaguy | it seems aligned with our strategic direction | 17:55 |
jungleboyj | There has been enough request around the idea that we should keep trying to move something forward. | 17:55 |
* johnthetubaguy puts down management books | 17:55 | |
jgriffith | jungleboyj which request? | 17:56 |
jgriffith | or... I mean "which idea" | 17:56 |
mriedem | using cinder for everything | 17:56 |
jgriffith | local disk? Ephemeral Cinder? Cinder image-backend | 17:56 |
mriedem | always boot from volume | 17:56 |
jgriffith | mriedem thank you | 17:56 |
mriedem | 3 minutes left | 17:57 |
jgriffith | I get distracted easily | 17:57 |
jungleboyj | Ephemeral Cinder | 17:57 |
johnthetubaguy | root_disk = -1, etc | 17:57 |
johnthetubaguy | so next steps... | 17:57 |
johnthetubaguy | forum session, and nova spec on cinder ephemeral back end? | 17:57 |
johnthetubaguy | mriedem takes the first one | 17:57 |
jungleboyj | So, the plan is that the question of being able to co-locate volumes and instances will be handled by the placement service? | 17:57 |
johnthetubaguy | the second one? | 17:57 |
mriedem | yup | 17:57 |
mriedem | jungleboyj: that will be part of the forum discussion i think | 17:58 |
jungleboyj | mriedem: Great | 17:58 |
mriedem | who's going to start the nova spec for the cinder image backend? | 17:58 |
jgriffith | mriedem I'll do it | 17:58 |
jgriffith | I promise | 17:58 |
mriedem | ok, work with mdbooth | 17:58 |
mriedem | he's our imagebackend expert | 17:58 |
jgriffith | mdbooth Lucky you :) | 17:58 |
johnthetubaguy | I want to review both of those things when they exist | 17:58 |
jgriffith | mdbooth I meant that you get to work with me, not an enviable task | 17:59 |
mdbooth | jgriffith: Hehe | 17:59 |
johnthetubaguy | #action mriedem to raise forum session about cinder managing all storage, scheduling, etc | 18:00 |
johnthetubaguy | #action jgriffith to work with mdbooth on getting a spec together for the cinder image backend | 18:00 |
ildikov | and a timeout warning | 18:00 |
johnthetubaguy | something like that? | 18:00 |
mriedem | yes | 18:00 |
mriedem | works for me | 18:00 |
johnthetubaguy | sweet | 18:00 |
mriedem | times up | 18:00 |
johnthetubaguy | I guess we are done done | 18:00 |
jungleboyj | :-) | 18:00 |
ildikov | awesome, thank you all! | 18:01 |
smcginnis | thanks | 18:01 |
ildikov | great meeting today :) | 18:01 |
ildikov | see you next week :) | 18:01 |
ildikov | #endmeeting | 18:01 |
*** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings" | 18:01 | |
openstack | Meeting ended Thu Mar 23 18:01:26 2017 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) | 18:01 |
openstack | Minutes: http://eavesdrop.openstack.org/meetings/cinder_nova_api_changes/2017/cinder_nova_api_changes.2017-03-23-17.00.html | 18:01 |
openstack | Minutes (text): http://eavesdrop.openstack.org/meetings/cinder_nova_api_changes/2017/cinder_nova_api_changes.2017-03-23-17.00.txt | 18:01 |
openstack | Log: http://eavesdrop.openstack.org/meetings/cinder_nova_api_changes/2017/cinder_nova_api_changes.2017-03-23-17.00.log.html | 18:01 |
*** mriedem has left #openstack-meeting-cp | 18:01 | |
jungleboyj | Thanks ildikov ! | 18:01 |
*** rarcea has quit IRC | 20:08 | |
*** sdague has quit IRC | 21:10 | |
*** lamt has quit IRC | 21:10 | |
*** lamt has joined #openstack-meeting-cp | 21:12 | |
*** MarkBaker has joined #openstack-meeting-cp | 21:28 | |
*** lamt has quit IRC | 21:28 | |
*** scottda has quit IRC | 22:04 |