14:00:23 <alaski> #startmeeting Nova
14:00:25 <openstack> Meeting started Thu Dec 11 14:00:23 2014 UTC and is due to finish in 60 minutes.  The chair is alaski. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:26 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:29 <openstack> The meeting name has been set to 'nova'
14:00:31 <gilliard> Hello!
14:00:32 <alaski> ping mikal tjones cburgess jgrimm adrian_otto funzo mjturek jcook ekhugen irina_pov krtaylor danpb alexpilotti flip214 raildo jaypipes gilliard garyk
14:00:35 <_gryf> o/
14:00:36 <bauzas> \o
14:00:38 <edleafe> o/
14:00:42 <kaisers1> o/
14:00:43 <n0ano> o/
14:00:45 <jaypipes> o/ .5
14:00:52 <garyk> ack
14:00:54 <alaski> Hi everyone!
14:01:13 <sajeesh> hi all
14:01:18 <oomichi> hi
14:01:23 <alaski> #topic Kilo Specs
14:01:35 <alaski> Two reminders
14:01:44 <alaski> Tomorrow is a spec review day
14:01:49 <bauzas> _o/
14:01:51 <bauzas> \o/
14:02:02 <alaski> so please take some time to review what you can
14:02:15 <garyk> alaski: is there a list of specs that have higher priorities, or should we just go ahead and review what we can?
14:02:16 <alaski> And the spec approval deadline is December 18th
14:02:22 <bauzas> maybe mentioning priorities would be worth it ?
14:02:31 <bauzas> for spec review?
14:02:39 <alaski> There are some priorities at https://etherpad.openstack.org/p/kilo-nova-priorities-tracking
14:03:06 <garyk> if that had links to relevant specs then it would be a nice optimization for our time
14:03:33 <bauzas> garyk: some subteams did, some not
14:03:41 <alaski> There are some links there, but priority owners should double check that before tomorrow
14:03:46 <garyk> cool
14:03:47 <bauzas> +1
14:03:54 <ramineni1> hi, I have a question on spec reviews. can i ask now?
14:04:17 <garyk> sure, please go ahead
14:04:19 <alaski> ramineni1: if it's about the process sure.  if it's about a specific spec we have a slot for that
14:04:47 <ramineni1> alaski: it's about a specific spec, will wait for that. thanks
14:04:59 <alaski> cool
14:05:17 <alaski> Also a reminder that the feature freeze for non-priority spec work is February 5th
14:05:37 <bauzas> alaski: FPF ?
14:05:45 <bauzas> alaski: or Feature Freeze ?
14:05:50 <sdague> feature freeze
14:05:52 <alaski> Does anyone have any specs looking for fast track approval?
14:06:00 <alaski> yes, feature freeze
14:06:04 <bauzas> ack
14:06:30 <bauzas> alaski: should we bother you with links to specs having one +2 ?
14:06:32 <garyk> alaski: i am not sure if this is a candidate - https://review.openstack.org/#/c/127283/6/specs/kilo/approved/vmware-webmks-console.rst
14:07:03 <alaski> bauzas: maybe bring it up under stuck reviews, or open discussion
14:07:19 <alaski> garyk: looking real quick
14:07:50 <alaski> It mentions adding a new API so my first inclination is no
14:07:52 <bauzas> alaski: yeah, I asked that because of your previous comment about fast track approval
14:08:23 <alaski> bauzas: my bad.  I meant fast track re-approval
14:08:40 <alaski> Also spec free blueprint candidates
14:08:52 <garyk> alaski: in that case it is not relevant. tx
14:09:28 <alaski> garyk: ok
14:09:36 <alaski> #topic Kilo priorities
14:09:47 <alaski> A reminder that they can be found at https://etherpad.openstack.org/p/kilo-nova-priorities
14:10:01 <alaski> And open reviews for the priorities can be found at https://etherpad.openstack.org/p/kilo-nova-priorities-tracking
14:10:02 <jmulsow> #alaski: we could use some reviews on the server group remove spec. https://review.openstack.org/#/c/136487/
14:11:00 <alaski> Priority owners please look over the open reviews listed and try to have them updated for the spec review day tomorrow, and just updated in general
14:11:20 <alaski> jmulsow: ok.  You can bring it to open discussion or bring it up during the review day tomorrow
14:11:40 <bauzas> alaski: most of the owners are not here now, should we maybe give them an action item?
14:11:57 <bauzas> damn time differences, everyone should be CET time
14:12:16 <alaski> bauzas: heh.  How about I send a response to the spec day email reminding people?
14:12:24 <bauzas> alaski: sounds like a good plan
14:12:33 <garyk> alaski: mikal sent a mail to the list earlier
14:12:54 <garyk> but a friendly reminder would be great
14:13:01 <bauzas> sajeesh: you can't add a priority in the list, I'm sorry :(
14:13:06 <alaski> #action alaski send email reminding priority owners to update the list of reviews
14:13:21 <sajeesh> ok... I didn't know... sorry
14:13:38 <alaski> #topic Gate status
14:14:03 <alaski> Does anyone have updates for this?
14:14:16 <garyk> from the MS side things are up and running again (we had a few days of down time)
14:14:17 <gilliard> Gate's been OK although specs have needed a spurious rebase
14:14:23 * dansmith strolls in late
14:14:43 <alaski> gilliard: I just hit that on one of mine too
14:14:51 <bauzas> alaski: the categorization gate sounds quite good (>80%)
14:14:54 <gilliard> I've seen two which needed a post-approve rebase, yeah. So just keep an eye out, I suppose.
14:14:54 <alaski> dansmith: welcome
14:14:59 <bauzas> s/gate/ratio
14:15:20 <alaski> great
14:15:21 <bauzas> gilliard: there are some running bugs that can impact you
14:15:52 <alaski> sounds like things are generally ok right now then
14:16:06 <alaski> so we'll move on
14:16:12 <alaski> #topic Bugs
14:16:23 <kashyap> Hi
14:16:28 <alaski> Anything in particular we should look at?
14:16:46 <kashyap> This is about the live snapshot bug in Nova - https://bugs.launchpad.net/nova/+bug/1334398
14:16:49 <uvirtbot> Launchpad bug 1334398 in nova "libvirt live_snapshot periodically explodes on libvirt 1.2.2 in the gate" [High,Confirmed]
14:17:08 <kashyap> Quick context: This bug is rarely reproducible, only in the context of gate, if someone can get this reproduced in Gate somehow it'd be very useful.
14:17:11 <sdague> cburgess was going to take another spin at that
14:17:55 <kashyap> Great, and since I have the context, and test with that path enabled in my test env, if we can get a traceback of QEMU via gdb, I can follow up with the right QEMU block layer developers.
14:18:24 <sdague> kashyap: sounds great
14:18:45 <kashyap> sdague, What time zone does cburgess live in?
14:19:00 <sdague> UTC-8
14:19:13 <alaski> great, it would be really nice to get further on that one
14:19:36 <kashyap> alaski, Yes, it's about time, just that it's not 'fun' to get to the end of it - but it ought to be done.
14:20:01 <dansmith> I'm in UTC-8 and *I* am up, what's cburgess' excuse?
14:20:02 <alaski> yeah, it's really hard to debug something you can't reproduce reliably
14:20:13 <sdague> alaski: ... like all of openstack :)
14:20:26 <alaski> sdague: heh, quite true
14:20:34 <kashyap> dansmith, If we can get a QEMU instance with gdb enabled, and reproduce that bug, then we'll have the root cause of that issue - most likely
14:20:45 <alaski> Any other bugs we should mention?
14:21:25 <garyk> one question on bugs?
14:21:34 <alaski> sure
14:21:37 <garyk> do we know how many we close a week and how many are opened?
14:21:53 <garyk> that is, at this stage it seems people are working on new features, specs etc. should we be more focused on bugs?
14:22:29 <alaski> good question.  I don't have those numbers though
14:22:55 <n0ano> garyk there's a bug tracking page I intend to look into, with some history we can extract that info
14:23:10 <alaski> I think we could always be more focused on bugs, but keep in mind we're doing feature freeze earlier this cycle so we can spend more time on bugs
14:23:23 <n0ano> give me the stats we want and I can make sure they are available
14:23:27 <garyk> ah, that sounds good.
14:23:34 <gilliard> The Wednesday bug day seems to have a good effect http://status.openstack.org/bugday/
14:24:03 <alaski> #link http://status.openstack.org/bugday/
14:24:05 <sdague> gilliard: yeh, we should probably advertise it more.
14:24:13 <alaski> great page
14:24:19 <garyk> yikes, soon we are hitting 1K != 1024
14:24:23 <pawel-palucki> i am definitely on spending more time on bugs
14:24:29 <sdague> garyk: we were 1600 in Aug
14:24:30 <kashyap> sdague, Probably should be done on #openstack-nova channel itself,
14:24:39 <dansmith> it is
14:24:41 <kashyap> So others can see the 'context/theme of the day'
14:24:41 <alaski> kashyap: it is
14:24:44 <dansmith> we change the topic every wednesday
14:24:50 <kashyap> Okay, then probably I missed due to my tz :)
14:24:59 <alaski> dansmith is the designated banner waver for that
14:25:01 <bauzas> I would suggest an email rather
14:25:01 <dansmith> we change it when it's wednesday in .au,
14:25:05 <dansmith> and then back just now
14:25:09 <dansmith> so I think everyone should have seen it
14:25:25 <bauzas> because apparently most people don't usually read the chan topic
14:26:28 <sdague> this was an interesting one that came in this morning - https://bugs.launchpad.net/nova/+bug/1401437 - about how we call between services
14:26:29 <uvirtbot> Launchpad bug 1401437 in nova "nova passes incorrect authentication info to cinderclient" [High,Confirmed]
14:26:51 <alaski> #link https://bugs.launchpad.net/nova/+bug/1401437
14:27:44 <alaski> that doesn't look good
14:28:01 <sdague> yeh, it's part of the larger token scoping problems
14:28:17 <sdague> about when tokens run out, how do long running actions actually complete
14:28:19 <mriedem> oh fun i saw that one internally 2 weeks ago, at least he followed up with the LP report
14:28:39 <alaski> does this change with the new keystoneclient session work?
14:29:10 <sdague> I don't know
14:29:36 <alaski> okay
14:29:39 <bauzas> sdague: IIRC, Heat doesn't do it this way, but rather takes impersonated tokens
14:29:48 <alaski> If anyone can take a look at that one and help move it along, please do
14:30:00 <bauzas> sdague: so there is a change in how they expire
14:30:08 <sdague> bauzas: gotcha
14:30:12 <garyk> i wonder if we need a common layer which manages all of the different clients - but that may be complicated due to the fact that each service has its own way of doing things
14:30:24 <bauzas> sdague: but that's really unclear in my head, still need to remember what I was looking at one year ago
14:30:30 <sdague> garyk: that's sort of the point of keystone sessions
14:30:44 <sdague> anyway, we can move on
14:30:46 <mriedem> there were also service tokens
14:31:01 <mriedem> or am i thinking trusts? too many keystone features i've never used.
14:31:08 <mriedem> bknudson would know
14:31:11 <alaski> okay, moving on
14:31:19 <bauzas> mriedem: trusts do the job, that's what I call impersonated tokens
14:31:20 <alaski> but please add to the bug review with assistance
14:31:28 <alaski> #topic Stuck reviews
14:31:36 <sajeesh> alaski: Can I ask for help regarding my bp https://review.openstack.org/#/c/129420? I am afraid that things are still in a loop
14:31:37 <alaski> Pacemaker service group driver -- what level of CI do we require given that we have poor CI coverage of other drivers (such as zookeeper)?
14:31:49 <_gryf> yeah
14:32:05 <alaski> sajeesh: okay, we'll tackle that first
14:32:06 <_gryf> so there was my BP about a new driver for servicegroup
14:32:17 <sajeesh> ok,thanks
14:32:18 <alaski> _gryf: sorry, can we hold off one minute?
14:32:23 <_gryf> sure
14:32:41 <bauzas> alaski: oh
14:32:49 <bauzas> alaski: yeah, I remember that spec
14:32:57 <alaski> sajeesh: is there a particular concern that should be brought up?
14:33:14 <bauzas> alaski: so the problem is that it requires a specific setup for Pacemaker
14:33:27 <sajeesh> nested projects has been already implemented in keystone
14:33:33 <raildo> alaski, I'm developing the Hierarchical Multitenancy implementation in Keystone for kilo, and sajeesh's implementation of nested-project quota management will be very useful for us
14:33:36 <bauzas> oh, missed the ping from alaski about sajeesh's BP
14:33:37 <raildo> :)
14:33:51 <sajeesh> ++1
14:34:12 <alaski> sajeesh: does this just need more visibility?  or is there something contentious that could use resolution?
14:34:40 <sajeesh> I think more visibility is required
14:34:47 <alaski> okay
14:35:00 <alaski> #link https://review.openstack.org/#/c/129420
14:35:07 <sajeesh> alaski, I am afraid that things are still in a loop
14:35:09 <raildo> we have all patches about the HM base implementation merged to kilo-1, so if we can have something like Hierarchical Quotas to work with this implementation in Kilo, will be great
14:35:26 <alaski> reviewers please have a look
14:35:40 <alaski> sajeesh: you might want to bring this up in the Nova channel tomorrow during review day
14:35:52 <sajeesh> alaski:yes
14:35:54 <raildo> alaski, the other problem is that the bp doesn't have a milestone target. how can we work with this?
14:35:56 <raildo> #link https://blueprints.launchpad.net/nova/+spec/nested-quota-driver-api
14:36:11 <alaski> raildo: the spec needs to be approved before that can be set
14:36:20 <raildo> alaski, great :)
14:36:38 <alaski> _gryf: back to you
14:36:40 <sajeesh> alaski:what should I do from my part?
14:36:47 <_gryf> so. during review of my bp, it was revealed that there is no tempest coverage for drivers other than db
14:37:04 <raildo> alaski, I'll contribute reviewing other nova spec and patches too. Thanks a lot!
14:37:05 <bauzas> _gryf: yeah indeed
14:37:09 <alaski> sajeesh: hopefully just respond to review comments, and bring it up tomorrow
14:37:15 <kaisers1> quick newbie question: a spec needs two +2 for acceptance, right?
14:37:18 <sdague> _gryf: so we don't actually need tempest tests here
14:37:19 <bauzas> _gryf: but that's not a good reason for adding a new one
14:37:22 <alaski> kaisers1: yes
14:37:24 <sajeesh> alaski:thanks a lot :-)
14:37:29 <kaisers1> alaski: thnx
14:37:45 <alaski> sajeesh: kaisers1 np
14:37:46 <_gryf> bauzas: hm
14:37:47 <sdague> for instance, zookeeper actually has unit tests in tree... but the configuration of zookeeper on the unit tests nodes is missing
14:37:59 <bauzas> _gryf: actually, I'm more concerned about what kind of setup would require this driver
14:37:59 <sdague> but that's a thing we can do
14:38:12 <_gryf> bauzas: so is it a good reason for not adding tempest tests, or a good reason for not adding another servicegroup driver?
14:38:39 <_gryf> bauzas: in case of my bp - you're right
14:38:40 <bauzas> _gryf: I'm saying we first need to understand what's behind the scene for the Pacemaker driver, rather than talking about CI now :)
14:39:14 <_gryf> bauzas: it may be difficult to create reasonable tests for the pacemaker
14:39:22 <bauzas> _gryf: and depending on what it would be provided, Tempest tests could be necessary or not - but I don't know if that's really important
14:39:44 <bauzas> _gryf: I mean, that's not a new public API, neither a CLI command
14:39:48 <_gryf> bauzas: the more I'm digging into pacemaker itself, the more I'm convinced that this is a bad idea
14:40:04 <_gryf> since the implementation will have lots of calls to pacemaker's tools
14:40:07 <bauzas> _gryf: so we would rather do functional testing directly in Nova
14:40:32 <bauzas> _gryf: so I think the spec is not stuck - I just need more details :)
14:40:53 <bauzas> _gryf: I saw you commented back, I have to look again
14:41:02 <_gryf> bauzas: yeah
14:41:07 <alaski> it sounds like for now it might be okay to go with just in-tree tests
14:41:16 <sdague> can someone post the spec url for this? I seem to have missed it
14:41:31 <alaski> but it would be good to get zookeeper fully tested, and anything new like this should be fully tested as well
14:41:33 <bauzas> alaski: yeah, but as I said, that's not a problem of where to test
14:41:41 <_gryf> bauzas: but from what I understand of what mikal says, there is no infra for testing the other plugins either
14:41:51 <alaski> sdague: it's not on the agenda, so I don't have the link either...
14:41:54 <bauzas> alaski: the spec is requiring a specific Pacemaker setup, hence my wonders
14:42:12 <bauzas> lemme find it
14:42:29 <bauzas> https://review.openstack.org/#/c/139991/
14:42:33 <bauzas> #link https://review.openstack.org/#/c/139991/
14:43:06 <bauzas> anyway, I think it's really requiring more details, that's not really a stuck spec
14:43:08 <alaski> thanks
14:43:20 <alaski> yeah, it doesn't look stuck, but it could use some discussion
14:43:21 <_gryf> bauzas: ok, thanks.
14:43:32 <_gryf> alaski: indeed :)
14:44:18 <alaski> I think we can just comment on the review for now, and get to a ML post or bring it back here if necessary
14:44:30 <bauzas> +1
14:44:40 <alaski> next up
14:44:42 <alaski> Neutron API -- https://review.openstack.org/#/c/131413/
14:45:13 <alaski> to me this looks stuck waiting for the proposer
14:45:13 <gilliard> Yes, I added this. What do people think about this? It hasn't moved for a while but I get the sense that it's blocking bugfixes etc in that area
14:45:34 <gilliard> because everything's so hard to review
14:45:37 <garyk> alaski: yes, it looks like it is waiting for him.
14:45:50 <garyk> there are actually quite a lot of review in the neutron api file at the moment
14:45:56 <alaski> does someone want to reach out and see about taking this over?
14:45:59 <garyk> i am not sure if these are getting enough eyes.
14:46:19 <gilliard> eg https://review.openstack.org/#/c/135260/ eg https://review.openstack.org/#/c/126309/
14:46:19 <garyk> alaski: i would be happy to take this over, but would need to check with beagles
14:47:06 <gilliard> thanks garyk it would be really helpful to see this move.  I'm happy to help with the implementation.
14:47:22 <lxsli> same
14:47:25 <garyk> gilliard: i will reach out to him and see what his situation is.
14:47:33 <alaski> garyk: cool.  maybe just see if he's still working on this first, but it would be nice for someone to help move this forward
14:47:45 <garyk> alaski: sure, will do
14:47:52 <alaski> thanks
14:48:06 <sdague> also, realize, it's totally fair to take over someone's patch and rebase it for them if it's stuck in merge conflict
14:48:15 <mriedem> that's not the issue
14:48:19 <mriedem> beagles hasn't been replying
14:48:42 <sdague> that's the issue with - https://review.openstack.org/#/c/126309/ that gilliard posted
14:49:18 <sdague> sorry, was on the wrong piece of the thread :)
14:49:26 <gilliard> sdague: right - I meant that as just an example of a bugfix which is going slowly and would be easier to review if the work in the spec had been started.
14:49:34 <alaski> heh, it's a good reminder though
14:49:35 <sdague> gotcha
14:49:53 <alaski> alright, next up...
14:49:59 <alaski> Host health monitoring (https://review.openstack.org/#/c/137768/3) -- should Nova provide information on the host condition, and how?
14:50:01 <_gryf> alaski: ok, sorry, i've put it in the wrong section. should be in open discussion.
14:50:02 <mriedem> gilliard: the spec isn't holding up neutron api reviews, the neutron api just holds itself up :)
14:50:05 <mriedem> hence the spec
14:50:09 <gilliard> :)
14:50:11 <alaski> _gryf: no worries
14:50:16 <_gryf> alaski: :)
14:50:20 <bauzas> _gryf: I missed that spec, I left a quick comment
14:50:32 <alaski> I'm not convinced that host health monitoring is stuck either
14:50:40 <bauzas> alaski: +1
14:50:49 <alaski> but it could use some more voices on the review
14:50:54 <bauzas> I actually have one stuck spec :)
14:50:54 <kaisers1> more newbie questions: 'stuck' is a review that has one or more -1 and hasn't changed in a long time?
14:51:27 <bauzas> #link https://review.openstack.org/#/c/89893/ has a -2 because of a previous PS and needs the Code-Review vote removed
14:51:36 <mriedem> kaisers1: stuck could mean there is contention/gridlock and needs broader consensus
14:51:37 <alaski> stuck means it's not possible to make progress on it, and has been around for a while
14:51:40 <bauzas> but john is traveling these days
14:51:56 <kaisers1> ok, thnx again
14:52:08 <bauzas> so I'm a little worried about losing the benefits of the specs day if there is still a -2 against it, while it shouldn't :)
14:52:15 <_gryf> alaski: I think the title says it all. What do people think about putting such information in the nova?
14:52:16 <sdague> how long is john out? If we really need to we can get infra to reset a vote
14:52:26 <bauzas> alaski: so if you have magic power for removing this -2,it would be awesome
14:52:27 <alaski> bauzas: he should be back in tomorrow I believe
14:52:33 <alaski> bauzas: I do not :)
14:52:41 <alaski> sdague: he's traveling back home today
14:52:46 <sdague> bauzas: no, it's a direct database op to do that
14:52:47 <bauzas> internal email is considered as magic power
14:52:53 <bauzas> :)
14:53:03 <bauzas> sdague: yeah I know
14:53:15 <alaski> bauzas: I can ping him internally, and will do that
14:53:24 <bauzas> alaski: great thanks
14:53:41 <bauzas> alaski: again it's just a matter of not losing the spec review day
14:53:48 <alaski> bauzas: definitely
14:53:55 <alaski> #open discussion
14:53:59 <bauzas> alaski: I really understand that John can be a busy guy
14:54:00 <alaski> ramineni1: you have something to bring up?
14:54:09 <alaski> #topic open discussion
14:54:12 <ramineni1> ya :) need more reviews on https://review.openstack.org/#/c/136104/ and https://review.openstack.org/#/c/133534/ . Have one +2 on both of these and code changes are small for both the specs.
14:54:12 <alaski> heh
14:54:15 <ramineni1> can these be targeted for kilo-1?
14:54:32 <garyk> ramineni1: asking for reviews moves you to the back of the queue :)
14:54:42 <alaski> ramineni1: they get a target after the spec is approved
14:54:42 <sdague> at last check oomichi has got all of tempest passing on V2.1
14:54:53 * gilliard backspaces over a review request
14:55:04 <sdague> we have a job which configures devstack with v2.1 on the v2 endpoint, and it's now working
14:55:08 <alaski> sdague: awesome!
14:55:14 <garyk> sdague: thats great!
14:55:17 <mriedem> sdague: is that on experimental queue?
14:55:21 <mriedem> or all patches?
14:55:22 <sdague> mriedem: yes
14:55:24 <mriedem> k
14:55:27 <sdague> it's in experimental
14:55:40 <garyk> kudos oomichi
14:55:54 <oomichi> yeah, and I hope it will be non-experimental soon.
14:56:12 <alaski> was just about to ask.  is there a plan for moving it out of experimental
14:56:22 <alaski> just the standard "let it bake"?
14:56:26 <bauzas> oomichi: could you maybe put some notes about microversions in devref ?
14:56:38 <bauzas> oomichi: just to be clear for reviewing
14:56:47 <sdague> well microversions infrastructure is still merging
14:57:05 <bauzas> sdague: ok, so 2.1 only ? gotcha
14:57:13 <bauzas> that's still a big deal tho eh :)
14:57:16 <oomichi> bauzas: sorry, I am not sure that. I'd like to talk about it later.
14:57:28 <bauzas> oomichi: sure, ping me at the end of the meeting
14:57:44 <oomichi> bauzas: i got it.
14:58:18 <alaski> anything else open?
14:58:24 <gilliard> mriedem brought up ssl config last week. I put a POC here https://review.openstack.org/#/c/139672/ any feedback welcome :)
14:59:10 <alaski> did you send an email to the ML with that?
14:59:19 <gilliard> I believe so, yes
14:59:23 <alaski> cool
14:59:44 <garyk> i am sorry but i need to run. have a good weekend
14:59:46 <mateuszb> Do you know if there is a plan to get rid of cells-scheduler? I am just wondering how the requirement of this bp could be met using current nova-scheduler: https://review.openstack.org/#/c/140031/
15:00:02 <ramineni1> alaski: the specs proposed are useful for ironic .. is there anything I could do? i'm not sure about the blocker on the specs
15:00:02 <mriedem> gilliard: yeah, thanks, i'm way behind on reviews, have some other stuff to get done first then i hope to grind on reviews
15:00:16 <gilliard> np
15:00:28 <alaski> ramineni1: we're going to be tackling a lot of spec reviews tomorrow, bring the specs to the nova channel tomorrow
15:00:39 <bauzas> mateuszb: hi, let's discuss that in #openstack-nova if you want
15:00:48 <alaski> I don't have the power of early marks, so we have to end on time
15:00:49 <ramineni1> alaski: sure, thanks, will do that :)
15:00:54 <alaski> #endmeeting