19:00:38 <devananda> #startmeeting ironic
19:00:39 <lucasagomes> o/
19:00:40 <openstack> Meeting started Mon Jul 14 19:00:38 2014 UTC and is due to finish in 60 minutes.  The chair is devananda. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:00:41 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:00:41 <yuriyz> o/
19:00:43 <openstack> The meeting name has been set to 'ironic'
19:00:46 <devananda> hi all!
19:00:58 <NobodyCam> hi
19:01:00 <adam_g> o/
19:01:02 <devananda> as usual, our agenda is available up here:
19:01:03 <devananda> #link https://wiki.openstack.org/wiki/Meetings/Ironic
19:01:08 <devananda> #chair NobodyCam
19:01:09 <openstack> Current chairs: NobodyCam devananda
19:01:12 <dtantsur> o/
19:01:19 <rloo> hi
19:01:34 <devananda> #topic announcements
19:01:39 <Nisha> o/
19:01:40 <mrda> \o
19:02:03 <devananda> quick call out again for our mid cycle meetup coming in just two weeks!
19:02:35 <devananda> I'd like to organize a small pre-meetup pizza outing on Sunday - sign up on the etherpad if you're interested
19:02:37 <NobodyCam> #link https://etherpad.openstack.org/p/juno-ironic-sprint
19:02:41 <GheRivero> o/
19:02:55 <lucasagomes> hah nice
19:02:57 <matty_dubs> My flight gets in super-late so I won't be able to make it for pizza :'(
19:03:10 <devananda> we should start thinking about what our goals for the midcycle are -- not in this meeting, but on the etherpad above :)
19:03:37 <devananda> looks like there are some up there already, in fact
19:03:37 <NobodyCam> I will not be able to make the next meeting due to TripleO mid cycle
19:03:54 <devananda> yep - next week is the tripleo mid cycle in Raleigh
19:04:04 <devananda> I know some of us (myself included) will be there
19:04:18 <romcheg_ltp> So who will be the chair?
* NobodyCam is there Monday and Tuesday only
19:04:47 * jroll votes lucasagomes for chair next week
19:04:54 <romcheg_ltp> +1 for lucasagomes!
19:04:57 <yuriyz> +1
19:04:59 <Shrews> NobodyCam: coming to the tripleo meetup? excellent
19:05:03 <mrda> +1
19:05:05 <lucasagomes> hah
19:05:05 <devananda> lucasagomes: want to? I'm fine with that
19:05:06 <lucasagomes> alright
19:05:08 <NobodyCam> :)
19:05:08 <dtantsur> +1
19:05:19 <jroll> nice-looking bus I have, eh lucasagomes? :)
19:05:20 <lucasagomes> sure
19:05:32 <devananda> #info lucasagomes will run the next weekly ironic meeting, since devananda and NobodyCam will be at the TripleO midcycle
19:05:57 <devananda> also, the following week being our meetup, I would like to suggest that we don't have an IRC meeting that Monday
19:06:10 <jroll> +1
19:06:11 <NobodyCam> devananda: +1
19:06:19 <lucasagomes> yeah makes sense
19:06:21 <mrda> yup
19:06:34 <devananda> #info no weekly meeting on July 28, due to Ironic midcycle meetup
19:06:51 <romcheg_ltp> But will you post updates so the guys who don't go to the meetup can read them?
19:07:03 <Shrews> devananda: since adam_g and I will be in Raleigh too, I suggest talking testing strategy at some point while there
19:07:10 <devananda> romcheg_ltp: yep
19:07:18 <devananda> Shrews: good point.
19:07:55 <devananda> other announcements -- I reviewed our review stats on Friday and sent a few nominations out
19:08:18 <devananda> I'll make those official later today
19:08:30 <lucasagomes> o/
19:08:46 <devananda> a huge thanks to everyone who's been helping with reviews (and spec reviews)!
19:09:07 <devananda> our pace has gotten about 50% faster, in terms of # of patches landed, in June compared to May
19:09:16 <devananda> and I think it'll keep improving with a few more cores :)
19:09:23 <jroll> nice!
19:09:38 <devananda> that's all for my announcements, I think
19:09:39 <wanyen> Deva: we uploaded the new ilo deploy spec, can you and other reviewers take a look?
19:10:14 <devananda> wanyen: please wait until the open discussion to bring up questions which aren't on the agenda
19:10:21 <devananda> #topic release cycle progress report
19:10:24 <wanyen> ok
19:10:40 <devananda> I noticed this morning that three more specs were approved recently -- good job!
19:10:56 <devananda> again, a huge thanks to those working on spec reviews, and to the proposers for having patience
19:11:10 <devananda> we (and a lot of other projects, too) are still finding what the "right" balance is
19:11:25 <devananda> between being too process-heavy vs. not thinking things through clearly
19:11:29 <jroll> /b 76
19:11:35 <jroll> gah
19:11:47 <NobodyCam> BINGO
19:11:50 <NobodyCam> :-p
19:11:53 <jroll> lol
19:11:57 <matty_dubs> Close some of those tabs! ;)
19:12:08 <devananda> also a reminder, we're past the new-spec cut off -- but we can (and should) continue to iterate on existing specs until August 13
19:12:40 <devananda> the nova driver is making progress. we landed several more clean up patches last week
19:12:53 <devananda> mrda has one in flight still to add auth token caching to the client inside the nova driver
19:13:01 <NobodyCam> so no new specs for J at this point? Is K open?
19:13:02 <devananda> which should help with some of the performance
19:13:14 <devananda> NobodyCam: correct. no NEW specs. but revisions to existing specs are fine
19:13:21 <NobodyCam> :)
19:13:29 <mrda> devananda: hoping to get an updated patch uploaded today for further review
19:13:35 <devananda> mrda: fantastic
19:13:45 <rloo> devananda: if someone proposes a new spec, do you -2 it?
19:13:47 <devananda> I'm going to try to squash all the ones that landed into my proposal to nova
19:13:52 <devananda> rloo: yep
19:14:13 <devananda> and take the feedback I've gotten from nova on that review and push it back to ironic
19:14:16 <rloo> thx devananda. you = you ;)
19:14:28 <NobodyCam> rloo: with a nice note that we'll unblock it after K officially opens
19:14:35 <devananda> FWIW, this is painful. I would really like us to stop making any more changes to the nova virt driver
19:14:52 <lucasagomes> we can talk about it in the open discussion, but what if the specs are submitted to the k/ directory instead of juno/?
19:15:01 <devananda> mrda: so the sooner you can work on that, and we can land it, the better -- I think it's important enough to land it today if we can
19:15:06 <rloo> devananda: if we don't make the changes to the nova virt driver now, then when? would it be less painful later?
19:15:09 <wanyen> deva: Did faizan work with you on the Nova driver changes for UEFI boot?
19:15:50 <devananda> wanyen: the spec for those is not approved yet, in either nova or ironic. I suspect that work will have to wait until after we land the nova virt driver (assuming we are able to land it this cycle)
19:16:08 <lucasagomes> rloo, after it's merged in nova
19:16:15 <jroll> devananda: perhaps nova driver changes should be submitted to nova with your patch as a dep
19:16:35 <lucasagomes> rloo, not less painful, but keep updating the patch all the time adding new features makes the review in nova more difficult
19:16:38 <devananda> jroll: except we can't test them like that yet
19:16:51 <devananda> until the driver merges into nova, we can't flip the tempest switch to test it there
19:17:07 <devananda> so we can't test dependent patches right now if they're proposed on top of mine
19:17:11 <rloo> lucasagomes: yeah, I hear you. Pain is in our future...
19:17:16 <jroll> devananda: sure, but we can't land them like that either...
19:17:34 <jroll> I'm just thinking it's a better place to put them, and they'll land eventually
19:17:37 <devananda> jroll: we can test and land them in Ironic for now. it just means I need to do a lot of manual merging and fixing of things
19:17:52 <jroll> "I would really like us to stop making any more changes to the nova virt driver"
19:18:07 <jroll> bugfixes should be in the ironic tree, yes, but adding features probably not
19:18:09 <devananda> which is why I'm asking that everyone hold off approving any changes to the nova virt driver in our tree
19:18:49 <jroll> right... which doesn't jive with "test and land in ironic for now"
19:18:52 <devananda> let's move on, we can talk about the details of that process for a while later :)
19:18:55 <jroll> we can discuss later
19:18:57 <jroll> :)
19:18:59 <NobodyCam> devananda: I have +2'd some changes but I am not +A'ing any in the nova tree
19:19:07 <devananda> lucasagomes: hi! any updates on the general cleanup?
19:19:09 <devananda> comstud: ^ ?
19:19:30 <devananda> NobodyCam: thanks. that's perfect.
19:19:56 <comstud> devananda: I haven't been doing any cleanup...
19:19:56 <lucasagomes> well, not on the clean-ups; pretty much all the instance_info stuff is now merged. As for the migration script, I will update the patch to include some documentation about how to use it and what that migration script is for
19:20:21 <comstud> but I have that one bug fix for build + delete races (when you happen to configure 2 nova-computes in a way nova does not like :)
19:20:32 <comstud> I need to update it with an issue I found with my fix tho
19:20:43 <devananda> comstud: thought you had some more object code cleanups in the pipe?
19:20:48 <lucasagomes> not a clean-up... but a refactor-like patch. There's the management interface; I put a new spec up today and it's already +2'd
19:21:09 <comstud> devananda: Gosh, well, I wish.. but they are not for the virt driver, anyway..
19:21:11 <lucasagomes> if I could get more eyes on that would be great because there's other specs depending on that one
19:21:24 <devananda> lucasagomes: got a link?
19:21:24 <lucasagomes> #link https://review.openstack.org/#/c/100218/
19:21:26 <devananda> :)
19:21:29 <NobodyCam> lucasagomes: link
19:21:31 <NobodyCam> :--p
19:21:31 <lucasagomes> heh perfect timing
19:21:36 <lucasagomes> was grabbing it
19:21:40 <NobodyCam> heheh
19:21:47 <devananda> #info more reviews requested for the change to ManagementInterface (see prior link)
19:22:00 <devananda> lucasagomes: anything else before we move on?
19:22:05 <lucasagomes> not from me
19:22:11 <devananda> #topic subteam status reports
19:22:36 <devananda> adam_g: hi! care to summarize the tempest work from last week?
19:23:28 <adam_g> nothing too major from the tempest side of things. Shrews and I have been busy trying to push through some new feature flags; the devstack slaves' RAM requirements should get bumped to 512 soon, paving the way for concurrent testing + parallel ironic instance spawns
19:23:28 <Shrews> devananda: Not sure adam_g is available, but I did make some progress with submitting some tempest reviews to fix some more API tests for us (some after lengthy discussions).
19:24:10 <devananda> good stuff
19:25:00 <devananda> for those not following the CI stuff closely, the short version is that the -qa team wants to enable all the compute/api tests with the ironic driver (instead of with libvirt)
19:25:38 <adam_g> Shrews, was consensus reached regarding what to do about cinder tests?
19:25:39 <devananda> rather than us having just one or two scenario tests for ironic, that would mean running all the nova tests (on the order of 100 or some such)
19:26:00 <Shrews> adam_g: we're going with a feature flag
19:26:08 <adam_g> Shrews, ah, cool
19:26:20 <Shrews> adam_g: https://review.openstack.org/106472
19:26:32 <NobodyCam> #link https://review.openstack.org/106472
19:26:35 <devananda> NobodyCam: any updates on tripleo testing of Ironic?
19:27:29 <NobodyCam> not from me today, maybe lifeless has one?
19:27:34 <NobodyCam> if he's around
19:27:57 <devananda> NobodyCam: ok. it looks like the tripleo-undercloud-ironic job has been mostly stable lately, so perhaps no news is good news
19:28:00 <devananda> dtantsur: hi! I know you just got back from PTO (welcome back!) -- anything to share regarding bug status today? If not, that's fine.
19:28:10 <dtantsur> something :)
19:28:16 <dtantsur> and hello everyone
19:28:22 <devananda> dtantsur: or regarding fedora support, if anything's changed there
19:28:22 <NobodyCam> ya :)
19:28:29 <dtantsur> people like numbers, here are the numbers: 11 New bugs (+9); 151 Open bugs (+28); 35 In-progress bugs (-1); 1 Critical bug (0); 26 High importance bugs (0); 5 Incomplete bugs (-1)
19:28:54 <devananda> ouch, 151 open bugs... I think it was just barely over 100 a few weeks ago
19:28:55 <dtantsur> As you see, we have a seriously increased number of open bugs (and I didn't have time to sort out the new ones)
19:29:07 <dtantsur> 123 2 weeks ago
19:29:22 <dtantsur> (+- numbers are differences since last measurement)
19:29:32 <devananda> ack
19:29:42 <dtantsur> One of the reasons is that Nova folks started retargeting bm bugs to us
19:29:54 <dtantsur> some without even seriously looking into them, some quite valid
19:29:55 <devananda> #info Bug stats: 11 New bugs (+9); 151 Open bugs (+28); 35 In-progress bugs (-1); 1 Critical bug (0); 26 High importance bugs (0); 5 Incomplete bugs (-1)
19:30:05 <dtantsur> so we can expect even more bugs from them
19:30:21 <devananda> dtantsur: yep. jogo poked me after the fact on that, but I haven't had time to triage all of them.
19:30:23 <dtantsur> bugs for nova-bm are being closed
19:30:34 <NobodyCam> dtantsur: can we close some of the bugs?
19:30:44 <dtantsur> devananda, I'll ask you to triage some, because not everything is clear to me
19:30:47 <NobodyCam> the one critical has a fix committed
19:30:49 <devananda> some of them may be valid, but they should all have the "baremetal" tag
19:30:53 <devananda> so at least they are easy to spot
19:31:04 <NobodyCam> as I suspect many others do too
19:31:17 <dtantsur> NobodyCam, I didn't have time to create a script measuring the proper numbers, so this is still what LP reports
19:31:30 <NobodyCam> ahh :) Np
19:31:48 <dtantsur> this is it for bugs, I'm in the middle of sorting them
19:31:53 <NobodyCam> what do we need to do to remove them from LP?
19:32:02 <devananda> NobodyCam: nothing
19:32:02 <dtantsur> and no updates on Fedora - didn't have time
19:32:17 <devananda> NobodyCam: they will be hidden from the default status view on LP once they are released
19:32:26 <NobodyCam> ack :)
19:32:44 <dtantsur> ah! https://bugs.launchpad.net/launchpad/+bug/1341643
19:32:45 <devananda> I'd prefer our weekly stats to indicate those bugs that don't even have a fix committed yet -- but LP doesn't do that for us, so we will need a custom script at some point
19:32:50 <uvirtbot> Launchpad bug 1341643 in launchpad "No way of closing some bugs" [Undecided,New]
19:32:52 <dtantsur> that is about non-closing bugs in LP :)
19:33:05 <devananda> that's something else too ...
19:33:20 <dtantsur> devananda, that's on my plans, didn't have time though
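For reference, a minimal sketch of the kind of custom stats script mentioned above: counting Ironic bugs that do not yet have a fix committed via python-launchpadlib. The consumer name and the exact status grouping are illustrative assumptions, not an existing Ironic tool.

    from launchpadlib.launchpad import Launchpad

    # Statuses that still need a fix, i.e. excluding Fix Committed / Fix Released.
    OPEN_STATUSES = ['New', 'Incomplete', 'Confirmed', 'Triaged', 'In Progress']

    lp = Launchpad.login_anonymously('ironic-bug-stats', 'production')
    ironic = lp.projects['ironic']

    open_tasks = ironic.searchTasks(status=OPEN_STATUSES)
    critical = ironic.searchTasks(status=OPEN_STATUSES, importance='Critical')

    print('Open (no fix committed yet): %d' % len(open_tasks))
    print('Critical among those: %d' % len(critical))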
19:33:24 <devananda> jroll: hi! any updates on IPA?
19:33:30 <jroll> hi!
19:33:39 <devananda> dtantsur: thanks for the updates :) again, welcome back!
19:33:43 <jroll> as always, people can check the latest status here: https://etherpad.openstack.org/p/ipa-todos
19:33:43 <dtantsur> thanks :)
19:33:52 <devananda> #link https://etherpad.openstack.org/p/ipa-todos
19:34:06 <jroll> I'm still figuring out some devstack things... agent is booting and getting the deploy command, but can't talk to swift
19:34:24 <jroll> I have some ideas... patches and testing etherpad soon
19:34:41 * NobodyCam has to step away from computer... I will catch up / read the log when I return.. Thank you all...
19:34:42 <jroll> JoshNang has been working on / testing decom... that's very close as well
19:34:54 <dtantsur> jroll, do you have some instructions on testing IPA locally?
19:34:59 <dtantsur> maybe I missed them...
19:35:01 <jroll> and don't forget we still need the spec to land: https://review.openstack.org/#/c/98506/
19:35:08 <JayF> jroll: should we mention the builds of ipa-coreos image?
19:35:16 <jroll> dtantsur: not yet, coming soon. this is what I meant by "testing etherpad"
19:35:23 <dtantsur> very good
19:35:24 <jroll> JayF: yes, thanks :)
19:35:28 <JayF> I will talk about it then :)
19:35:43 <JayF> We have a job running, in openstack infra, that's generating this: http://tarballs.openstack.org/ironic-python-agent/coreos/ipa-coreos.tar.gz every agent commit
19:36:02 <JayF> this is a tarball that includes the ramdisk and kernel necessary to run IPA in the CoreOS ramdisk, just like we do at Rackspace
19:36:07 <wanyen> is IPA going to support OSes other than CoreOS?
19:36:23 <JayF> We welcome patches to add support for other distribution methods, including DIB :)
19:36:41 <dtantsur> #link http://tarballs.openstack.org/ironic-python-agent/coreos/ipa-coreos.tar.gz
19:36:45 <yuriyz> I started some work https://review.openstack.org/#/c/103105/
19:36:47 <jroll> IPA will run on any OS, fwiw, we just don't have builders for others
19:37:15 <jroll> yuriyz: awesome! should ping us for reviews when that's no longer WIP :)
19:37:17 <devananda> #info ironic-python-agent now generating tarballs of a CoreOS-based agent image on every commit, available at http://tarballs.openstack.org/ironic-python-agent/coreos/ipa-coreos.tar.gz
19:37:40 <dtantsur> yuriyz, cool, I'd also be glad to review
19:37:44 <devananda> yuriyz: nice!
19:38:17 <JayF> yuriyz: I'd also suggest you add a builder for it to -infra once we get it merged as well. If it's going to be supported, it needs to be built and tested regularly to make sure it doesn't atrophy :)
19:38:38 <JayF> and while we still need better testing, we can at least build and upload working images :)
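For anyone wanting to try the per-commit image JayF describes, a minimal sketch of fetching and unpacking it (Python 3 here; the local directory name is arbitrary and the archive layout is whatever the build job produces):

    import tarfile
    import urllib.request

    URL = ('http://tarballs.openstack.org/ironic-python-agent/'
           'coreos/ipa-coreos.tar.gz')

    # Download the latest build and unpack the deploy kernel + ramdisk it contains.
    urllib.request.urlretrieve(URL, 'ipa-coreos.tar.gz')
    with tarfile.open('ipa-coreos.tar.gz', 'r:gz') as tar:
        tar.extractall('ipa-coreos')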
19:38:42 <devananda> jroll: keep up the good work. I'm looking forward to digging into IPA at the sprint in two weeks :)
19:39:06 <jroll> devananda: thanks, we're excited to land it
19:39:13 <jroll> but seriously need spec reviews
19:39:20 <jroll> so that the spec lands before the sprint
19:39:55 <devananda> #info IPA spec needs reviews -- https://review.openstack.org/#/c/98506/
19:40:01 * dtantsur will look
19:40:06 <jroll> woot :)
19:40:07 <lucasagomes> +1
19:40:11 <devananda> GheRivero: hi! any updates on Oslo for us?
19:41:38 <devananda> ok, we'll come back to Ghe if he is around later.
19:41:42 <devananda> romcheg_ltp: hi! any updates on the nova db -> ironic db migration script?
19:42:04 <romcheg_ltp> Morning again!
19:42:41 <romcheg_ltp> So I made a few minor updates to the data migration tool
19:43:05 <romcheg_ltp> And I've been stuck on the Grenade testing
19:43:42 <devananda> it looks like there are reviews up for them, but they haven't gotten any nova attention yet
19:43:46 <romcheg_ltp> I don't know how that d**n thing works on the gates, but in a week I didn't manage to get it to run
19:44:04 <romcheg_ltp> Who are the best guys to talk to about grenade?
19:44:17 <devananda> romcheg_ltp: so at the atlanta summit, sdague volunteered to give some guidance on how we can use grenade to test this upgrade path
19:44:36 <romcheg_ltp> So I will poke Sean then
19:44:42 <devananda> thanks!
19:44:43 <romcheg_ltp> Also I have a question
19:45:02 <devananda> I'll poke the nova team regarding the upgrade spec and your two reviews for a migration tool in the meantime, and at the midcycle in two weeks
19:45:08 <romcheg_ltp> Should I create console endpoints for migration tools in nova?
19:45:39 <devananda> I don't have confidence that console support worked in nova-baremetal
19:45:45 <romcheg_ltp> I can also post that to the mailing list but there's no need for that if you know the quick answer
19:45:50 <devananda> it certainly wasn't tested, and none of the deployments I know of used it
19:46:04 <romcheg_ltp> devananda: No, not that console :)
19:46:07 <devananda> I think posting to the ML would be good, just in case someone is out there who needs it
19:46:10 <devananda> oh?
19:46:20 <devananda> which console then
19:46:21 <romcheg_ltp> I mean cli endpoints
19:46:41 <devananda> what's a CLI endpoint?
19:47:13 <romcheg_ltp> console_script in setup.py
19:47:18 <devananda> oh - you mean install them into $PATH somewhere
19:47:19 <devananda> yes
19:47:21 <romcheg_ltp> That's what I mean
19:47:34 <devananda> definitely :)
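For context, "console_script in setup.py" refers to setuptools entry points that get installed as commands on $PATH. A minimal sketch, with a purely hypothetical command and module name (OpenStack projects usually declare these in setup.cfg via pbr, but the idea is the same):

    from setuptools import setup

    setup(
        name='example-migration-tools',
        entry_points={
            'console_scripts': [
                # installs an "example-bm-migrate" command that invokes main()
                'example-bm-migrate = example_migration.cmd.migrate:main',
            ],
        },
    )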
19:48:06 <devananda> romcheg_ltp: thanks!
19:48:22 <devananda> thanks to all the subteams - great reports, folks :)
19:48:22 <romcheg_ltp> I have no questions then :)
19:48:24 <romcheg_ltp> Thanlks
19:48:43 <devananda> #topic discussion: how to clearly indicate that Ironic is not a CMDB
19:48:49 <devananda> so this came up (again) today
19:48:57 <devananda> I feel strongly that Ironic is not suited to be a CMDB
19:49:03 <JayF> +1
19:49:10 <devananda> and we're dancing close to a slippery slope
19:49:19 <devananda> and some of the spec proposals go too far, IMO, over that edge
19:49:33 <JayF> What are a few examples?
19:49:39 <devananda> so I pointed out the mission statement
19:49:41 <devananda> #link http://git.openstack.org/cgit/openstack/governance/tree/reference/programs.yaml#n142
19:50:07 <devananda> and am wondering if others feel that is good, or if that is somehow indicating that a CMDB is or might be in scope ...
19:50:24 <Shrews> devananda: it does invite some vagueness
19:50:39 <dtantsur> as I already stated, "managing" is too vague
19:50:47 <matty_dubs> To be honest (and maybe this is just me), I'm not sure I have ever seen that mission statement before.
19:51:03 <dtantsur> success story ^^^
19:51:10 <devananda> matty_dubs: it's part of the governance docs which the TC produces
19:51:34 <devananda> but the fact that y'all haven't seen them also indicates that the impression that Ironic might be a CMDB isn't coming from there
19:51:42 <devananda> so just changing that won't address it (but might help)
19:51:48 <jroll> right, I don't think it's coming from there
19:51:52 <JayF> devananda: I'm very curious to hear your impression of what is over the line and what is not
19:52:05 <jroll> but if you change it, you can at least point at that when needed
19:52:09 <matty_dubs> It may be the canonically correct place to clarify this, but I'm not sure if it'll be widely read
19:52:14 <devananda> JayF: storing a full inventory of discovered hardware information
19:52:40 <jroll> devananda: btw, I think the mission statement should be here: https://wiki.openstack.org/wiki/Ironic
19:52:44 <dtantsur> oh, we had this discussion today again
19:52:45 <devananda> JayF: in and of itself isn't over the line. but as soon as that's there, someone will want to search through it, or audit it, or ask for a delta -- what changed when I re-inventoried this server?
19:52:50 <jroll> (and perhaps a thing about 'not a cmdb')
19:52:51 <lucasagomes> jroll, +1
19:52:53 <devananda> and *that* is all CMDB territory
19:52:55 <Shrews> devananda: perhaps adding an "Ironic Manifesto" to the ironic wiki page would help?
19:52:59 <lucasagomes> it's a wiki so we can update that
19:53:00 <dtantsur> devananda, someone already wanted that in the comments on my spec >_<
19:53:01 <Shrews> oh, yeah, what jroll said   :)
19:53:14 <JayF> devananda: that's sorta what I was thinking the line was too: storing what exists (ok), modifying or updating what exists (not ok)
19:53:25 <dtantsur> yeah, if we want to avoid it, we may just be clear and say it
19:53:36 <JayF> devananda: the path for #2 is 'old server gone, here's a new server that looks suspiciously similar except now has new ram'
19:54:10 <jroll> JayF: I think updating is ok... but using it as a data source is what makes it a cmdb
19:54:10 <devananda> and of course, if y'all think I'm wrong (and Ironic should be a CMDB) I expect everyone to tell me that
19:54:13 <devananda> (but so far no one has)
19:54:22 <wanyen> hw inventory related to scheduling would be relevant to provisioning
19:54:24 <jroll> devananda: they're all scared of you :)
19:54:31 <devananda> jroll: :-/
19:54:37 <jroll> (I'm joking)
19:54:52 <jroll> wanyen: there are other methods of doing that
19:55:06 <devananda> jroll: i swear, i'm not scary ..... :)
19:55:07 <matty_dubs> To me, the line is kind of blurry. I have no issue with where it's drawn, but I think it's very easy to come along and say, "As long as we're doing x, it'd be nice to do x++...."
19:55:16 <wanyen> Jroll: for example?
19:55:28 <devananda> matty_dubs: exactly the situation we're in.
19:55:36 <jroll> wanyen: example: https://review.openstack.org/#/c/105802/
19:55:41 <dtantsur> 5 minutes left
19:55:53 <JayF> Doesn't that just mean that spec reviewers need a good idea of where the line is and to enforce it there?
19:55:58 <wanyen> Not to store all hw inventory but only the hw properties that assist scheduling
19:56:06 <devananda> I sort of had this discussion with folks offline about a year ago, but it was hypothetical. Now it's practical.
19:56:26 <devananda> and I think I'm doing a disservice to all of you if I don't try to help clarify where the line is
19:56:31 <jroll> wanyen: scheduler doesn't need to know that "this machine has 512kb L2 cache and an nvidia m1234 gpu with 2048MB video ram", that's far too fine-grained
19:56:36 <JayF> devananda: maybe something good to discuss at the mid-cycle is things explicitly in and out of scope
19:56:40 <JayF> devananda: to put into the wiki to clear it up
19:56:44 <devananda> JayF: ++
19:56:48 <jroll> +1
19:56:52 <dtantsur> ++
19:56:56 * devananda adds to agenda
19:57:00 <lucasagomes> +1 sounds good
19:57:07 <dtantsur> Can we have a quick chat on discovery specs?
19:57:15 <devananda> sure
19:57:18 <devananda> #topic open discussion
19:57:20 <jroll> type fast :)
19:57:28 <wanyen> jroll: I agree. So only the properties that assist scheduling
19:57:32 <jroll> Nisha: around?
19:57:39 <jroll> wanyen: yep :)
19:57:40 <dtantsur> we had a long discussion with Nisha and I guess we came to sort of agreement
19:58:15 <dtantsur> we don't see our specs as overlapping too much; Nisha's spec is one particular implementation that can be built on top of what I'm suggesting
19:58:27 <dtantsur> we've discussed ways of doing it, I left a comment on the spec
19:58:50 <dtantsur> so I don't see reasons not to consider both specs
19:58:57 <wanyen> jroll: we need to allow scheduling with basic properties: # of CPUs, RAM size, etc. Some vendor might want to support scheduling based on server model; that should be allowed.
19:58:58 <devananda> i think this isn't the first time we've seen a need
19:59:08 <devananda> for a generic spec + a specific implementation spec on top of it
19:59:18 <devananda> so I'm fine with that
19:59:19 <dtantsur> sure :)
19:59:20 <devananda> if it helps the discussion
19:59:45 <Nisha> jroll: yes
19:59:46 <devananda> if both authors agree, and gerrit represents that dependency, I think that's great
19:59:48 <dtantsur> And I would actually ask cores to consider my spec, because it's been around for some time and people come and ask for new details, for proper hw inventory and so on
19:59:49 <jroll> dtantsur: indeed, I think Nisha's should depend on yours :)
19:59:59 <jroll> wanyen: indeed
20:00:03 <dtantsur> Nisha, are you ok with what I suggest?
20:00:15 <devananda> any objections to using a generic + specific spec approach in this case, or in general?
20:00:49 <Shrews> time is up, folks
20:00:50 <devananda> (waiting for Nisha's answer before ending the meeting)
20:00:58 <dtantsur> links: mine https://review.openstack.org/#/c/102565/, Nisha's https://review.openstack.org/#/c/100951
20:01:01 <Nisha> dtantsur: Yes it looks ok
20:01:07 <devananda> #link https://review.openstack.org/#/c/102565/
20:01:10 <devananda> #link https://review.openstack.org/#/c/100951
20:01:14 <Nisha> I will update
20:01:21 <dtantsur> thanks a lot :)
20:01:27 <wanyen> so Nisha's spec is pretty generic in terms of the management interface
20:02:09 <devananda> #agreed using a generic spec + one or more implementation-specific specs, which are gerrit dependencies, seems to be an accepted approach when that helps facilitate a discussion about new driver features
20:02:16 <devananda> thanks all! great meeting, as usual
20:02:21 <dtantsur> thanks!
20:02:24 <devananda> see some of you next week, and some the week after that!
20:02:27 <lucasagomes> cheers
20:02:30 <devananda> #endmeeting