19:02:01 <devananda> #startmeeting ironic
19:02:02 <openstack> Meeting started Mon May 20 19:02:01 2013 UTC.  The chair is devananda. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:02:03 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:02:05 <openstack> The meeting name has been set to 'ironic'
19:02:36 <devananda> firstly, for anyone not used to these sorts of meetings
19:02:40 <devananda> there's an agenda :)
19:02:45 <devananda> #link https://wiki.openstack.org/wiki/Meetings/Ironic
19:03:07 <devananda> #topic API specifications
19:03:22 <devananda> so, the background here is
19:03:35 <devananda> a few people from different companies have said "hey, how can we interoperate with Ironic"
19:03:45 <devananda> and I said, "we don't have a well-defined API *yet*"
19:03:59 <devananda> and I think we should probably iron that out soon
19:04:16 <devananda> anyone have strong opinions on that?
19:04:21 <devananda> or on what the API should look like?
19:04:24 <epim> Woo, meeting time. :)
19:04:33 <anteaya> I agree we should have an api
19:04:52 <epim> and it should be versioned!
19:04:55 <GheRivero> API is a must do
19:05:00 <sdague> o/ <- here
19:05:02 <anteaya> regarding how it looks, what kind of expectations do the different companies have?
19:05:07 <devananda> well, yes, we must have an API :)
19:05:36 <sdague> devananda: so was there a natural bit inside the baremetal driver that would make sense to start to expose as rest resources
19:05:36 <linggao> I am playing with nova-baremetal from Grizzly. So one thing I would like to see in the ironic api is that it can handle multiple nodes with one command.
19:05:37 <sdague> ?
19:05:37 <devananda> and yes, it should probably be versioned
19:05:58 <devananda> sdague: there was an extension to Nova's API
19:06:08 <linggao> like boot node1,node2...
19:06:14 <devananda> which exposed create/show/delete for nodes and interfaces
19:06:16 <sdague> devananda: so I'd start with that, just as a place to grow from
19:06:26 <devananda> sdague: that was my initial thinking
19:06:32 <devananda> oh! i should probably say
19:06:37 <devananda> #link https://raw.github.com/openstack/ironic/master/ironic/doc/api/v1.rst
19:06:47 <devananda> that is a sketch of a possible API
19:07:00 <devananda> not at all set in stone. just a straw-man for discussion
19:07:18 <anteaya> looks like a reasonable starting point
19:07:43 <NobodyCam> doh
19:07:46 <sdague> my suggestion: it should be
19:07:47 <NobodyCam> sorry late
19:08:07 <sdague> /node/<id>/ifaces/ instead of /node/ifaces/<id>
19:08:34 <sdague> actually, the same for all those deeper resources
19:08:35 <GheRivero> and the same for image, power...
19:08:40 <sdague> yep
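(For illustration only — a minimal Python sketch of the nested-URL layout sdague is proposing, where sub-resources hang off the node. The helper names are hypothetical, not from Ironic's code:)

```python
# Sketch of the nested-resource routes suggested above:
# /node/<id>/ifaces/<iface_id> rather than /node/ifaces/<id>.
# Helper names are illustrative, not part of any drafted API.

def node_url(node_id):
    return "/node/%s" % node_id

def subresource_url(node_id, kind, sub_id=None):
    """Build /node/<id>/<kind>/ or /node/<id>/<kind>/<sub_id>."""
    base = "%s/%s/" % (node_url(node_id), kind)
    return base + str(sub_id) if sub_id is not None else base

print(subresource_url(42, "ifaces", 7))  # -> /node/42/ifaces/7
```

The same shape would apply to the other deep resources (image, power, etc.) mentioned by GheRivero.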
19:08:42 <epim> devananda: good start on that sketch, I think we should have the ability to mark a node as unavailable. If hardware issues are detected you can mark it as unavailable for scheduling until it's repaired
19:08:57 <devananda> sdague: thanks
19:09:04 <sdague> devananda: you want suggestions as patch reviews?
19:09:12 <sdague> that provides voting etc
19:09:18 <devananda> sdague: that'd be great :)
19:09:57 <devananda> epim: 'unavailable' seems like a state or flag on a node, not a separate API call
19:10:02 <sdague> ok, you can action me to provide some proposed suggestions
19:10:12 <devananda> epim: eg, PUT /node/<id> {'state': 'unavailable'} or such
19:10:33 <devananda> #action sdague to provide API changes in a review
19:10:35 <sdague> I would also try to take a page out of the book for the nova v3 rewrite on how they are doing extensions, that would give you flexibility to add between major revs
19:10:57 <linggao> all the commands for node should be able to handle more than one node, or a group of nodes. This is very important for a large cloud.
19:11:06 <devananda> sdague: ah, i haven't been following the v3 rewrite closely enough
19:11:17 <devananda> though i have been thinking about the versioned internal objects work that Nova is doing
19:11:26 <devananda> and would like to have the groundwork for something like that in place, too
19:11:29 <epim> Yup, i'd go so far as PUT /node/<id> {'state': 'unavailable', 'reason' : 'bad memory, repair order XYZ123 filed'}
19:12:10 <devananda> epim: so that's another part of the API which isn't drafted yet -- what the JSON structures are, what fields are required, etc
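(A minimal sketch of what epim's suggested PUT body might look like, since the JSON structures aren't drafted yet — field names here are hypothetical, not a spec:)

```python
import json

# Hypothetical request body for marking a node unavailable, following
# epim's suggestion above. The 'reason' field would be optional, as
# noted later in the discussion. Field names are illustrative only.
body = {
    "state": "unavailable",
    "reason": "bad memory, repair order XYZ123 filed",  # optional
}

# Serialized form that a client might PUT to /node/<id>:
payload = json.dumps(body, sort_keys=True)
```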
19:12:12 <sdague> devananda: also, would you mind moving doc to the root layer?
19:12:23 <sdague> most other projects do that, and it's just a little easier to find
19:12:29 <devananda> #action devananda to move ironic/doc/ to doc/
19:12:33 <sdague> I can propose that as a review
19:12:56 <devananda> sure. either one of us can :)
19:13:07 <devananda> linggao: so actually, i disagree about node ranges
19:13:17 <devananda> linggao: Ironic is an API for the hardware control
19:13:19 <epim> that makes sense, description fields like that should be optional
19:13:36 <devananda> linggao: scheduling and orchestration is happening at a higher layer, ie in Nova or Heat
19:13:43 <sdague> I'd start with a resource api, and add a bulk api later
19:13:47 <epim> there should be a whole meta field about a given host, actually. memory, cpu, disk, mfr, serial no, etc
19:13:55 <linggao> ah, I see.
19:14:36 <devananda> epim: yes. initially, that field has to match what Nova expects -- # CPUs, RAM, local storage, and CPU arch
19:15:43 <devananda> epim: other data which is not useful to Nova (not yet, anyway) eg. serial #, asset tag, PCI tree, etc, could be stored in a JSON-formatted TEXT field and exposed
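(A sketch of the split devananda describes: the fields Nova's scheduler needs stay first-class, while extra hardware detail goes into one JSON-formatted TEXT field. The field and function names here are hypothetical:)

```python
import json

# The properties Nova's scheduler expects stay as first-class fields;
# everything else (serial #, asset tag, PCI tree, ...) is serialized
# into a single JSON-formatted TEXT column. Names are illustrative.
NOVA_FIELDS = ("cpus", "memory_mb", "local_gb", "cpu_arch")

def split_node_properties(props):
    core = {k: props[k] for k in NOVA_FIELDS if k in props}
    extra = {k: v for k, v in props.items() if k not in NOVA_FIELDS}
    return core, json.dumps(extra, sort_keys=True)

core, extra_json = split_node_properties({
    "cpus": 8, "memory_mb": 16384, "local_gb": 500,
    "cpu_arch": "x86_64", "serial": "ABC123",
})
```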
19:16:47 <devananda> #action devananda to look at nova's v3 api rewrite and see if we can learn from that
19:17:12 <devananda> any thing else on APIs?
19:17:36 <epim> i'll submit my ideas as a patch
19:17:45 <devananda> sounds good :)
19:17:50 <devananda> moving on...
19:17:54 <devananda> #topic blueprints
19:18:00 <devananda> #link https://blueprints.launchpad.net/ironic
19:18:08 <anteaya> there are a few
19:18:17 <devananda> and i expect we'll end up with more
19:18:21 <anteaya> good
19:18:30 <devananda> so anyone's welcome to post blueprints
19:18:41 <devananda> i'm prioritizing based on matching the functionality in the nova baremetal driver
19:18:43 <NobodyCam> i've started to look at the VPD bp
19:19:17 <devananda> some of these, like the manager BP, may get subdivided
19:20:02 <devananda> i'd like to see all the high & essential stuff assigned at some point :)
19:20:13 <devananda> any takers?
19:21:12 <epim> I'm willing to take some blueprints on
19:21:13 <GheRivero> i'll take a deeper look and step in
19:21:32 <devananda> awesome
19:22:12 <devananda> so, there are two unassigned right now
19:22:19 <devananda> #link https://blueprints.launchpad.net/ironic/+spec/equivalent-pxe-driver
19:22:25 <devananda> #link https://blueprints.launchpad.net/ironic/+spec/autobuild-api-docs
19:22:51 <devananda> pxe-driver is probably going to take a bit of work
19:23:00 <devananda> refactoring the nova baremetal pxe driver to not depend on nova so much :)
19:23:14 <devananda> the other one is all about automatically generating API docs
19:23:24 <devananda> based on the implementation of the API
19:23:34 <devananda> Ceilometer is already doing this, i believe
19:24:12 <devananda> Ironic's v1 api is here: https://github.com/openstack/ironic/blob/master/ironic/api/controllers/v1.py
19:24:40 <devananda> epim, GheRivero: either of those look interesting?
19:25:02 <epim> I'm checking to see if one of our rockstar coders has some cycles to take on one of those
19:25:48 <devananda> epim: ack, thanks
19:25:50 <GheRivero> i can go with the pxe, but it will be my first bp, so maybe it is a bit "risky"
19:26:12 <lucasagomes> I'm not familiar with sphinx at all... but I can take a look
19:26:50 <anteaya> I'd love to jump in but I can't right now
19:27:17 <devananda> ok! sounds like we have volunteers :)
19:27:20 * devananda assigns BPs
19:28:18 <linggao> devananda: I see there is node create and node delete. How about a node change function for updating a certain field of a node?
19:28:40 <linggao> I did not see it in nova, is there a reason for this?
19:28:55 <devananda> GheRivero: I'm happy to help with the PXE one as you go. it may be good for me to break that down into a few smaller BPs
19:29:05 <devananda> linggao: there is PUT, which is for updates
19:29:27 <devananda> linggao: or if not, there should be :)
19:29:46 <linggao> devananda: ok.
19:30:09 <devananda> linggao: the reason in nova is that updating certain fields of a node would require rebuilding the deployed instance anyway, so that driver only implements create/delete
19:30:22 <devananda> that may be true for some fields of Ironic as well
19:31:14 <devananda> but hopefully most of that info will be factored out to the BMC and Deploy drivers, not part of the Node itself
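(A sketch of the update semantics devananda describes: some fields can't change without rebuilding the deployed instance, so a PUT handler might flag those. Field and function names are hypothetical:)

```python
# Hypothetical check a PUT handler might run: updates to certain fields
# would invalidate an already-deployed instance, so they are flagged
# rather than silently applied. Field names are illustrative only.
REDEPLOY_FIELDS = {"cpu_arch", "local_gb"}

def update_requires_redeploy(old, new):
    """Return True if any changed field forces a rebuild of the instance."""
    return any(old.get(f) != new.get(f) for f in REDEPLOY_FIELDS)

update_requires_redeploy({"cpu_arch": "x86_64"}, {"cpu_arch": "arm"})  # -> True
```

A simple correction like linggao's mistyped MAC, by contrast, would go through an ordinary update without a rebuild.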
19:31:22 <devananda> anyway, sounds like we're done with BPs
19:31:26 <devananda> #topic open discussion
19:31:26 <linggao> devananda: sometimes I typed in wrong info in nova, the MAC for example, but had to delete the node and then recreate it.
19:32:07 <devananda> so, anyone have anything to talk about? feel free to jump in
19:32:47 <sdague> hey, so just circling back around to the API again, I think it would be good to get some sample json for the resources on get or post or whatever
19:32:49 <linggao> how about a node discovery function? like finding the MAC for a node.
19:32:55 <anteaya> how are new folks feeling about posting reviews for patches?
19:33:20 <devananda> sdague: yes! actually i have that laying around... heh ...
19:33:20 <sdague> because I'm not sure I understand some of the resource structure just based on url right now (I also just stuck that in my review)
19:33:38 <devananda> #action devananda to post sample JSON for some API calls
19:33:55 <devananda> sdague: also we need to add unit tests for the api
19:34:10 <devananda> right now i think the only unit tests are for the db layer and ipmi driver
19:35:07 <devananda> linggao: yes, the API does need to support some sort of query or find functionality.
19:36:08 <devananda> sdague: there's some sample JSON inline, eg. https://github.com/openstack/ironic/blob/master/ironic/api/controllers/v1.py#L119
19:37:17 <sdague> devananda: yeh, I'd just go with poking it into the doc as a scratch space
19:37:28 <sdague> ok, got to run off to another meeting
19:37:43 <devananda> thanks for joining us o/
19:38:27 <devananda> linggao: any thoughts on what fields should be searchable?
19:40:36 <linggao> jbjohnso: can you help on this one?
19:40:50 <jbjohnso> I wasn't here for the question
19:40:56 <linggao> this is for node discovery function.
19:41:17 <linggao> like find the mac address of a node.
19:41:21 <jbjohnso> oh, so finding things by server enclosure and/or switch port (for enclosure and standalone systems respectively)
19:41:23 <devananda> was your question about discovery or just searching for a known resource?
19:41:25 <linggao> what else can also be found out?
19:41:51 <linggao> it is for discovery.
19:41:57 <devananda> ah, i misunderstood
19:42:00 <jbjohnso> first come first serve, manual (with hints), switch, and enclosure
19:42:28 <jbjohnso> switch workable pretty easily with arbitrary snmp switches (nicely standard)
19:42:39 <jbjohnso> enclosures would have to be proprietary
19:42:43 <devananda> so that's a great topic unto itself
19:43:10 <devananda> right now the API doesn't include anything for hw discovery
19:43:12 <jbjohnso> the fun part comes in the secret exchange in a fashion not vulnerable remotely
19:43:33 <devananda> jbjohnso: would you want discovery exposed via a public API?
19:44:00 <devananda> i would think it should be a mgmt-only function, usually not enabled, and only run on-demand to generate and feed data into the Ironic DB
19:44:20 <jbjohnso> devananda, still need my sea legs....
19:44:32 <jbjohnso> in an alternate life, we had a fairly common api for vm and baremetal
19:44:42 <devananda> as do we
19:44:52 <devananda> Nova exposes a public API for controlling VMs
19:44:59 <jbjohnso> firmware inventory analagous to software inventory
19:45:26 <jbjohnso> hardware inventory just a bit more detailed (e.g. we can tell you about heat spreaders on individual dimms in baremetal, not applicable to virt)
19:45:32 <devananda> that API talks to various drivers (xen, libvirt, etc)
19:45:35 <devananda> one of those drivers will be Ironic
19:46:12 <devananda> so most of the control of instances, power state, etc, will be going like this
19:46:30 <jbjohnso> the list of things a vm can do not applicable to baremetal is short (live migration, software defined hotplug, etc)
19:46:39 <devananda> user -> nova-api -> nova-compute (ironic-client) -> ironic-api -> ironic-manager (drivers doing things)
19:46:50 <jbjohnso> the converse would be.... well, mostly things of interest to the operator
19:46:56 <devananda> exactly
19:47:00 <jbjohnso> like shine a blue light
19:47:24 <devananda> so ironic-api will expose both the set of things nova needs, and the set of things operators of baremetal need
19:47:58 <devananda> and the ironic driver within nova-compute will implement a subset of the functionality of most drivers (eg, it probably wont have resize or live-migrate)
19:48:04 <jbjohnso> ok, to calibrate ambitions: is the aim to be able to do all hardware management, or just the minimum for os deployment?
19:48:14 <devananda> initially, just the minimum
19:48:39 <devananda> jbjohnso: this might help: https://github.com/openstack/ironic/blob/master/README.rst#project-architecture
19:48:54 <devananda> note the second para on driver functionality
19:49:10 <jbjohnso> in general, are api calls targeting just one instance or potentially many?
19:49:21 <devananda> initially, just one
19:49:21 <jbjohnso> e.g. can I make one request to 1,000 servers, or 1,000 requests?
19:49:33 <devananda> orchestration should be happening at a higher layer
19:49:39 <devananda> probably Heat
19:49:46 <lucasagomes> sorry folks I gotta go, I'll leave it open so I can read all the logs later
19:49:58 <jbjohnso> set boot device, power control, get health summary
19:50:04 <devananda> so if my Heat template says "deploy 1000 things", that'll get passed down through Nova to Ironic
19:50:08 <jbjohnso> the health summary being a strong 'maybe'
19:50:31 <jbjohnso> since before you can do health summary in an industry standard fashion, you need to have a pretty thorough grasp of the hard stuff.
19:51:22 <jbjohnso> can have 'sensor', 'eventlog' and 'inventory' type data
19:51:28 <linggao> but 1000 things will result in 1000 calls to ironic.
19:51:32 <jbjohnso> but I could see waiting...
19:51:40 <devananda> the reason I'd like us to focus on single-node minimum-set is that
19:51:49 <linggao> will this cause performance problem?
19:51:53 <devananda> once Ironic matches the functionality of the current baremetal driver
19:52:03 <devananda> we can stop maintaining the functionality in two code bases :)
19:52:11 <linggao> like jbjohnso said, you can reuse the ipmi session.
19:52:44 <jbjohnso> so the python implementation under the covers will be capable of a number of things, but at the high layer we can stick to this for now...
19:52:46 <devananda> that said, I think there is clearly a place for multi-node operations
19:52:53 <devananda> jbjohnso: ++
19:53:18 <devananda> eventually, i'm sure we will want to expose functionality to operators that doesn't directly fit within the current openstack model
19:53:32 <devananda> though, that said, there is a lot of discussion and work ongoing
19:53:34 <devananda> in openstack
19:53:39 <devananda> for cross-project orchestration and scheduling
19:53:43 <devananda> which is where all this really fits in, IMHO
19:54:03 <jbjohnso> and that might drag with it... interesting performance implications, but if all else fails just rewrite from scratch ;)
19:54:27 <devananda> eg, Heat might know that it wants to start 1000 nodes, and it might become smart enough to send one command to Ironic to set the boot device on all of them at once.
19:54:31 <devananda> or something like that :)
19:55:14 <jbjohnso> I can't criticize people on apis, I never made a perfect one anyway
19:56:01 <devananda> so, we're also just about out of time
19:56:17 <devananda> i think we can move this conversation back to #openstack-ironic
19:56:56 <devananda> thanks everyone!
19:57:04 <devananda> see you next time :)
19:57:16 <linggao> thank you devananda for all the input.
19:57:22 <NobodyCam> TY devananda
19:57:56 <devananda> #endmeeting