19:02:01 #startmeeting ironic
19:02:02 Meeting started Mon May 20 19:02:01 2013 UTC. The chair is devananda. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:02:03 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:02:05 The meeting name has been set to 'ironic'
19:02:36 firstly, for anyone not used to these sorts of meetings
19:02:40 there's an agenda :)
19:02:45 #link https://wiki.openstack.org/wiki/Meetings/Ironic
19:03:07 #topic API specifications
19:03:22 so, the background here is
19:03:35 a few people from different companies have said "hey, how can we interoperate with Ironic"
19:03:45 and I said, "we don't have a well-defined API *yet*"
19:03:59 and I think we should probably iron that out soon
19:04:16 anyone have strong opinions on that?
19:04:21 or on what the API should look like?
19:04:24 Woo, meeting time. :)
19:04:33 I agree we should have an API
19:04:52 and it should be versioned!
19:04:55 API is a must-do
19:05:00 o/ <- here
19:05:02 regarding how it looks, what kind of expectations do the different companies have?
19:05:07 well, yes, we must have an API :)
19:05:36 devananda: so was there a natural bit inside the baremetal driver that would make sense to start to expose as REST resources?
19:05:36 I am playing with nova-baremetal from Grizzly. One thing I would like to see in the Ironic API is that it can handle multiple nodes with one command.
19:05:37 and yes, it should probably be versioned
19:05:58 sdague: there was an extension to Nova's API
19:06:08 like boot node1,node2...
19:06:14 which exposed create/show/delete for nodes and interfaces
19:06:16 devananda: so I'd start with that, just as a place to grow from
19:06:26 sdague: that was my initial thinking
19:06:32 oh! i should probably say
19:06:37 #link https://raw.github.com/openstack/ironic/master/ironic/doc/api/v1.rst
19:06:47 that is a sketch of a possible API
19:07:00 not at all set in stone. just a straw-man for discussion
19:07:18 looks like a reasonable starting point
19:07:43 doh
19:07:46 my suggestion: it should be
19:07:47 sorry, late
19:08:07 /node/<id>/ifaces/ instead of /node/ifaces/
19:08:34 actually, the same for all those deeper resources
19:08:35 and the same for image, power...
19:08:40 yep
19:08:42 devananda: good start on that sketch. I think we should have the ability to mark a node as unavailable: if hardware issues are detected, you can mark it as unavailable for scheduling until it's repaired
19:08:57 sdague: thanks
19:09:04 devananda: you want suggestions as patch reviews?
19:09:12 that provides voting, etc.
19:09:18 sdague: that'd be great :)
19:09:57 epim: 'unavailable' seems like a state or flag on a node, not a separate API call
19:10:02 ok, you can action me to provide some proposed suggestions
19:10:12 epim: e.g., PUT /node/<id> {'state': 'unavailable'} or such
19:10:33 #action sdague to provide API changes in a review
19:10:35 I would also try to take a page out of the Nova v3 rewrite's book on how they are doing extensions; that would give you flexibility to add between major revs
19:10:57 all the commands for a node should be able to handle more than one node, or a group of nodes. this is very important for a large cloud.
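[A minimal sketch of the "mark a node unavailable" call discussed above, assuming a simple HTTP client built on the requests library; the base URL, URL shape, and field names are illustrative guesses at the straw-man API, not a settled design.]

    import requests

    IRONIC_API = "http://ironic.example.com/v1"  # hypothetical endpoint

    def mark_node_unavailable(node_id, reason=None):
        """Flag a node so the scheduler skips it until it is repaired."""
        body = {"state": "unavailable"}
        if reason:  # an optional, human-readable note for operators
            body["reason"] = reason
        resp = requests.put("%s/node/%s" % (IRONIC_API, node_id), json=body)
        resp.raise_for_status()
        return resp.json()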
19:11:06 sdague: ah, i haven't been following the v3 rewrite closely enough
19:11:17 though i have been thinking about the versioned internal objects work that Nova is doing
19:11:26 and would like to have the groundwork for something like that in place, too
19:11:29 Yup, i'd go so far as PUT /node/<id> {'state': 'unavailable', 'reason': 'bad memory, repair order XYZ123 filed'}
19:12:10 epim: so that's another part of the API which isn't drafted yet -- what the JSON structures are, what fields are required, etc
19:12:12 devananda: also, would you mind moving doc to the root layer?
19:12:23 most other projects do that, and it's just a little easier to find
19:12:29 #action devananda to move ironic/doc/ to doc/
19:12:33 I can propose that as a review
19:12:56 sure. either one of us can :)
19:13:07 linggao: so actually, i disagree about node ranges
19:13:17 linggao: Ironic is an API for hardware control
19:13:19 that makes sense, description fields like that should be optional
19:13:36 linggao: scheduling and orchestration are happening at a higher layer, i.e. in Nova or Heat
19:13:43 I'd start with a resource API, and add a bulk API later
19:13:47 there should be a whole meta field about a given host, actually. memory, CPU, disk, manufacturer, serial number, etc.
19:13:55 ah, I see.
19:14:36 epim: yes. initially, that field has to match what Nova expects -- # CPUs, RAM, local storage, and CPU arch
19:15:43 epim: other data which is not useful to Nova (not yet, anyway), e.g. serial #, asset tag, PCI tree, etc., could be stored in a JSON-formatted TEXT field and exposed
19:16:47 #action devananda to look at nova's v3 api rewrite and see if we can learn from that
19:17:12 anything else on APIs?
19:17:36 i'll submit my ideas as a patch
19:17:45 sounds good :)
19:17:50 moving on...
19:17:54 #topic blueprints
19:18:00 #link https://blueprints.launchpad.net/ironic
19:18:08 there are a few
19:18:17 and i expect we'll end up with more
19:18:21 good
19:18:30 so anyone's welcome to post blueprints
19:18:41 i'm prioritizing based on matching the functionality in the nova baremetal driver
19:18:43 i've started to look at the VPD bp
19:19:17 some of these, like the manager BP, may get subdivided
19:20:02 i'd like to see all the high & essential stuff assigned at some point :)
19:20:13 any takers?
19:21:12 I'm willing to take some blueprints on
19:21:13 i'll take a deeper look and step in
19:21:32 awesome
19:22:12 so, there are two unassigned right now
19:22:19 #link https://blueprints.launchpad.net/ironic/+spec/equivalent-pxe-driver
19:22:25 #link https://blueprints.launchpad.net/ironic/+spec/autobuild-api-docs
19:22:51 pxe-driver is probably going to take a bit of work
19:23:00 refactoring the nova baremetal PXE driver to not depend on Nova so much :)
19:23:14 the other one is all about automatically generating API docs
19:23:24 based on the implementation of the API
19:23:34 Ceilometer is already doing this, i believe
19:24:12 Ironic's v1 API is here: https://github.com/openstack/ironic/blob/master/ironic/api/controllers/v1.py
19:24:40 epim, GheRivero: either of those look interesting?
19:25:02 I'm checking to see if one of our rockstar coders has some cycles to take on one of those
19:25:48 epim: ack, thanks
19:25:50 i can go with the pxe one, but it will be my first bp, so maybe it is a bit "risky"
19:26:12 I'm not familiar with Sphinx at all... but I can take a look
19:26:50 I'd love to jump in but I can't right now
19:27:17 ok! sounds like we have volunteers :)
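[An illustrative sketch of the node fields discussed in the API section above: the four properties Nova's scheduler expects, plus a free-form blob for everything else. The field names, and the "extra" key in particular, are assumptions for discussion, not the drafted schema.]

    # Hypothetical node record. Nova currently needs the first four
    # properties; vendor-specific data that Nova doesn't consume (yet)
    # goes into a JSON-formatted TEXT field.
    node = {
        "cpus": 8,               # number of CPUs
        "memory_mb": 16384,      # RAM
        "local_gb": 500,         # local storage
        "cpu_arch": "x86_64",    # CPU architecture
        "extra": {               # serial #, asset tag, PCI tree, ...
            "serial_number": "ABC123",
            "asset_tag": "IT-0042",
        },
    }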
19:27:20 * devananda assigns BPs
19:28:18 devananda: I see there is node create and node delete. how about a node change function for updating a certain field of a node?
19:28:40 I did not see it in Nova; is there a reason for this?
19:28:55 GheRivero: I'm happy to help with the PXE one as you go. it may be good for me to break that down into a few smaller BPs
19:29:05 linggao: there is PUT, which is for updates
19:29:27 linggao: or if not, there should be :)
19:29:46 devananda: ok.
19:30:09 linggao: the reason in Nova is that updating certain fields of a node would require rebuilding the deployed instance anyway, so that driver only implements create/delete
19:30:22 that may be true for some fields of Ironic as well
19:31:14 but hopefully most of that info will be factored out to the BMC and Deploy drivers, not part of the Node itself
19:31:22 anyway, sounds like we're done with BPs
19:31:26 #topic open discussion
19:31:26 devananda: sometimes I typed in wrong info in Nova, a MAC for example, but had to delete the node and then recreate it.
19:32:07 so, anyone have anything to talk about? feel free to jump in
19:32:47 hey, so just circling back around to the API again, I think it would be good to get some sample JSON for the resources on GET or POST or whatever
19:32:49 how about a node discovery function? like finding the MAC for a node.
19:32:55 how are new folks feeling about posting reviews for patches?
19:33:20 sdague: yes! actually i have that lying around... heh ...
19:33:20 because I'm not sure I understand some of the resource structure just based on the URL right now (I also just stuck that in my review)
19:33:38 #action devananda to post sample JSON for some API calls
19:33:55 sdague: also we need to add unit tests for the API
19:34:10 right now i think the only unit tests are for the DB layer and IPMI driver
19:35:07 linggao: yes, the API does need to support some sort of query or find functionality.
19:36:08 sdague: there's some sample JSON inline, e.g. https://github.com/openstack/ironic/blob/master/ironic/api/controllers/v1.py#L119
19:37:17 devananda: yeah, I'd just go with poking it into the doc as a scratch space
19:37:28 ok, got to run off to another meeting
19:37:43 thanks for joining us o/
19:38:27 linggao: any thoughts on what fields should be searchable?
19:40:36 jbjohnson: can you help on this one?
19:40:50 I wasn't here for the question
19:40:56 this is for the node discovery function.
19:41:17 like finding the MAC address of a node.
19:41:21 oh, so finding things by server enclosure and/or switch port (for enclosure and standalone systems respectively)
19:41:23 was your question about discovery or just searching for a known resource?
19:41:25 what else can also be found out?
19:41:51 it is for discovery.
19:41:57 ah, i misunderstood
19:42:00 first come first served, manual (with hints), switch, and enclosure
19:42:28 switch discovery is workable pretty easily with arbitrary SNMP switches (nicely standard)
19:42:39 enclosures would have to be proprietary
19:42:43 so that's a great topic unto itself
19:43:10 right now the API doesn't include anything for hardware discovery
19:43:12 the fun part comes in doing the secret exchange in a fashion that isn't vulnerable remotely
19:43:33 jbjohnso: would you want discovery exposed via a public API?
19:44:00 i would think it should be a mgmt-only function, usually not enabled, and only run on-demand to generate and feed data into the Ironic DB
19:44:20 devananda, still need my sea legs....
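[A small sketch of the query/find functionality raised above: looking up a known node by a searchable field such as its MAC address, which is distinct from hardware discovery. The query-parameter style, field name, and response envelope are assumptions, not a drafted part of the API.]

    import requests

    IRONIC_API = "http://ironic.example.com/v1"  # hypothetical endpoint

    def find_node_by_mac(mac):
        """Search for a known node; not the same as hardware *discovery*."""
        resp = requests.get("%s/node" % IRONIC_API, params={"mac": mac})
        resp.raise_for_status()
        nodes = resp.json().get("nodes", [])  # assumed response envelope
        return nodes[0] if nodes else None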
19:44:32 in an alternate life, we had a fairly common API for VM and baremetal
19:44:42 as do we
19:44:52 Nova exposes a public API for controlling VMs
19:44:59 firmware inventory is analogous to software inventory
19:45:26 hardware inventory is just a bit more detailed (e.g. we can tell you about heat spreaders on individual DIMMs in baremetal; not applicable to virt)
19:45:32 that API talks to various drivers (xen, libvirt, etc)
19:45:35 one of those drivers will be Ironic
19:46:12 so most of the control of instances, power state, etc, will be going like this
19:46:30 the list of things a VM can do that are not applicable to baremetal is short (live migration, software-defined hotplug, etc)
19:46:39 user -> nova-api -> nova-compute (ironic-client) -> ironic-api -> ironic-manager (drivers doing things)
19:46:50 the converse would be... well, mostly things of interest to the operator
19:46:56 exactly
19:47:00 like shining a blue light
19:47:24 so ironic-api will expose both the set of things Nova needs, and the set of things operators of baremetal need
19:47:58 and the ironic driver within nova-compute will implement a subset of the functionality of most drivers (e.g., it probably won't have resize or live-migrate)
19:48:04 ok, to calibrate ambitions: is the aim to be able to do all hardware management, or just the minimum for OS deployment?
19:48:14 initially, just the minimum
19:48:39 jbjohnso: this might help: https://github.com/openstack/ironic/blob/master/README.rst#project-architecture
19:48:54 note the second paragraph on driver functionality
19:49:10 in general, are API calls targeting just one instance or potentially many?
19:49:21 initially, just one
19:49:21 e.g. can I make one request to 1,000 servers, or 1,000 requests?
19:49:33 orchestration should be happening at a higher layer
19:49:39 probably Heat
19:49:46 sorry folks, I gotta go; I'll leave it open so I can read all the logs later
19:49:58 set boot device, power control, get health summary
19:50:04 so if my Heat template says "deploy 1000 things", that'll get passed down through Nova to Ironic
19:50:08 the health summary being a strong 'maybe'
19:50:31 since before you can do a health summary in an industry-standard fashion, you need to have a pretty thorough grasp of the hard stuff.
19:51:22 can have 'sensor', 'eventlog' and 'inventory' type data
19:51:28 but 1000 things will result in 1000 calls to Ironic.
19:51:32 but I could see waiting...
19:51:40 the reason I'd like us to focus on single-node minimum-set is that
19:51:49 will this cause a performance problem?
19:51:53 once Ironic matches the functionality of the current baremetal driver
19:52:03 we can stop maintaining the functionality in two code bases :)
19:52:11 like jbjohnson said, you can reuse the IPMI session.
19:52:44 so the Python implementation under the covers will be capable of a number of things, but at the high layer we can stick to this for now...
19:52:46 that said, I think there is clearly a place for multi-node operations
19:52:53 jbjohnso: ++
19:53:18 eventually, i'm sure we will want to expose functionality to operators that doesn't directly fit within the current OpenStack model
19:53:32 though, that said, there is a lot of discussion and work ongoing
19:53:34 in OpenStack
19:53:39 for cross-project orchestration and scheduling
19:53:43 which is where all this really fits in, IMHO
19:54:03 and that might drag with it... interesting performance implications, but if all else fails just rewrite from scratch ;)
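[A rough sketch of the minimal per-node driver surface implied by the discussion above: power control and boot-device selection as the must-haves, with a health summary as the strong 'maybe'. The class and method names are invented for illustration; this is not the project's actual driver interface.]

    import abc

    class BareMetalDriver(abc.ABC):
        """Hypothetical minimal interface a hardware driver could expose."""

        @abc.abstractmethod
        def set_power_state(self, node, state):
            """Power a node on or off (e.g. over a reusable IPMI session)."""

        @abc.abstractmethod
        def set_boot_device(self, node, device):
            """Select PXE, disk, etc. for the node's next boot."""

        def get_health_summary(self, node):
            """Optional: 'sensor', 'eventlog', and 'inventory' type data."""
            raise NotImplementedError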
19:54:27 e.g., say Heat knows that it wants to start 1000 nodes; it might become smart enough to send one command to Ironic to set the boot device on all of them at once.
19:54:31 or something like that :)
19:55:14 I can't criticize people on APIs, I never made a perfect one anyway
19:56:01 so, we're also just about out of time
19:56:17 i think we can move this conversation back to #openstack-ironic
19:56:56 thanks everyone!
19:57:04 see you next time :)
19:57:16 thank you devananda for all the input.
19:57:22 TY devananda
19:57:56 #endmeeting