19:00:27 <devananda> #startmeeting ironic
19:00:28 <openstack> Meeting started Mon Jun 24 19:00:27 2013 UTC.  The chair is devananda. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:00:29 <anteaya> o/
19:00:30 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:00:32 <openstack> The meeting name has been set to 'ironic'
19:00:50 <dkehn> o/
19:00:54 <devananda> #topic agenda
19:00:59 <devananda> #link https://wiki.openstack.org/wiki/Meetings/Ironic
19:01:15 <JimJiang1> hi all
19:01:15 <devananda> agenda looks pretty similar, though i see one new thing
19:01:25 <NobodyCam> :)
19:01:39 <devananda> and just a reminder, everyone's welcome to add items to the agenda any time :)
19:01:58 <devananda> #topic object models
19:02:41 <devananda> we're almost done with these
19:02:52 <devananda> node and port are in tree
19:02:53 <romcheg_> yup
19:02:57 <anteaya> yay
19:03:02 <romcheg_> and chassis
19:03:04 <devananda> and chassis
19:03:07 <devananda> :)
19:03:11 <NobodyCam> great work everyone!
19:03:38 <devananda> there's a review up for driver model (thanks JimJiang1 )
19:03:41 <devananda> #link https://review.openstack.org/#/c/33920/
19:04:04 <devananda> and i'd like us to take a few minutes to talk about the long term functionality of a "driver" object model
19:04:45 <devananda> i have some ideas, which i'll try to type quickly, but questions and opinions are welcome, of course
19:05:25 <devananda> as i see it, a driver object would necessarily represent a record in the db, and so we have a 1:1 relationship between db record and a "driver"
19:05:40 <devananda> but we could have multiple instances of the same driver running, eg on different manager (conductor) hosts
19:06:04 <devananda> also, what will the driver object / db record actually be used for?
19:06:43 <devananda> via the API, we'll need to expose some information _about_ drivers. such as what driver_info properties they require, but that doesn't require a db record...
19:06:46 <romcheg_> that's the question I also have for this meeting
19:08:03 <romcheg_> driver info can be taken from settings
19:08:09 <romcheg_> I guess
19:08:12 <devananda> right
19:08:42 <linggao> I have questions on the drivers too.  there may be different kinds of baremetal nodes in a cloud, should the driver info be node based?
19:08:51 <devananda> there could be an assumption (or a requirement) that driver configuration is consistent across a cluster
19:09:11 <jbjohnso> devananda, how much of the configuration?  including secrets?
19:09:32 <devananda> ah, let me rephrase my last sentence
19:09:47 <devananda> s/driver configuration/static configuration, eg. what is in the config file/
19:09:56 <devananda> i didn't mean the per-node driver info :)
19:10:22 <devananda> linggao: ironic supports multiple drivers simultaneously
19:10:55 <devananda> linggao: so if, for example, you had some iLO and some DRAC hardware (and drivers for those existed, which they do not today) you could easily run both kinds of hardware in the same ironic cluster
19:11:06 <devananda> (that's one of our primary goals)
19:11:40 <devananda> so we have driver configuration (eg in ironic.conf) and we have per-node driver_info (stored in the database)
19:11:41 <linggao> cool. but where is the info stored for the node?
19:11:57 <devananda> linggao: in db: ironic.nodes.driver_info field
19:12:17 <linggao> do we have db schema defined somewhere?
19:12:22 <devananda> yes
19:12:31 <devananda> ironic/db/sqlalchemy/*
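To make the discussion above concrete, here is a minimal, hypothetical sketch of the `nodes` table with its `driver_info` field, using an in-memory SQLite database. The real schema lives in `ironic/db/sqlalchemy/` and has many more columns; the column set, driver name, and values below are illustrative assumptions, not the actual Ironic schema.

```python
import json
import sqlite3

# Rough sketch of the nodes table discussed above (columns assumed).
conn = sqlite3.connect(":memory:")
conn.execute(
    """CREATE TABLE nodes (
           uuid        TEXT PRIMARY KEY,
           driver      TEXT NOT NULL,
           driver_info TEXT  -- JSON blob of per-node, driver-specific settings
       )"""
)

# driver_info holds things like IPMI credentials (values are made up).
conn.execute(
    "INSERT INTO nodes VALUES (?, ?, ?)",
    ("fake-node-uuid", "pxe_ipmi",
     json.dumps({"ipmi_address": "10.0.0.5", "ipmi_username": "admin"})),
)

row = conn.execute("SELECT driver, driver_info FROM nodes").fetchone()
info = json.loads(row[1])
```

The key point from the meeting is that `driver_info` is per-node and opaque to Ironic itself: each driver interprets its own keys.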
19:13:19 <NobodyCam> would the drivers object allow quicker searching.. ie can this conductor (manager) handle a drac node?
19:13:54 <devananda> "can this conductor" -- here I think you're exposing a detail that the API shouldn't expose
19:14:09 <devananda> "can this ironic API handle a drac node" -- that we should expose
19:14:17 <romcheg> devananda: +1
19:14:26 <devananda> and then internally, the API may need to route the request to a conductor which can handle that node
19:14:27 <JimJiang1> +1
19:14:51 <devananda> but I dont think we need to expose individual conductor (or driver) instances outside of the API
19:14:57 <devananda> s/need/should/
19:15:46 <romcheg> It's still not clear to me why we need a db record for a driver
19:15:46 <NobodyCam> then seems we are leaning away from "driver" object
19:16:04 <devananda> so, do we really need a db object for drivers? or can we just use RPC fan-out between API and Conductor when we need to keep them aware of each other's capabilities?
19:16:24 <devananda> ok... sounds like all 3 of us are leaning away from driver db/object :)
19:16:46 <romcheg> devananda: exactly :)
19:17:37 <devananda> JimJiang1: i realize you implemented something based on a BP that I approved -- and we just said "no" to it. I'm sorry -- your patch is good though :)
19:18:01 <devananda> #action devananda to review all the BP's more carefully
19:18:17 <romcheg> JimJiang1: don't give up friend :)
19:18:22 * NobodyCam notes the "HARD HAT REQUIRED"
19:18:50 <devananda> moving on
19:18:51 <JimJiang1> ok:)
19:18:54 <devananda> #topic API and RPC stuff
19:19:06 <romcheg> I had a proposal about this today
19:19:16 <romcheg> When all of you were sleeping
19:19:22 <anteaya> :D
19:19:23 <devananda> :)
19:19:53 <romcheg> I think we should create an api.controllers.v1 package, just like we did with the objects
19:20:14 <devananda> I was just going to point out that I landed a reworked ironic/api/controllers/v1.py based on the node object model
19:20:14 <romcheg> Otherwise we will have a loooong module
19:20:25 <devananda> along with a basic framework for unit testing the API
19:20:37 <devananda> and romcheg, I totally agree. that was going to be my next thing :)
19:21:24 <devananda> I was hoping martyn would be around to start working on the API, but i think he was on vacation for a while?
19:21:34 <romcheg> devananda: I was going to start working on the api, so please publish that as soon as you're done with it
19:21:43 <devananda> romcheg: it's done
19:22:06 <romcheg> Ah, I haven't seen a computer in the last few hours
19:22:12 <devananda> #link https://github.com/openstack/ironic/blob/master/ironic/api/controllers/v1.py#L50
19:22:20 <devananda> romcheg: i landed it late last week
19:22:52 <devananda> I am working on adding the RPC methods for certain node actions that require lock coordination
19:22:55 <romcheg> Ah, no, I don't mean that
19:22:59 <devananda> for example "update"
19:23:01 <devananda> oh?
19:23:25 <romcheg> Currently we have a v1.py module which will contain all the controllers, right?
19:24:02 <devananda> well, currently, yes, but i think refactoring that to a v1/{nodes,interfaces,etc}.py module is fine
19:24:14 <romcheg> That's what I mean
19:24:25 <devananda> i haven't started on that refactoring :)
19:24:28 <devananda> i think it's a great idea
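The refactor being agreed on here amounts to splitting the single `v1.py` into a `v1/` package with one module per resource. A minimal sketch of the resulting shape, with all class names assumed for illustration rather than taken from the real Ironic code:

```python
# Hypothetical layout after the refactor:
#   ironic/api/controllers/v1/__init__.py  -- wires sub-controllers together
#   ironic/api/controllers/v1/nodes.py     -- NodesController
#   ironic/api/controllers/v1/ports.py     -- PortsController

class NodesController:
    """Would live in v1/nodes.py."""
    def list(self):
        return []  # placeholder: would query the node objects

class PortsController:
    """Would live in v1/ports.py."""
    def list(self):
        return []  # placeholder: would query the port objects

class V1Controller:
    """Would live in v1/__init__.py, exposing /v1/nodes, /v1/ports, etc."""
    def __init__(self):
        self.nodes = NodesController()
        self.ports = PortsController()

root = V1Controller()
```

The win is maintainability: each resource's controller grows independently instead of producing the "loooong module" romcheg mentions.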
19:25:44 <NobodyCam> is that a action item?
19:25:51 <devananda> sure
19:25:55 <romcheg> yup
19:26:12 <devananda> #action romcheg to refactor api/controllers/v1.py into a more maintainable modular structure
19:26:15 <devananda> also
19:26:51 <devananda> #action devananda to implement RPC layer for API actions that require lock management
19:26:56 <devananda> #link https://review.openstack.org/#/c/34115/
19:27:03 <devananda> is an initial draft. no tests yet...
19:27:31 <devananda> but basically, some API actions need to be passed to conductor, or else we get nasty race conditions (like updating a node while a conductor is deploying it!)
19:27:37 <devananda> so i'm working on that
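The race condition devananda describes (an update landing while a conductor is mid-deploy) is why those API actions must be routed through the conductor. A toy, single-process sketch of the idea, with all names hypothetical and a `threading.Lock` standing in for the real distributed lock management:

```python
import threading

class FakeConductor:
    """Illustrative only: serializes node mutations with a per-node lock,
    so an update cannot interleave with a deploy on the same node."""

    def __init__(self):
        self._locks = {}
        self.nodes = {}

    def _lock(self, uuid):
        return self._locks.setdefault(uuid, threading.Lock())

    def deploy_node(self, uuid):
        with self._lock(uuid):
            self.nodes.setdefault(uuid, {})["state"] = "deployed"

    def update_node(self, uuid, patch):
        # The API would send this over RPC instead of mutating the db itself.
        with self._lock(uuid):
            node = self.nodes.setdefault(uuid, {})
            node.update(patch)
            return dict(node)

cond = FakeConductor()
cond.deploy_node("node-1")
result = cond.update_node("node-1", {"driver": "pxe_ipmi"})
```

In real Ironic the lock would have to work across multiple conductor hosts, which is exactly why the coordination lives in the conductor rather than the API layer.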
19:27:54 <devananda> any questions on api/rpc stuff?
19:28:26 <devananda> #topic image utils & pxe driver
19:28:27 <romcheg> Currently it's quite clear to me
19:28:49 <devananda> GheRivero: looks like you're making good progress!
19:29:28 <GheRivero> image utils are in the waiting queue... maybe a couple of tests are needed, plus splitting the needed openstack/common changes out of the patch
19:29:32 <linggao> by image utils, do you mean diskimage_builder?
19:29:54 <linggao> or it is in ironic?
19:30:08 <romcheg> I think that's a separate project yet
19:30:18 <GheRivero> linggao: a service/wrapper around the python glance client to retrieve the images, kernels, and ids needed by pxe
19:30:25 <devananda> linggao: no. i mean this: https://blueprints.launchpad.net/ironic/+spec/image-tools
19:31:09 <devananda> GheRivero: awesome. anything I can do to help move the glanceclient patch?
19:31:37 <GheRivero> maybe knocking some doors to have more reviews
19:31:47 <jbjohnso> as far as logistics of getting data to node, is the general level of expectation linux-only, no uefi, tftp down kernel and initrd?
19:32:15 <devananda> #action devananda to get more eyes on the glanceclient image-tools patch
19:32:31 <devananda> jbjohnso: for the initial release, yes
19:32:52 <jbjohnso> ok, keep in mind that the boot filename will likely warrant being changeable
19:32:58 <jbjohnso> and conditional on dhcp request
19:33:09 <NobodyCam> jbjohnso: Ironic will support >1 method later on
19:33:10 <devananda> jbjohnso: other methods are quite interesting, but this is already well understood in this space, and we have a working codebase in the nova-baremetal driver
19:33:35 <jbjohnso> devananda, we have nic vendors that cannot pxe boot in 'BIOS' mode fyi..
19:33:55 <devananda> jbjohnso: in nova-baremetal, the boot filename is keyed by a combination of the nova instance UUID (which is passed to the machine via DHCP BOOT) and the MAC addresses of all the physical NICs of that machine
19:34:26 <devananda> jbjohnso: interesting, but i dont think that directly impacts our workflow
19:34:33 <jbjohnso> devananda, but that means the server must know ahead of time whether the node will attempt uefi or pxe boot; it may be best to have the payload adapt to however the node behaves
19:34:48 <jbjohnso> just food for thought
19:34:53 <devananda> ack
19:34:58 <GheRivero> noted
19:35:29 <jbjohnso> devananda, I would paste the generated isc dhcp stuff that xcat makes
19:35:44 <jbjohnso> but too lazy to pastebin and not evil enough to subject irc to it
19:35:48 <devananda> when we start implementing uefi and/or ipxe, we'll have to consider such things
19:35:54 <jbjohnso> but  elsif option client-architecture = 00:07
19:35:57 <jbjohnso> etc etc etc
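What jbjohnso is gesturing at with `elsif option client-architecture = 00:07` is DHCP option 93 (client system architecture, RFC 4578): the server can hand each client a boot file matching the firmware it actually booted with. A hedged sketch of that dispatch in Python, with the file names being conventional examples rather than anything Ironic ships today:

```python
# Client architecture type values from RFC 4578 (subset).
# 0x07 and 0x09 are both commonly treated as x86-64 UEFI in practice.
BOOT_FILES = {
    0x00: "pxelinux.0",   # Intel x86PC, legacy BIOS PXE
    0x06: "bootia32.efi", # EFI IA32
    0x07: "bootx64.efi",  # EFI BC (x86-64 UEFI in practice)
    0x09: "bootx64.efi",  # EFI x86-64
}

def boot_file_for(client_arch):
    """Pick a boot file based on the DHCP client-architecture option,
    defaulting to legacy BIOS PXE when the value is unknown."""
    return BOOT_FILES.get(client_arch, "pxelinux.0")
```

This is the "adaptive payload" idea: rather than recording ahead of time whether a node is BIOS or UEFI, the DHCP response is conditional on what the node reports at boot.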
19:36:30 <anteaya> jbjohnso: thanks for being the right amount of evil
19:36:31 <linggao> jbjohnso and devananda: is that what node.driver_info is for in the db?
19:36:39 <devananda> jbjohnso: i'm hoping someone with more knowledge than I will dig into the ironic code at that point and add it ;)
19:37:05 <devananda> linggao: node.driver_info in the db is for things like the IPMI credentials, PXE image sources, and so on
19:37:35 <devananda> basically, information which is specific to that driver, that other drivers may not need, and therefore is not a standard requirement of Ironic itself
19:37:39 <jbjohnso> another thing, I noted that nodes with identical contents currently seem to copy the same initrd/kernel over and over?
19:37:41 <linggao> but does it also define what kind of driver a node will use?
19:37:49 <jbjohnso> e.g. deploying 80 nodes means 80 dupe copies of kernel and initrd?
19:37:54 <jbjohnso> on the server that is
19:37:59 <devananda> jbjohnso: yes :(
19:38:01 <GheRivero> jbjohnso: yeah... for now.
19:38:08 <GheRivero> it's on the ToDo list
19:38:19 <devananda> jbjohnso: there are some notes from GheRivero in his patch about optimizing that. ^^ :)
19:38:27 <jbjohnso> use xCAT ;)
19:39:06 <devananda> ok, before we run out of time, let's move on. we can talk more about PXE in open discussion :)
19:39:08 <jbjohnso> ok
19:39:16 <GheRivero> ok
19:39:19 <devananda> #topic ironic diskimage-builder element
19:39:26 <devananda> NobodyCam: you're up! how's it going?
19:39:46 <NobodyCam> just wanted to get it out there that I am working on this
19:40:03 <NobodyCam> I am working on getting manager to standup
19:40:07 <anteaya> yay NobodyCam
19:40:19 <NobodyCam> api starts in the logs.. but is untested
19:40:46 <NobodyCam> I figure just about the time I get it all working the conductor patch will land
19:40:59 <devananda> heh
19:41:18 <NobodyCam> but we should have a dib element to start an ironic server
19:41:46 <anteaya> well done
19:41:51 <NobodyCam> 1 release is aimed at linking to the TripleO boot stack element
19:42:03 <NobodyCam> s/1/1st/
19:42:10 <devananda> one of the things i'm particularly eager to see come from that is linking ironic with keystone auth.
19:42:43 <NobodyCam> :)
19:42:51 <devananda> right now, ironic API has no auth, and there's no access control implemented at the conductor or db layers yet
19:43:47 <NobodyCam> `keystone-client auth $USER $TOKEN|echo $?`
19:43:51 <NobodyCam> wont cut it?
19:44:03 <NobodyCam> :-p
19:44:04 <devananda> :p
19:44:20 <romcheg> :)
19:44:51 <devananda> actually, i think a simple "require all API requests to be from a valid OpenStack admin account" is sufficient for access control. no non-admin should be running ironic commands directly, and we can allow nova to temporarily escalate permissions when it deploys a node
19:45:28 <devananda> so i think just getting that in the API is a good start :)
19:45:47 <devananda> *that = validating the supplied keystone token
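The "admin-only" access rule devananda proposes can be pictured as a simple check on the roles that the keystone auth_token middleware puts in request headers after validating the token. This is a hedged sketch of that policy only; the real implementation would sit behind the actual keystone middleware, and the header name follows the usual OpenStack convention:

```python
def check_access(headers):
    """Reject any request whose validated keystone roles lack 'admin'.

    `headers` stands in for the WSGI request headers populated by the
    auth_token middleware (e.g. X-Roles: "admin,member").
    """
    raw = headers.get("X-Roles", "")
    roles = {r.strip() for r in raw.split(",") if r.strip()}
    if "admin" not in roles:
        raise PermissionError("Ironic API requires an admin account")
    return True
```

Nova escalating to admin temporarily during a deploy would then pass this check without any per-resource access control at the conductor or db layers.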
19:46:14 <devananda> #topic open discussion
19:46:20 <romcheg> Can take a look at keystone
19:46:26 <romcheg> *I
19:47:01 <NobodyCam> open discussion : review 34132
19:47:07 <linggao> question: ironic/drivers/modules directory, what those files are for?
19:47:23 <linggao> are they temporary?
19:47:33 <NobodyCam> none of the other projects include tox or testr ...
19:47:55 <linggao> pxe.py is  under  ironic/drivers and  ironic/drivers/modules
19:48:03 <romcheg> NobodyCam: Yes, that was my concern
19:48:16 <NobodyCam> linggao: no that is where module code lives.. such as ssh
19:48:19 <romcheg> Tests can be run without those files
19:48:44 <NobodyCam> romcheg: other also work from command line
19:48:49 <jbjohnso> my python ipmi implementation is in a temporary public home: https://sourceforge.net/p/xcat/python-ipmi/ci/master/tree/
19:49:06 <devananda> #link http://docs.openstack.org/developer/ironic/api/ironic.drivers.base.html
19:49:17 <devananda> linggao: that doc ^ describes the driver interfaces
19:49:20 <jbjohnso> if anyone wants to comment and either like it or laugh mercilessly at it
19:49:31 <linggao> if I write a power driver for jbjohnso's native ipmi, where should it be checked in?
19:49:39 <devananda> linggao: tl;dr a driver implements a set of interfaces. each driver/module/ implements one (or more) interfaces
19:49:57 <anteaya> jbjohnso: I don't think we are a laugh mercilessly kind of crowd
19:50:02 <devananda> linggao: so a native_ipmi power driver would be created eg, ironic/drivers/modules/native_ipmi.py
19:50:03 <jbjohnso> I have it on good authority that my python code resembles perl too strongly
19:50:16 <jbjohnso> :)
19:50:22 <anteaya> :)
19:50:25 <devananda> linggao: so a native_ipmi power driver *interface* would be created eg, ironic/drivers/modules/native_ipmi.py
19:50:48 <NobodyCam> jbjohnso: /mine resembles FOXPRO code I've been told
19:50:50 <devananda> linggao: and then you would add those interfaces to driver classes, eg, ironic/drivers/native_ipmi_pxe.py
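The composition devananda describes (interfaces implemented in `drivers/modules/`, bundled into driver classes in `drivers/`) can be sketched in a few lines. All class names and return values here are illustrative assumptions, not the real Ironic classes:

```python
class NativeIPMIPower:
    """Power interface; would live in ironic/drivers/modules/native_ipmi.py."""
    def power_on(self, node):
        return "on"      # placeholder: would drive the IPMI session

class PXEDeploy:
    """Deploy interface; would live in ironic/drivers/modules/pxe.py."""
    def deploy(self, node):
        return "deployed"  # placeholder: would write TFTP config, etc.

class NativeIPMIPXEDriver:
    """A driver is just a bundle of interfaces; would live in
    ironic/drivers/native_ipmi_pxe.py."""
    def __init__(self):
        self.power = NativeIPMIPower()
        self.deploy = PXEDeploy()

drv = NativeIPMIPXEDriver()
```

So linggao's native-ipmi power code would be one module implementing the power interface, reusable by any driver class that wants IPMI power with PXE (or any other) deploy.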
19:51:39 <devananda> speaking of the native_ipmi driver...
19:51:52 <devananda> #action devananda to create stackforge repo and import the native ipmi driver from sourceforge
19:51:53 <jbjohnso> ipmi_syncexample.py is something to peruse
19:52:01 <jbjohnso> ipmi_command.py should be mostly readable
19:52:04 <devananda> #action devananda to create stackforge repo and import the native ipmi library from sourceforge
19:52:08 <jbjohnso> ipmi_session.py.... there be dragons...
19:52:34 <devananda> jbjohnso: have you created a gerrit account?
19:52:39 <jbjohnso> devananda, yes
19:52:54 <jbjohnso> devananda, all that should be in order, I'm an officially blessed openstack contributor
19:53:51 <devananda> jbjohnso: great. what's your name in gerrit? i want to make sure you're -core for the ipmi library
19:54:06 <devananda> a few quick guesses and i haven't found you yet
19:54:45 <devananda> jbjohnso: ok, msg me after the meeting :)
19:54:46 <romcheg> jbjohnso: Welcome to the family http://risovach.ru/upload/2012/11/generator/krestnyy-otec_4556647_orig_.jpeg
19:55:07 <devananda> LOL
19:55:09 <jbjohnso> devananda, logged into review.openstack.org as jbjohnso@us.ibm.com
19:55:20 <devananda> ack, ty
19:55:30 <jbjohnso> devananda, maybe I hadn't logged in there yet...
19:55:56 <devananda> jbjohnso: searching for users in gerrit is painful if you dont know their _exact_ address
19:55:58 <jbjohnso> devananda, anyway, appreciate it, give me a place to git remote add origin and I'll push
19:56:01 <NobodyCam> five minute bell
19:56:36 <devananda> oh, two quick announcements from me :)
19:56:44 <devananda> 1 - i have a patch up to rename "manager" to "conductor"
19:56:52 <devananda> shouldn't be a surprise - i think we talked about this a few weeks ago
19:57:21 <devananda> 2 - i will be at europython conf next week. i haven't looked at the schedule so i'm not sure if anything will conflict with this meeting time
19:57:43 <anteaya> yay europython, are you speaking?
19:57:52 <devananda> NobodyCam: mind running things if i'm not able to make it?
19:58:01 <NobodyCam> not at all :)
19:58:08 <devananda> anteaya: not that i'm presently aware of
19:58:16 <anteaya> cool
19:58:22 <NobodyCam> booth duty!!!!
19:58:24 <anteaya> perhaps the hallway track
19:58:25 <devananda> NobodyCam: thanks :)
19:58:39 <devananda> both ^_^
19:58:47 * NobodyCam wants swag
19:59:13 <devananda> cool. times just about up -- thanks everyone!
19:59:21 <NobodyCam> good meeting :)
19:59:21 <romcheg> Thanks!
19:59:27 <anteaya> thank you
19:59:32 <devananda> #endmeeting