19:01:02 <NobodyCam> #startmeeting Ironic
19:01:03 <openstack> Meeting started Mon Nov 25 19:01:02 2013 UTC and is due to finish in 60 minutes.  The chair is NobodyCam. Information about MeetBot at http://wiki.debian.org/MeetBot.
19:01:04 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
19:01:07 <openstack> The meeting name has been set to 'ironic'
19:01:10 <devananda> o/
19:01:17 <NobodyCam> #chair devananda
19:01:18 <openstack> Current chairs: NobodyCam devananda
19:01:25 <NobodyCam> welcome all to the Ironic meeting
19:01:26 <GheRivero> o/
19:01:27 <romcheg> o/
19:01:29 <lucasagomes> hey
19:01:30 <yuriyz> hi
19:01:30 <shadower> hola
19:01:32 <dkehn> o/
19:01:35 <ctracey> hey
19:01:40 <NobodyCam> The agenda can ofc be found at:
19:01:41 <NobodyCam> #link https://wiki.openstack.org/wiki/Meetings/Ironic#Agenda_for_next_meeting
19:01:50 <NobodyCam> #toplic Greetings, roll-call and announcements
19:02:06 <NobodyCam> hey all :)
19:02:26 <devananda> #topic Greetings, roll-call, and announcements
19:02:29 <NobodyCam> ok any notes / announcements
19:02:57 <NobodyCam> lol toplic doesn't work
19:03:02 <NobodyCam> hehe
19:03:52 <NobodyCam> #topic integration and testing
19:04:10 <NobodyCam> romcheg: any updates here
19:04:12 <lucasagomes> ahh no in-progress action items?!
19:04:22 <NobodyCam> hehehe
19:04:23 <romcheg> NobodyCam: There are some updates
19:04:46 <romcheg> All of the "third party" patches are merged
19:04:56 <NobodyCam> w00t :)
19:05:26 <lucasagomes> just one thing, last week I said I would map some code/bugs to be ported from nova bm to ironic I started mapping them at https://etherpad.openstack.org/p/IronicWhiteBoard (at the bottom)
19:05:41 <romcheg> Two patches left: #link https://review.openstack.org/#/c/53917/ and #link https://review.openstack.org/#/c/48109/
19:05:58 <NobodyCam> great :)
19:06:19 <romcheg> The first one adds the jobs to infra config. Apparently it won't land until clarkb finishes his refactoring
19:06:32 <ctracey> lucasagomes, those are the ones listed at the bottom?
19:06:45 <lucasagomes> ctracey, yes
19:06:55 <ctracey> ok cool, thanks
19:07:01 <NobodyCam> I have a update on the nova driver
19:07:20 <romcheg> I plan to make the jobs non-voting to be able to safely add them to tempest in order to test 48109 properly
19:08:18 <NobodyCam> I put up a new nova-ironic dib element in my git hub. this element pulls patch set 3 of the ironic nova driver. https://github.com/NoBodyCam/ironic-element
19:08:23 <NobodyCam> #link https://github.com/NoBodyCam/ironic-element
19:09:18 <devananda> romcheg: let's start with non-gating for ironic, as you say, but plan to move them to be gating quickly
19:09:25 <NobodyCam> I made some changes to FakePower (that are not yet ready) that will allow the fake driver to go thru a deploy
19:09:43 <devananda> romcheg: by "non-voting" you mean voting +1/-1, but not gating, I assume
19:09:53 <romcheg> devananda: yes
19:10:00 <devananda> romcheg: :)
19:10:01 <romcheg> I planned to make them voting
19:10:34 <devananda> romcheg: so yes, let's start with that, and once we see that it works (few days, maybe a week) move them to gating (for ironic only, ofc)
19:10:34 <romcheg> However it's not safe to add voting Ironic jobs to infra
19:10:44 <devananda> right
19:10:48 <NobodyCam> :)
19:11:10 <NobodyCam> romcheg: I would vote for non-voting as first step
19:11:20 <devananda> NobodyCam: you mean non-gating
19:11:53 <NobodyCam> yes... but I think I have heard the term non-voting
19:11:58 <ctracey> we have used voting/non-gating on other projects, and that has worked well
19:12:06 <ctracey> fwiw
19:12:12 <NobodyCam> :)
19:12:13 <romcheg> devananda: It should be non-voting because otherwise it will -1 every patch to tempest if it doesn't work
19:12:49 <devananda> romcheg: it shouldn't be enabled in the general tempest pipeline
19:13:15 <romcheg> devananda: that's a good idea
19:13:33 <romcheg> I will put it in the experimental pipeline in tempest
19:13:54 <romcheg> Then I will be able to make it voting/gating from the beginning
19:13:57 <devananda> romcheg: isn't there a flag to enable/disable it?
19:14:16 <devananda> romcheg: that flag defaults to 0. that's fine. in the ironic pipeline, we'll flip that flag on
19:14:54 <devananda> yea. there is this: https://review.openstack.org/#/c/48109/6/tempest/config.py  default=False
19:15:22 <romcheg> devananda: Yes, it's disabled
19:15:46 <romcheg> But to test the tests in this patch we need to enable that flag
19:16:11 <romcheg> That's what the new jobs in 53917 are for
19:16:13 <devananda> and this, https://review.openstack.org/#/c/53899/6/devstack-vm-gate-wrap.sh, which defaults DEVSTACK_GATE_IRONIC=0
19:16:45 <devananda> romcheg: exactly
19:16:47 <romcheg> The new jobs I introduced set DEVSTACK_GATE_IRONIC to 1
19:17:26 <romcheg> I'm going to add those jobs to the experimental pipeline in tempest
19:17:41 <romcheg> That will allow us to test the API tests
19:17:45 <devananda> romcheg: AIUI, that will enable the tempest tests for ironic in the ironic check and gate pipelines only
19:18:26 <romcheg> It won't if I add it to the experimental pipeline
19:18:36 <devananda> i mean, the patches you have now will ^
19:19:11 <romcheg> They will do that for Ironic
19:19:20 <romcheg> But we need to do that for tempest
19:19:26 <devananda> let's not add something to tempest's pipe.
19:19:27 <devananda> why?
19:19:39 <romcheg> To test changes to Ironic tests
19:19:57 <devananda> oh. gotcha
19:20:08 <romcheg> There is a special pipeline for that
19:20:17 <devananda> cause you can't see the tempest test changes in ironic's test pipeline. of course.
19:20:27 <devananda> cool
19:20:54 <romcheg> It allows running some jobs by posting "check experimental"
19:21:01 <NobodyCam> :) great romcheg want a action item?
19:21:11 <romcheg> I do!
19:21:12 <romcheg> ^)
19:21:26 <romcheg> Ok, I took too much of the time now, sorry
19:21:59 <NobodyCam> #action ( romcheg ) enable ironic in experimental pipeline
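The scheme romcheg and devananda settle on above hinges on a boolean flag that defaults to off and is flipped on only in Ironic's own jobs. A minimal sketch of that pattern, assuming the flag name from the linked reviews (the helper function itself is hypothetical, not code from the patches):

```python
import os

# Hedged sketch: DEVSTACK_GATE_IRONIC defaults to 0 (off), matching
# default=False in tempest/config.py, and only Ironic's own pipeline
# jobs flip it to 1. The helper function is illustrative only.

def ironic_tests_enabled(environ=None):
    """True only when the gate explicitly opts in to the Ironic tests."""
    environ = os.environ if environ is None else environ
    return environ.get("DEVSTACK_GATE_IRONIC", "0") == "1"
```

With this shape, the general tempest pipeline (flag unset) never runs the Ironic jobs, so they cannot -1 unrelated tempest patches.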
19:22:04 <dkehn> NobodyCam: Neutron/nova integration discussion time?
19:22:22 <NobodyCam> dkehn: sure
19:22:36 <NobodyCam> #topic Nova / neutron
19:22:39 <dkehn> ok, 1st q: what level of support does Ironic need
19:22:56 <dkehn> one possibility full https://github.com/openstack/nova/blob/master/nova/network/api.py support
19:23:05 <dkehn> or not
19:23:28 <dkehn> cue crickets
19:23:45 * devananda skims api
19:23:46 <NobodyCam> sorry was looking at the link
19:24:10 <NobodyCam> I kinda think we'll need the full api.
19:24:20 <dkehn> I'm assuming that Nova will be nowhere
19:24:25 <devananda> dkehn: is that the n-net api or the neutron api (or are they the same)?
19:24:37 <dkehn> same
19:24:39 <devananda> k
19:25:01 <dkehn> so basically it packages up a request and sends to the neutron API
19:25:08 <NobodyCam> we could prob do without the dns functions
19:25:12 <devananda> dkehn: so, at least for now, we can assume nova is still in the picture
19:25:20 <dkehn> see the __init__.py where it gets the API for neutron
19:25:29 <dkehn> oh ok
19:25:40 <devananda> we're trying not to rewrite nova :)
19:25:47 <dkehn> that changes things a bit, then we could go that route
19:26:04 <devananda> dkehn: the interaction we need is that the PXE driver in particular needs to do certain things
19:26:05 <dkehn> instead of implementing the network (at least for neutron) in Ironic
19:26:11 <dkehn> sure
19:26:27 <devananda> dkehn: that nova doesn't know about. like manage the DHCP BOOT response, and associate an IP on a separate network for deployment, temporarily
19:27:07 <devananda> dkehn: so the neutron interaction is going to be coming from the PXE driver layer, not eg. the ironic ConductorManager layer
19:27:20 <NobodyCam> GheRivero: any input on this ^^
19:27:44 <dkehn> ok, that seems like we haven't talked about that yet, still mulling it over
19:28:22 <dkehn> GheRivero: does your PXE driver handle something like this?
19:28:25 <GheRivero> looking at the nova-ironic driver, most of the networking will still be done by nova. We (ironic) will mostly need a way to manage the dhcp_opts
19:28:49 <devananda> NobodyCam, lucasagomes, and lifeless and I talked last week about some different ways we can initiate (re)mapping of nova instance <-> ironic conductor <-> neutron dhcp_opts
19:28:57 <dkehn> GheRivero: we talked about that previously, about putting it into the modules/pxe.py
19:28:57 <devananda> what came out of that is this BP
19:28:59 <devananda> #link https://blueprints.launchpad.net/ironic/+spec/instance-mapping-by-consistent-hash
19:29:21 <dkehn> what is in the nova/virt/baremetal/pxe.py today
19:29:43 <NobodyCam> GheRivero: we will need to create and associate deploy networks and ips
19:30:02 <wh> Need to cover non-pxe driver as well
19:30:02 <lucasagomes> on the ironic white board etherpad there's also some notes related to consistent hashing
19:30:04 <lucasagomes> #link https://etherpad.openstack.org/p/IronicWhiteBoard
19:30:05 <devananda> tl;dr -- each Conductor will know what instances it is responsible for, and needs to set the neutron dhcp_opts for those instances to point to its own IP for TFTP
19:30:28 <devananda> wh: non-pxe drivers may or may not need neutron integration for DHCP BOOT
19:31:17 <dkehn> so in the context of the conductor it will know its network that it needs the PXE boot setup for (tftp server, etc.)
19:31:57 <wh> Can neutron today assign an IP address for a bare-metal node with a non-pxe driver?
19:32:07 <devananda> wh: put another way, only the PXE driver needs to manage PXE booting
19:32:43 <devananda> wh: today, the only other driver in nova-baremetal is Tilera. IIRC, it uses NFS + SSH for booting, not DHCP, so Neutron isn't involved at that layer at all
19:32:50 <dkehn> such that when it calls neutron (or nova) it has enough knowledge for the request info
19:32:55 <devananda> wh: neutron still assigns the *tenant* IP, of course
19:33:13 <NobodyCam> dkehn: conductor will know that it is responsible for a node, and it will know its own ip
19:33:21 <wh> I am thinking about virtual media
19:33:23 <dkehn> k
19:33:51 <NobodyCam> pxe will have a block of deploy ips it can use (atp, deva am I saying that correctly)
19:34:34 <devananda> dkehn: there is some glue we are working on - that BP attempts to explain it - so that, for any instance that Nova is trying to boot, the Ironic PXE driver will know what DHCP_OPTS need to be sent to Neutron for that instance's deploy to work
19:35:01 <devananda> dkehn: as far as the integration with neutron, i think we only need a way for the PXE driver to send that info to neutron directly
19:35:18 <dkehn> ok, so that info is going to be tucked away in the modules/pxe.py, or should be
19:35:22 <devananda> dkehn: i /think/ we're going to remove that from Nova. NobodyCam, amirite?
19:35:36 <devananda> dkehn: right -- ironic/drivers/modules/pxe.py
19:35:38 <dkehn> right
19:36:38 <NobodyCam> running short on time anything elese here?
19:36:40 <dkehn> so we are back to the original question of integrating with neutron - there is an example in nova - is that the right track? and what do we need - of course the plumbing will be required
19:36:57 <devananda> wh: same thing. Point is that a virtual media based deployment driver will not need to set special DHCP BOOT options in neutron. so it won't need this particular bit of integration with neutron, which is why we're removing it from the common code in nova/virt/drivers/ironic.py
19:37:04 <GheRivero> i think only a small fraction of the api will be needed
19:37:12 <devananda> GheRivero: ++
19:37:16 <dkehn> GheRivero: agreed
19:37:39 <GheRivero> - manage dhcp_options
19:37:45 <dkehn> I guess in the short term we need the plumbing and then the API will be a living thing
19:38:12 <GheRivero> - and deployment ip?
19:38:19 <dkehn> plumbing logic would be small, neutron only?
19:38:22 <devananda> dkehn: i can help post-meeting go through the nova network API and figure out what we need
19:38:35 <dkehn> devananda: k
19:38:36 <GheRivero> if a conductor dies, i guess the deployment network also changes
19:38:52 <dkehn> devananda: note meeting day for me maybe later, later
19:39:10 <devananda> dkehn: ok. ping me or, better yet, send me a gcal invite for when you have time
19:39:20 <dkehn> awesome
19:39:27 <NobodyCam> great :)
19:39:32 <dkehn> me likey
19:39:39 <NobodyCam> action item?
19:39:42 <GheRivero> :)
19:39:54 <devananda> #action dkehn and devananda to iron out what nova-network-like API bits are needed in ironic/drivers/modules/pxe.py
19:39:58 <dkehn> hit me baby
19:39:58 <NobodyCam> :)
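The instance-mapping-by-consistent-hash blueprint linked above rests on the idea that every conductor can independently compute which nodes it owns, and so which neutron dhcp_opts to point at its own TFTP IP. A rough sketch of that mechanism; the class name, replica count, and hash choice here are illustrative, not Ironic's eventual implementation:

```python
import bisect
import hashlib

# Hedged sketch: node UUIDs map onto a hash ring of conductor hosts,
# so a node's owning conductor is a pure function of (conductors, uuid).
# Replica count and md5 are arbitrary choices for illustration.

class HashRing(object):
    def __init__(self, conductors, replicas=16):
        ring = []
        for host in conductors:
            for i in range(replicas):
                # each conductor gets several points on the ring
                ring.append((self._hash('%s-%d' % (host, i)), host))
        ring.sort()
        self._ring = ring
        self._keys = [key for key, _host in ring]

    @staticmethod
    def _hash(value):
        return int(hashlib.md5(value.encode('utf-8')).hexdigest(), 16)

    def get_conductor(self, node_uuid):
        # first ring entry clockwise from the node's hash, with wraparound
        idx = bisect.bisect(self._keys, self._hash(node_uuid)) % len(self._ring)
        return self._ring[idx][1]
```

If a conductor dies (GheRivero's question above), only the nodes on its ring segments remap, and the surviving conductors can recompute ownership and re-point the dhcp_opts without coordination.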
19:40:16 <devananda> NobodyCam: moving on?
19:40:30 <NobodyCam> #topic python-ironicclient
19:40:54 <NobodyCam> just need to think of how the driver can start a deploy
19:41:03 <devananda> ahh right
19:41:22 <devananda> lucasagomes: we should probably create a similar API to how we change power state
19:41:36 <lucasagomes> yes, we need to offer something in the lib
19:41:46 <lucasagomes> a method call where one can do some node.deploy()
19:42:04 <NobodyCam> :) or set_deploy
19:42:10 <NobodyCam> *start_deploy
19:42:21 <lucasagomes> yea
19:42:35 <devananda> so we also need to think about other driver interfaces
19:42:42 <devananda> how do we initiate things like
19:42:51 <devananda> - tear down
19:42:55 <devananda> - start console
19:42:59 <devananda> - rescue
19:43:03 <devananda> - migrate
19:43:06 <devananda> etc
19:43:33 <NobodyCam> devananda: +++ console is needed for parity
19:43:38 <devananda> we will need deploy&teardown and start/stop_console for Icehouse
19:43:44 <NobodyCam> :)
19:43:45 <devananda> others are long term but worth keeping in mind
19:43:48 <lucasagomes> yes, we also need to expose a way where we can validate such interfaces
19:43:57 <lucasagomes> before triggering the deploy
19:44:17 <NobodyCam> :) oh yes
19:44:34 <yuriyz> +1 lucas, your patch?
19:44:41 <NobodyCam> so something like validate_deploy & start_deploy
19:44:42 <yuriyz> for validation
19:44:44 <lucasagomes> I will do some work on that, I had a patch (which is now abandoned) that does it (kinda)
19:44:51 <lucasagomes> yuriyz, yes, I will revive that patch
19:45:18 <NobodyCam> lucasagomes: action item?
19:45:27 <lucasagomes> NobodyCam, please :)
19:45:36 <devananda> so we have now PUT /v1/node/XXX/state/power VALUE
19:46:11 <NobodyCam> #action lucasagomes look into adding validate and start deploy methods to the ironic client
19:46:22 <devananda> i think that's sufficient for other states, eg, state/deploy and state/console
19:46:27 <NobodyCam> ok to move on?
19:46:33 <devananda> ya
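devananda's point above is that the existing power-state resource (PUT /v1/node/XXX/state/power VALUE) generalizes to deploy and console. A hedged sketch of what the client side of that pattern might look like; the URLs and method names are extrapolated from the discussion, not the API python-ironicclient actually shipped:

```python
import json

# Hedged sketch: every state change is a PUT to a per-node state
# resource, mirroring the power-state call cited in the meeting.
# `http` is any object with a put(url, body) method.

class NodeStatesClient(object):
    def __init__(self, http):
        self._http = http

    def set_power_state(self, node_uuid, target):
        return self._http.put('/v1/node/%s/state/power' % node_uuid,
                              json.dumps({'target': target}))

    def set_deploy_state(self, node_uuid, target):
        # same resource shape: PUT .../state/deploy {'target': ...}
        return self._http.put('/v1/node/%s/state/deploy' % node_uuid,
                              json.dumps({'target': target}))
```

Keeping deploy and console as more state resources (rather than bespoke verbs) is what makes lucasagomes's node.deploy() and a future start/stop_console cheap to add.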
19:46:45 <NobodyCam> #topic API discussion
19:47:03 <lucasagomes> ah, on the API side, I did some refactoring of our resources
19:47:29 <lucasagomes> to use WSME custom types and complex types validation and remove a lot of the complexity we had before
19:47:36 <lucasagomes> patches are upstream waiting for review
19:47:40 <devananda> lucasagomes: i need to finish reviewing your WSME patch. got some done friday, but didn't finish
19:47:56 <lucasagomes> devananda, ah thanks :)
19:48:20 <yuriyz> lifeless -2 on https://review.openstack.org/56612
19:49:37 <lifeless> yuriyz: I did
19:50:04 <yuriyz> I add some comments
19:50:05 <NobodyCam> any comments on vendor_passthru
19:50:06 <devananda> yuriyz: i also don't like exposing /vendor_passthru/ publicly without auth
19:50:07 <lifeless> yuriyz: oh, I see
19:50:20 <lifeless> yuriyz: sounds like a keystone bug :)
19:50:50 <lifeless> whatever the needs we have, it won't be unique. We should share it by doing it right, once - in keystone.
19:50:55 <lucasagomes> so if not using the kernel cmd, is there any other way we can use to pass it to the deploy ramdisk?
19:51:06 <devananda> lifeless: ++
19:51:16 <devananda> lucasagomes: there are no other _simple_ ways
19:51:16 <lifeless> lucasagomes: we could write it to a file in the tftp root and suck that up from the ramdisk
19:51:26 <yuriyz> DHCP param limited 255 bytes
19:51:38 <lifeless> it would be publically visible, but so is the kernel commandline. Its a race we know about.
19:51:51 <devananda> lucasagomes: there are some odd vendor specific ways that folks have proposed, which potentially would be more secure
19:52:08 <romcheg> yuriyz: Maybe we should use Keystone trusts instead of tokens?
19:52:39 <yuriyz> login/password credentials?
19:52:51 <romcheg> yuriyz: I think there is a chance we can use it to get a token inside a ramdisk
19:52:53 <lucasagomes> I see, maybe we should start a list at the ironic white board etherpad with the possible ways of doing this
19:53:10 <NobodyCam> lucasagomes: ++
19:53:18 <yuriyz> +
19:53:20 <romcheg> yuriyz: No, a trust is a special thing in keystone that allows getting a new token
19:53:26 <GheRivero> ++
19:53:51 <lifeless> write it to a file in the tftp root
19:53:56 <devananda> so while I would like to improve the security of this API by using a better token, or a one-use-only token, or by transmitting the token securely
19:54:01 <lifeless> no better or worse than kernel cmd line and unlimited in size
19:54:25 <devananda> i dont want us to spend another week (or more) engineering a better solution to this than what Nova-baremetal has
19:55:01 <devananda> the current deploy_key (just a string) is sufficient for now
19:55:22 <devananda> yes, we should whiteboard a better solution for post-Icehouse
19:55:39 <devananda> but my objection to this patch is that it opens all of /vendor_passthru/ to unauthenticated access in the public API
19:55:46 <devananda> *public API service
19:55:57 <NobodyCam> devananda: so going with file in tftp dir?
19:56:07 <devananda> NobodyCam: that is tangential to my objection
19:56:12 <lucasagomes> yea it's less than ideal
19:56:20 <devananda> how we get the key there != how we check the key in the API when it's posted back
19:56:42 <devananda> if we can stick a full keystone auth token in a file in the tftp dir, great -- that seems best.
19:56:53 <yuriyz> and maybe add a new config option to allow a private API?
19:56:54 <devananda> then we don't need to lower the auth requirement of /vendor_passthru
19:57:02 <devananda> if we CAN"T use a full keystone token
19:57:34 <NobodyCam> switching topic for last three minutes
19:57:36 <devananda> we need a CONF option to allow unprivileged access to /vendor_passthru, which admins will explicitly need to set up a separate API service for that is not on a public IP
19:57:41 <devananda> ack
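The interim deploy_key devananda accepts above is just a shared secret: the conductor generates it, hands it to the ramdisk (via the kernel cmdline or a file in the tftp root, per lifeless), and checks what the ramdisk posts back. A hedged sketch of that check; the function names are hypothetical, and the constant-time comparison is a hardening choice not discussed in the meeting:

```python
import binascii
import hmac
import os

# Hedged sketch of the deploy_key scheme described above. Names are
# illustrative, not from nova-baremetal or the Ironic patches.

def generate_deploy_key():
    """Random per-deploy secret to hand to the ramdisk."""
    return binascii.hexlify(os.urandom(16)).decode('ascii')

def check_deploy_key(stored_key, presented_key):
    # compare_digest avoids leaking key bytes via timing differences
    return hmac.compare_digest(stored_key, presented_key)
```

Since the key travels over the deploy network in the clear (like the kernel cmdline), this is only the "sufficient for now" bar set in the meeting, not the post-Icehouse design.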
19:57:44 <NobodyCam> #topic open discussion
19:57:57 <devananda> yuriyz: migration scripts!
19:57:58 <NobodyCam> any body? any thing?
19:58:09 <NobodyCam> ya missed that
19:58:10 <lucasagomes> there's the merge of migration scripts
19:58:13 <yuriyz> Ironic not in production
19:58:26 <NobodyCam> #link https://review.openstack.org/#/c/57722/
19:58:34 <yuriyz> +1 for modify old scripts now
19:58:44 <devananda> yuriyz: no, but if this is a problem, how will we solve this when we ARE in production if it comes up again?
19:58:47 <yuriyz> for bugfix
19:59:18 <yuriyz> sqlalchemy-migrate is very buggy
19:59:27 <devananda> yuriyz: and we can fix it
19:59:27 <lucasagomes> yuriyz, we can't just create a new migration that fix the UCs?
19:59:33 <romcheg> Let's switch to Alembic :)
19:59:45 <GheRivero> romcheg: +
19:59:46 <yuriyz> romcheg +1
19:59:51 <devananda> yuriyz: either make new migration that fixes things, or make patch to sqla-migrate ...
19:59:58 <devananda> romcheg: -1
20:00:18 <yuriyz> new migration with all _two_ UCs for the table
20:00:19 <romcheg> devananda: I still have one +1 :-P
20:01:00 <yuriyz> ok I write new one
20:01:01 <devananda> yuriyz: if you have to drop all indexes and recreate them to fix this, that's fine. still better than changing history
20:01:04 <NobodyCam> romcheg: too big of a change for the Icehouse target
20:01:13 <yuriyz> ok
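devananda's fix-forward position above (add a new migration, never rewrite an old one) can be illustrated with a toy migration that rebuilds a table to carry the corrected unique constraints. This sketch uses stdlib SQLite purely for illustration; the real code used sqlalchemy-migrate, and the table and column names here are made up:

```python
import sqlite3

# Hedged sketch: instead of editing the old migration that created
# `nodes` without unique constraints, a *new* migration rebuilds the
# table with both UCs, preserving history. Schema is illustrative.

def upgrade(conn):
    cur = conn.cursor()
    # SQLite can't add constraints in place, so rebuild the table
    cur.execute("""CREATE TABLE nodes_new (
                       id INTEGER PRIMARY KEY,
                       uuid TEXT,
                       instance_uuid TEXT,
                       UNIQUE (uuid),
                       UNIQUE (instance_uuid))""")
    cur.execute("INSERT INTO nodes_new "
                "SELECT id, uuid, instance_uuid FROM nodes")
    cur.execute("DROP TABLE nodes")
    cur.execute("ALTER TABLE nodes_new RENAME TO nodes")
    conn.commit()
```

The cost (a full table rebuild) is what devananda calls acceptable: "if you have to drop all indexes and recreate them to fix this, that's fine. still better than changing history."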
20:01:16 <devananda> ok, we're over time :)
20:01:29 <devananda> let's continue as necessary back in channel. Thanks everyone!!
20:01:30 <NobodyCam> no meeting after us now though
20:01:34 <devananda> oh...
20:01:41 <romcheg> NobodyCam: I realize that. But after we're in production we won't be able to do that
20:01:46 <NobodyCam> but we can go back to channel
20:01:59 <lucasagomes> if we can continue the discussion here it would be better
20:02:05 <lucasagomes> because people look at the logs
20:02:07 <devananda> yea. calendar says no one's after us
20:02:08 <devananda> let's stay here
20:02:12 <NobodyCam> k
20:02:27 <devananda> romcheg: other projects have been discussing moving to alembic for several cycles
20:03:09 <devananda> romcheg: but instead, afaik, openstack is now maintaining sqla-migrate. other projects are still using it. the common code for migrations has been moved to oslo recently
20:03:13 <NobodyCam> devananda: what has been stopping the others
20:03:29 <devananda> romcheg: and in fact I'm porting that code from oslo to ironic :)
20:03:30 <dkehn> I can speak to that, its not trivial
20:03:31 <devananda> #link https://review.openstack.org/#/c/56516/6
20:03:51 <dkehn> but allows migrating back and forth
20:04:10 <devananda> afaik, the plan is to get projects using the shared oslo db migration code
20:04:37 <romcheg> devananda: that is the perfect plan, if it really exists
20:05:15 <devananda> clarkb: any comment on slqa-migrate // alembic // oslo db.sqlalchemy.test_migrations ?
20:06:04 <clarkb> they should run in per test or per process db schemas and not worry about locking using shared resources
20:06:32 <devananda> hehe
20:06:47 <devananda> clarkb: context here is should-we-use-alembic-or-sqlamigrate
20:07:47 <clarkb> oh that, no input from me
20:07:51 <devananda> ack
20:07:54 <NobodyCam> :-p
20:08:00 <dkehn> love it
20:08:08 <NobodyCam> so stick with sqla and fix it?
20:08:42 <dkehn> is there any consensus from other projects
20:08:58 <dkehn> or is the same question being batted around
20:09:00 <clarkb> I think general consensus is move to alembic
20:09:22 <devananda> #action devananda to find out project's common plans for sqla-migrate vs alembic, why there is test_migration code in oslo, and whether we should be fixing bugs in sqla-migrate
20:09:35 <NobodyCam> ++ Ty devananda
20:10:14 <NobodyCam> any thing else??
20:11:05 <NobodyCam> ok we can handle any thing else in channel
20:11:55 <NobodyCam> Thank you every one. great meeting
20:12:18 <NobodyCam> #endmeeting