*** kobier has quit IRC | 00:07 | |
*** thedodd has quit IRC | 00:19 | |
*** matsuhashi has joined #openstack-ironic | 00:25 | |
*** r-mibu has joined #openstack-ironic | 00:30 | |
*** jbjohnso has joined #openstack-ironic | 00:34 | |
*** nosnos has joined #openstack-ironic | 00:57 | |
*** nosnos has quit IRC | 01:01 | |
*** nosnos has joined #openstack-ironic | 01:01 | |
*** hemna has quit IRC | 01:02 | |
*** rloo has quit IRC | 01:11 | |
openstackgerrit | Jarrod Johnson proposed a change to stackforge/pyghmi: Make pyghmi tolerate arbitrary threading models https://review.openstack.org/71174 | 01:20 |
*** digambar has quit IRC | 01:26 | |
devananda | lifeless: what's your opinion on futures? | 01:30 |
lifeless | devananda: the concept in general - fine. The python library specifically, I haven't used. | 01:35 |
devananda | lifeless: ack, thanks | 01:38 |
lifeless | devananda: I'm curious what Ironic needs async scatter-gather for ? | 01:40 |
lifeless | devananda: remembering that with physical IO involved, nearly any concurrency is a pessimisation | 01:40 |
devananda | yes | 01:40 |
devananda | a layer above that | 01:40 |
lifeless | devananda: e.g. I'm seriously wondering how to knobble nova-compute to stop doing concurrent image downloads from glance right now. | 01:40 |
lifeless | cause its the suck | 01:40 |
devananda | knobble... nice word | 01:41 |
devananda | lifeless: take a look at the ironic pxe driver. IIRC, GheRivero already added that there | 01:41 |
lifeless | devananda: added concurrency or added single-op-at-a-time? | 01:42 |
* devananda double checks the code | 01:42 | |
lifeless | devananda: anyhow, whats ironic considering futures for? I recall a review that mentioned it | 01:42 |
devananda | api<->conductor tier | 01:42 |
lifeless | ah | 01:42 |
lifeless | yeah, ok that could make sense | 01:43 |
devananda | when an RPC request is received by the conductor, take the TaskManager lock, start off a future, then fire back the "OK, i started it" | 01:43 |
lifeless | huh, wait | 01:43 |
lifeless | where's the async in that | 01:43 |
devananda | the requested action may take a long time | 01:43 |
lifeless | yes... | 01:43 |
devananda | user gets answer once the job is successfully /started/ | 01:43 |
devananda | or user gets an error if it can't be started | 01:44 |
devananda | that much is sync | 01:44 |
devananda | the job completes async | 01:44 |
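The pattern devananda is describing — validate synchronously, start the job, return "started" while the job completes asynchronously — can be sketched roughly as follows. This is a toy stand-in, not Ironic's actual code: `start_action`, `_do_work`, and `node_id` are all illustrative names, and a stdlib `ThreadPoolExecutor` substitutes for whatever worker mechanism the conductor would really use.

```python
import concurrent.futures
import time

# Illustrative sketch only: validate and start synchronously,
# complete asynchronously. Names here are hypothetical.
_pool = concurrent.futures.ThreadPoolExecutor(max_workers=4)

def _do_work(node_id):
    time.sleep(0.05)          # stands in for a long-running deploy action
    return "done: %s" % node_id

def start_action(node_id):
    if not node_id:           # synchronous validation: errors surface here
        raise ValueError("no node specified")
    _pool.submit(_do_work, node_id)   # the job itself completes async
    return "started"                  # caller learns only that it began

print(start_action("node-1"))
```

The caller gets "started" (or an error) immediately; whether the job ultimately succeeds has to be discovered some other way, which is exactly the failure-reporting question lifeless raises next.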
lifeless | I thought there was a worker pool already associated with the rpc layer | 01:44 |
lifeless | so you're going to use a worker pool from the rpc worker threads? | 01:44
devananda | worker pool? | 01:44 |
lifeless | yeah | 01:44 |
devananda | this isn't gearman | 01:44 |
devananda | ? | 01:44 |
devananda | sorry, greenthread | 01:45 |
devananda | lifeless: call'able methods return a message to the caller when the invoked method exits. cast'able methods don't return anything | 01:47 |
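The call/cast distinction devananda summarizes can be illustrated with a toy dispatcher (the real mechanism is oslo's rpc layer; `ToyRpc` and its methods are invented for illustration): a call blocks until the invoked method exits and returns its result, while a cast is fire-and-forget.

```python
import queue
import threading

# Toy illustration of call vs cast; not oslo's actual implementation.
class ToyRpc:
    def __init__(self):
        self._inbox = queue.Queue()
        threading.Thread(target=self._serve, daemon=True).start()

    def _serve(self):
        while True:
            func, args, reply = self._inbox.get()
            result = func(*args)
            if reply is not None:        # call: send the result back
                reply.put(result)

    def call(self, func, *args):
        reply = queue.Queue()
        self._inbox.put((func, args, reply))
        return reply.get()               # blocks until the method exits

    def cast(self, func, *args):
        self._inbox.put((func, args, None))  # fire and forget: no reply

rpc = ToyRpc()
print(rpc.call(lambda a, b: a + b, 2, 3))  # the caller gets 5 back
rpc.cast(print, "cast delivered; caller got nothing back")
```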
lifeless | devananda: I'm confused | 01:48 |
lifeless | devananda: I know the cast vs call difference | 01:48 |
lifeless | devananda: I was fairly sure in nova that there wasn't a single RPC service thread for the conductor (to use an example), rather there is a [green]thread pool | 01:48 |
devananda | ahh | 01:48 |
lifeless | so message from amqp -> worker thread in the thread pool | 01:49 |
devananda | gotcha, yes | 01:49 |
lifeless | and what you're saying is that that worker thread is now going to use *another* thread pool. | 01:49 |
lifeless | either I'm confused, or this is bonkers. | 01:49 |
devananda | hah | 01:49 |
devananda | it may be bonkers. we may need to solve it slightly differently. | 01:50 |
lifeless | In Python, more concurrency is not the answer to the problem, *whatever* the problem is. | 01:50 |
lifeless | devananda: so what is the problem ? | 01:50 |
lifeless | devananda: and, am I confused ? | 01:50 |
lifeless | devananda: 'rpc_thread_pool_size' | 01:51 |
devananda | the goal is being able to validate a request and start an action, then inform the user that the action was started | 01:51 |
lifeless | devananda: defaults to 64 - in Ironic /openstack/common/rpc/__init__.py | 01:51 |
lifeless | devananda: so, I don't think I'm confused. | 01:51 |
lifeless | devananda: sketching to check I understand | 01:52 |
lifeless | https://etherpad.openstack.org/p/action-started | 01:52 |
lifeless | devananda: ok, so I understand it now | 01:54 |
lifeless | devananda: let me ask, how will the user tell the thing failed if it fails after start ? | 01:54 |
devananda | editing pad too | 01:54 |
lifeless | ok, so if the user is going to poll | 01:55 |
lifeless | why do this at all? | 01:55 |
lifeless | [I see some complex failure modes that avoiding entirely would be nice] | 01:55 |
devananda | lifeless: so there are a few ways to avoid this, most of which seem to fail much worse | 01:56 |
devananda | lifeless: one is that the API does some very light weight checking, response 202-ACCEPTED to user, and *then* sends the message to conductor | 01:56 |
devananda | lifeless: with no guarantee that it can actually do the work requested | 01:56 |
lifeless | I'll let you enumerate, then we can address point by point | 01:57 |
lifeless | don't want to rabbit hole early | 01:57 |
devananda | k. i'll put in pad so we have references :) | 01:57 |
*** kobier has joined #openstack-ironic | 01:59 | |
lifeless | my laptop being hammered, switching to desktop for a sec | 02:08 |
devananda | hah. i was suddenly wondering who had joined :) | 02:09 |
lifeless | my 2G VM is deploying 3 x 4G images... | 02:11 |
lifeless | THRASH IT BABY | 02:12 |
lifeless | oom killer killed rabbit before | 02:12 |
lifeless | that made deployments unhappy :P | 02:12 |
*** coolsvap_away has quit IRC | 02:34 | |
devananda | right | 02:55 |
devananda | max_lobur_afk: something for you & lucas to look over when ya'll wake up: https://etherpad.openstack.org/p/action-started | 03:30 |
*** harlowja is now known as harlowja_away | 04:03 | |
*** killer_prince has joined #openstack-ironic | 04:19 | |
*** killer_prince has quit IRC | 04:31 | |
devananda | lifeless: message for when you're back (though i'll probably be gone by then) -- what if we limit the futures' threads like we do rpc threadpool? does that affect your preference? | 04:34 |
*** killer_prince has joined #openstack-ironic | 04:37 | |
mrda | devananda: Just a question on bug/1271291, when the bug report suggests the methods should be consolidated - are you talking about just implementation, or the public signatures as well? i.e. are the abstract methods in db/api.py up for change, or only the implementation in db/sqlalchemy/api.py ? | 04:37 |
devananda | oh | 04:38 |
devananda | both | 04:38 |
devananda | but | 04:38 |
devananda | i thought i saw that change in a recent patch? | 04:38 |
* mrda goes looking | 04:38 | |
devananda | mrda: https://review.openstack.org/#/c/63937/ | 04:39 |
devananda | tagged with a different bug id though ... | 04:39 |
*** jbjohnso has quit IRC | 04:41 | |
mrda | devananda: it's kind of half the job, because it ignores get_node, and get_node_by_instance. | 04:42 |
mrda | I'll review the patch and request the bug be added | 04:42 |
mrda | in the patch | 04:42 |
mrda | rats | 04:43 |
devananda | mrda: one i just filed today -- a bit trickier though :) https://bugs.launchpad.net/ironic/+bug/1276393 | 04:45 |
devananda | if you're hunting for bugs to take on, there are some others | 04:46 |
*** shausy has joined #openstack-ironic | 04:47 | |
mrda | Sure! I'm happy to take any that won't be too hard :) | 04:47 |
mrda | devananda: Feel free to send any others my way | 04:51 |
devananda | mrda: I just targeted a bunch of unassigned bugs to i3 ... https://launchpad.net/ironic/+milestone/icehouse-3 | 04:52 |
devananda | let's see how much I regret that when I have to postpone a bunch of them later :) | 04:53 |
devananda | but feel free to take anything that's not assigned | 04:53 |
devananda | anyhow, it's almost 9pm and I really should eat some dinner :) | 04:54 |
* devananda really goes afk now | 04:54 | |
mrda | thanks devananda, have a good night | 04:55 |
*** jcooley_ has quit IRC | 05:19 | |
*** killer_prince has quit IRC | 05:20 | |
*** igor__ has joined #openstack-ironic | 05:37 | |
*** killer_prince has joined #openstack-ironic | 05:40 | |
*** jcooley_ has joined #openstack-ironic | 05:48 | |
*** shawal has joined #openstack-ironic | 05:50 | |
*** killer_prince has quit IRC | 06:03 | |
*** killer_prince has joined #openstack-ironic | 06:04 | |
*** killer_prince has quit IRC | 06:05 | |
lifeless | devananda: back | 06:05 |
*** killer_prince has joined #openstack-ironic | 06:05 | |
*** killer_prince has quit IRC | 06:05 | |
openstackgerrit | Jenkins proposed a change to openstack/ironic: Imported Translations from Transifex https://review.openstack.org/71192 | 06:06 |
*** killer_prince has joined #openstack-ironic | 06:08 | |
*** igor__ has quit IRC | 06:21 | |
*** mrda is now known as mrda_away | 06:44 | |
*** coolsvap has joined #openstack-ironic | 07:09 | |
*** jcooley_ has quit IRC | 07:12 | |
*** jcooley_ has joined #openstack-ironic | 07:12 | |
*** jcooley_ has quit IRC | 07:17 | |
*** shausy has quit IRC | 07:42 | |
*** shausy has joined #openstack-ironic | 07:43 | |
devananda | lifeless: so i should be asleep, but i'm not. i probably don't have braincells for much discussion, but after you went afk i looked closer at futures. | 07:46 |
devananda | lifeless: if implemented differently than in the current patch, it should be possible to limit the # of threads. with that, what concerns do you have? | 07:46
devananda | s/do/would/ | 07:46 |
lifeless | devananda: as a short term fix - fine; longer term the interaction with housekeeping seems necessary to fix regardless | 07:57 |
lifeless | devananda: and - honestly - that seems simple to fix right now - don't use a lock for background updates - just run one and only one, and use a rate limit on the node to ensure both concurrency and rate limit thresholds are kept under | 07:58 |
lifeless | devananda: that said, I don't see what futures gains you - its api is for map-reduce abstractions, which isn't the problem you have | 07:59
devananda | lifeless: it provides a way to return a message on the RPC bus (work started) while the work continues to run in another thread | 08:03 |
lifeless | devananda: so do threads :) | 08:03 |
lifeless | devananda: and thread pools | 08:03 |
devananda | lifeless: um. that's what this is, no? | 08:03 |
devananda | maybe i misunderstood something | 08:04 |
lifeless | devananda: its a backport of a python 3.2+ stdlib module that abstracts out threads vs external processes | 08:04 |
lifeless | http://pythonhosted.org//futures/#executor-objects | 08:04 |
lifeless | its intended to be used from synchronous code that wants to spawn a bunch of async workers. | 08:05 |
lifeless | e.g. map-reduce | 08:05 |
*** mdurnosvistov_ has joined #openstack-ironic | 08:05 | |
*** shausy has quit IRC | 08:05 | |
lifeless | its a lovely thing, but /not/ the problem you have, since you want your synchronous code to return, not to block. | 08:05 |
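The scatter-gather use lifeless describes — synchronous code spawning a batch of workers and blocking on their results — is what the futures `Executor` API is built for, as a quick sketch shows (stdlib `concurrent.futures`, which the futures package backports):

```python
import concurrent.futures

# Scatter work to a pool, gather the results: the map-reduce-ish shape
# the Executor API targets. The synchronous caller blocks until done -
# the opposite of what the conductor wants.
with concurrent.futures.ThreadPoolExecutor(max_workers=4) as pool:
    squares = list(pool.map(lambda n: n * n, range(5)))
print(squares)  # [0, 1, 4, 9, 16]
```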
*** shausy has joined #openstack-ironic | 08:06 | |
devananda | lifeless: also, not taking a lock for background/housekeeping things might reduce the likelihood of tripping on this today, but doesn't solve the races | 08:06
devananda | right | 08:06 |
lifeless | devananda: if it gives breathing room to do the long term solution, I think thats fine. | 08:06 |
devananda | so i want the synchronous code to spawn one and exactly one async worker | 08:06 |
lifeless | devananda: since both the defects we identified in the discussion need to be fixed. | 08:06 |
lifeless | [eventually] | 08:06 |
devananda | yea | 08:07 |
devananda | so non-locking-housekeeping addresses the "nova boot failed because of periodic task" | 08:07 |
devananda | but doesn't address any of the races between clients | 08:07 |
lifeless | if the sync code spawns one and only one worker, then when the sync code runs another rpc call | 08:07 |
lifeless | either it errors [there is a worker], or it blocks [your DOS concern], or it queues [ queueing] | 08:08 |
devananda | or improve the client's visibility into "was my request actioned?" | 08:08 |
*** jistr has joined #openstack-ironic | 08:08 | |
lifeless | I think you would be better served by saying 'given a thread pool serving RPCs, I want a thread pool of the same exact size serving RPC requested actions' | 08:08
lifeless | and when that thread pool is full, start erroring incoming requests. | 08:08 |
devananda | sorry - i mean, for each incoming request, the sync code should start one async worker then synchronously return | 08:08 |
devananda | there may be a pool of N async workers that it can take one from | 08:09 |
lifeless | right | 08:09 |
devananda | right! | 08:09 |
devananda | that's what I meant | 08:09 |
devananda | E_LATE :) | 08:09 |
lifeless | standard eventlet thread pools will do that for you I think | 08:09 |
devananda | ...? | 08:09 |
devananda | hmm, ok | 08:09 |
lifeless | (though I still favour being -much- simpler about this :)) - doc ref for thread pool- http://eventlet.net/doc/threading.html#tpool-simple-thread-pool | 08:10 |
lifeless | tis what oslo.messaging uses | 08:10 |
devananda | can i create two separate pools of pools? | 08:10 |
devananda | er, pools of threadpools? or whatever | 08:10
lifeless | yeah | 08:10 |
lifeless | these are real threads not greenthread, though I don't think that will make any odds for your code | 08:11 |
lifeless | o, maybe not multiple pools available - but if not that will bite oslo eventually too :P | 08:11
lifeless | sigh, I hate this, someday will read code without eyes bleeding | 08:12 |
devananda | hah | 08:12 |
lifeless | anyhow | 08:12 |
lifeless | you should sleep on it | 08:12 |
lifeless | see if my arguments for doing a 'hack' now are sufficient ;) | 08:13 |
lifeless | just checked | 08:13 |
lifeless | you can instantiate a new Pool | 08:13 |
lifeless | proably use http://eventlet.net/doc/modules/greenpool.html | 08:13 |
devananda | yea, reading that page already | 08:13
lifeless | spawn blocks when the worker thing is full, which I think is fine - its still a signal to the user that something is wrong | 08:14 |
devananda | looks like just instantiating a GreenPool within conductorManager.start(), with same number of worker threads, should do the trick | 08:14 |
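The behaviour agreed on here — a second pool the same size as the RPC pool, which signals the caller when all workers are busy — is what `eventlet.GreenPool` would provide (where `spawn` blocks when full). A rough stdlib sketch of the same semantics, with rejection instead of blocking; `BoundedPool` and `NoFreeWorker` are hypothetical names, not eventlet's API:

```python
import threading
import concurrent.futures

# Hypothetical sketch of a fixed-size worker pool that refuses new work
# when full. eventlet.GreenPool is the mechanism actually discussed;
# this only illustrates the semantics.
class NoFreeWorker(Exception):
    pass

class BoundedPool:
    def __init__(self, size):
        self._slots = threading.Semaphore(size)
        self._pool = concurrent.futures.ThreadPoolExecutor(max_workers=size)

    def spawn(self, func, *args):
        if not self._slots.acquire(blocking=False):
            raise NoFreeWorker("all workers busy")  # signal to the caller
        future = self._pool.submit(func, *args)
        # free the slot whenever the job finishes, success or failure
        future.add_done_callback(lambda f: self._slots.release())
        return future

pool = BoundedPool(2)
f1 = pool.spawn(lambda: 1 + 1)
print(f1.result())
```

Instantiating one of these (sized to match `rpc_thread_pool_size`) in the conductor's start method is the shape devananda sketches above.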
lifeless | brilliant - no, fine - yes. | 08:14 |
devananda | yep | 08:14 |
lifeless | again, we should focus right now on good enough, since there is a deadline, and once past it sic redundant Russians on the request id thing | 08:15
devananda | ya | 08:15 |
devananda | scheduler//taskflow//thing is needed | 08:15 |
devananda | but not this cycle | 08:15 |
lifeless | mmm, not a scheduler, nor taskflow | 08:16 |
devananda | thing :) | 08:16 |
lifeless | thing :) | 08:16 |
devananda | let's just call it Thing | 08:16 |
lifeless | WARNING nova.openstack.common.loopingcall [-] task run outlasted interval by 50.013854 sec | 08:16 |
lifeless | noice | 08:16 |
lifeless | 2014-02-05 07:55:05.979 8284 WARNING oslo.messaging._drivers.amqpdriver [-] No calling threads waiting for msg_id : 91c001fd98a34e3489204b819cb1ae51, message : {u'_unique_id': u'734e2577dd3b472e852d73e09a7581e5', u'failure': | 08:16 |
lifeless | poor little node, overworked. | 08:16 |
devananda | heh | 08:17 |
lifeless | trying to spawn a 3 node overcloud | 08:17 |
*** romcheg has joined #openstack-ironic | 08:29 | |
*** ifarkas has joined #openstack-ironic | 08:40 | |
*** ifarkas_ has joined #openstack-ironic | 08:42 | |
*** ifarkas has quit IRC | 08:44 | |
*** lsmola has quit IRC | 08:46 | |
*** mdurnosvistov_ has quit IRC | 08:52 | |
*** matsuhashi has quit IRC | 08:54 | |
*** matsuhashi has joined #openstack-ironic | 08:55 | |
*** romcheg1 has joined #openstack-ironic | 08:57 | |
*** romcheg has quit IRC | 08:58 | |
*** aignatov_ is now known as aignatov | 08:59 | |
*** lsmola has joined #openstack-ironic | 08:59 | |
*** aignatov is now known as aignatov_ | 09:02 | |
*** derekh has joined #openstack-ironic | 09:05 | |
*** romcheg has joined #openstack-ironic | 09:06 | |
*** romcheg1 has quit IRC | 09:08 | |
*** aignatov_ is now known as aignatov | 09:10 | |
*** lucasagomes has joined #openstack-ironic | 09:16 | |
*** lsmola has quit IRC | 09:19 | |
*** lsmola has joined #openstack-ironic | 09:32 | |
openstackgerrit | Yuriy Zveryanskyy proposed a change to openstack/ironic: Expose 'reservation' field of a node via API https://review.openstack.org/71211 | 09:37 |
openstackgerrit | Yuriy Zveryanskyy proposed a change to openstack/ironic: Add ability to break TaskManager locks via REST API https://review.openstack.org/71212 | 09:38 |
openstackgerrit | Yuriy Zveryanskyy proposed a change to openstack/ironic: wip0 https://review.openstack.org/71211 | 09:38 |
openstackgerrit | Yuriy Zveryanskyy proposed a change to openstack/ironic: Expose 'reservation' field of a node via API https://review.openstack.org/71211 | 09:44 |
openstackgerrit | Yuriy Zveryanskyy proposed a change to openstack/ironic: Add ability to break TaskManager locks via REST API https://review.openstack.org/71212 | 09:44 |
*** tatyana has joined #openstack-ironic | 09:58 | |
*** max_lobur_afk is now known as max_lobur | 10:04 | |
lucasagomes | max_lobur, romcheg congratz for the nomination, you guys deserve it :) | 10:20 |
openstackgerrit | A change was merged to openstack/ironic: Move test__get_nodes_mac_addresses https://review.openstack.org/70481 | 10:20 |
romcheg | Morning lucasagomes, max_lobur | 10:20 |
romcheg | Thanks! | 10:20 |
lucasagomes | morning | 10:20 |
romcheg | I appreciate that! | 10:20 |
max_lobur | morning lucasagomes and romcheg | 10:21 |
max_lobur | thanks Lucas | 10:21 |
max_lobur | Thanks for your confidence and support folks! | 10:22 |
lucasagomes | :) | 10:23 |
romcheg | I join Max's thanks :) | 10:23 |
*** romcheg has left #openstack-ironic | 10:24 | |
*** rsacharya has joined #openstack-ironic | 10:24 | |
*** aignatov is now known as aignatov_ | 10:25 | |
*** rsacharya has quit IRC | 10:26 | |
*** rsacharya has joined #openstack-ironic | 10:28 | |
*** rsacharya has quit IRC | 10:36 | |
*** rsacharya has joined #openstack-ironic | 10:36 | |
*** rsacharya has joined #openstack-ironic | 10:37 | |
openstackgerrit | Lucas Alvares Gomes proposed a change to openstack/ironic: API tests to check for the return codes https://review.openstack.org/70766 | 10:39 |
*** jistr has quit IRC | 10:40 | |
*** shawal has quit IRC | 10:40 | |
*** shawal has joined #openstack-ironic | 10:41 | |
*** martyntaylor has joined #openstack-ironic | 10:42 | |
*** walsha has joined #openstack-ironic | 10:45 | |
*** shawal has quit IRC | 10:47 | |
*** nosnos has quit IRC | 10:53 | |
Haomeng | morning Ironic:) | 10:54 |
Haomeng | max_lobur, romcheg congrats:) | 10:54 |
Haomeng | I am back from our Chinese new year:) | 10:55 |
*** romcheg has joined #openstack-ironic | 10:56 | |
*** jistr has joined #openstack-ironic | 11:00 | |
max_lobur | morning Haomeng :) | 11:00 |
max_lobur | thanks! | 11:00 |
openstackgerrit | Lucas Alvares Gomes proposed a change to openstack/ironic: API tests to check for the return codes https://review.openstack.org/70766 | 11:00 |
Haomeng | max_lobur: morning:) | 11:00 |
Haomeng | max_lobur: :) | 11:00 |
Haomeng | max_lobur: I learn a lot from your code review comments, thank you:) | 11:01
max_lobur | :) | 11:01 |
Haomeng | max_lobur: :) | 11:01 |
openstackgerrit | Lucas Alvares Gomes proposed a change to openstack/ironic: Handle multiple exceptions raised by jsonpatch https://review.openstack.org/68457 | 11:02 |
lucasagomes | Haomeng, morning, welcome back | 11:02 |
max_lobur | Haomeng, how's your holidays? :) | 11:02 |
Haomeng | lucasagomes: thanks:) | 11:03 |
Haomeng | max_lobur: very busy holidays to visit all friends:) | 11:04 |
max_lobur | I've seen a few TV reports from the Chinese celebration, seems it was awesome :) | 11:04
max_lobur | Haomeng, heh, same here :) | 11:04 |
Haomeng | max_lobur: yes, Chinese New Year is special, all the people go back home to celebrate the new year:) | 11:05
max_lobur | cool :) | 11:08 |
Haomeng | max_lobur: so the traffic is terrible:) | 11:08 |
max_lobur | I know how it is :) | 11:08 |
Haomeng | max_lobur: :) | 11:09 |
max_lobur | we also have a lot of people working in big cities, and they're all trying to get home for NY :) | 11:09
Haomeng | max_lobur: yes, same case:) | 11:09 |
*** dshulyak has joined #openstack-ironic | 11:10 | |
max_lobur | btw, the winter olympics will start this saturday in Sochi, it should be a great show as well | 11:11 |
Haomeng | max_lobur: great:) | 11:11 |
max_lobur | wondering what Russians will invent to surprise the world :) | 11:11 |
Haomeng | max_lobur: ) | 11:11 |
*** rsacharya has quit IRC | 11:13 | |
*** aignatov_ is now known as aignatov | 11:14 | |
*** Alexei_987 has joined #openstack-ironic | 11:25 | |
openstackgerrit | Lucas Alvares Gomes proposed a change to openstack/python-ironicclient: Add node.states() to the client library https://review.openstack.org/70979 | 11:37 |
*** shausy has quit IRC | 11:42 | |
*** shausy has joined #openstack-ironic | 11:43 | |
*** lsmola has quit IRC | 11:46 | |
*** shausy has quit IRC | 11:55 | |
*** shausy has joined #openstack-ironic | 11:56 | |
*** shausy has quit IRC | 11:56 | |
*** igor_ has joined #openstack-ironic | 11:57 | |
*** shausy has joined #openstack-ironic | 11:57 | |
*** lsmola has joined #openstack-ironic | 12:01 | |
*** kobier has quit IRC | 12:30 | |
*** coolsvap has quit IRC | 12:33 | |
*** shausy has quit IRC | 12:35 | |
*** shausy has joined #openstack-ironic | 12:36 | |
*** lucasagomes is now known as lucas-hungry | 12:43 | |
*** lsmola has quit IRC | 12:53 | |
*** lsmola has joined #openstack-ironic | 12:56 | |
openstackgerrit | Max Lobur proposed a change to openstack/ironic: Allow concurrent image downloads in pxe logic https://review.openstack.org/63904 | 13:19 |
*** igor_ has quit IRC | 13:27 | |
*** igor_ has joined #openstack-ironic | 13:28 | |
*** jdob has joined #openstack-ironic | 13:30 | |
*** igor_ has quit IRC | 13:33 | |
*** walsha has quit IRC | 13:34 | |
openstackgerrit | Yuriy Zveryanskyy proposed a change to openstack/ironic: Add parameter for filtering nodes by maintenance mode https://review.openstack.org/63937 | 13:37 |
*** lucas-hungry is now known as lucasagomes | 13:48 | |
*** zigo_ is now known as zigo | 13:50 | |
*** igor_ has joined #openstack-ironic | 13:58 | |
*** igor__ has joined #openstack-ironic | 14:00 | |
*** shausy has quit IRC | 14:01 | |
*** igor_ has quit IRC | 14:03 | |
*** rloo has joined #openstack-ironic | 14:09 | |
*** medberry is now known as med_ | 14:14 | |
*** jbjohnso has joined #openstack-ironic | 14:17 | |
openstackgerrit | Yuriy Zveryanskyy proposed a change to openstack/ironic: Fix 'run_as_root' parameter check in utils https://review.openstack.org/70324 | 14:22 |
*** matty_dubs|gone is now known as matty_dubs | 14:29 | |
NobodyCam | good mornic Ironig says the man with out coffee | 14:39 |
lucasagomes | NobodyCam, morning | 14:42 |
NobodyCam | :) | 14:45 |
NobodyCam | morning lucasagomes | 14:45 |
max_lobur | morning NobodyCam :) | 14:45 |
NobodyCam | morning max_lobur :) | 14:45 |
romcheg | Morning NobodyCam | 14:45 |
NobodyCam | hey hey romcheg :) good morning... a | 14:46 |
NobodyCam | s/a// | 14:46 |
NobodyCam | :-p | 14:46 |
NobodyCam | I've got a conf call at 7 am pst to see if we can get some resources to help us out! | 14:48 |
NobodyCam | romcheg: you're GMT+2? | 14:48
romcheg | NobodyCam: yes, GMT+2 | 14:48 |
NobodyCam | awesome :) | 14:48 |
*** aignatov is now known as aignatov_ | 14:49 | |
NobodyCam | anyone seen linaggo online last few days? | 14:50 |
*** rloo has quit IRC | 14:53 | |
*** aignatov_ is now known as aignatov | 14:53 | |
*** rloo has joined #openstack-ironic | 14:53 | |
romcheg | No, haven't seen her | 14:55 |
romcheg | NobodyCam: Please keep in mind I have a Spanish class from 6PM to 8 PM my time | 14:56 |
lucasagomes | NobodyCam, haven't seen her/him | 14:57
devananda | morning, all | 14:57 |
lucasagomes | NobodyCam, maybe it's because of the chinese new year | 14:57 |
lucasagomes | devananda, morning | 14:57 |
devananda | max_lobur, lucasagomes - i'm fairly sure that, once i have some coffee in me, i'll be able to explain https://etherpad.openstack.org/p/action-started | 14:58 |
devananda | and another solution for the race that doesn't require futures // expose a DOS attack on the conductor | 14:58 |
NobodyCam | ahh lucasagomes yes | 14:59 |
* lucasagomes clicks | 14:59 | |
max_lobur | morning devananda | 14:59 |
NobodyCam | will do romcheg :) | 14:59 |
lucasagomes | devananda, wait, that etherpad is about the race condition? | 15:00 |
devananda | yes | 15:00 |
devananda | heh | 15:00 |
devananda | lifeless and I were at it for a while ... | 15:00 |
NobodyCam | morning devananda | 15:00 |
max_lobur | :) | 15:00 |
lucasagomes | devananda, awesome, ok I gotta read it | 15:00 |
devananda | NobodyCam: conf code? | 15:00 |
max_lobur | I'm almost at the end of that doc :) | 15:00
lucasagomes | NobodyCam, btw I'm adding a bunch of tests to the driver, also tested the patch-set #16 (w/ volume driver) | 15:00 |
lucasagomes | it works | 15:00 |
* lucasagomes having a bad time with the tests, I suck on mocking stuff | 15:01 | |
NobodyCam | lucasagomes: awesome | 15:01 |
NobodyCam | :) | 15:01 |
devananda | max_lobur: tldr - use greenthreads, not futures. my comment on the review is otherwise still correct | 15:02 |
NobodyCam | http://www.youtube.com/watch?v=fGGWrJp4JHA&feature=kp | 15:02 |
devananda | max_lobur: i'll have a mock up for you in a bit | 15:02 |
max_lobur | yea, I've seen it | 15:02 |
max_lobur | I got general concept | 15:02 |
devananda | k | 15:02 |
devananda | max_lobur: while testing this out | 15:03 |
devananda | granted, it was very late last night | 15:03 |
devananda | but I think i discovered a bug in task_manager | 15:03 |
devananda | they aren't actually locking!! :( | 15:03 |
devananda | cause i can spin up >1 thread working on the same node at the same time | 15:04 |
lucasagomes | hmm | 15:04 |
openstackgerrit | Devananda van der Veen proposed a change to openstack/ironic: MOCK of a greenthread pool https://review.openstack.org/71281 | 15:04 |
devananda | that ^ | 15:05 |
*** bashok has joined #openstack-ironic | 15:07 | |
openstackgerrit | Yuriy Zveryanskyy proposed a change to openstack/ironic: Add ability to break TaskManager locks via REST API https://review.openstack.org/71212 | 15:08 |
*** rsacharya has joined #openstack-ironic | 15:12 | |
max_lobur | devananda, have you found a bug in my race patch, or in your greenpool mockup? | 15:15
devananda | max_lobur: I *think* it's a problem in neither | 15:16 |
max_lobur | I just posted a comment to the mockup https://review.openstack.org/#/c/71281 | 15:16 |
devananda | max_lobur: well. there's a problem with your futures patch. but there's ALSO a problem with the task_manager locks | 15:17 |
*** matsuhashi has quit IRC | 15:17 | |
max_lobur | also please take a look https://review.openstack.org/#/c/69135/1/ironic/tests/conductor/test_manager.py | 15:17 |
max_lobur | L285 - 287 | 15:18 |
*** matsuhashi has joined #openstack-ironic | 15:18 | |
*** aignatov is now known as aignatov_ | 15:28 | |
*** coolsvap has joined #openstack-ironic | 15:29 | |
max_lobur | devananda, lucasagomes I have to go for now, I'll join IRC in a ~4 hours | 15:32 |
lucasagomes | ack | 15:32 |
* lucasagomes is in a call | 15:32 | |
devananda | max_lobur: Cannot call refresh on orphaned Node object | 15:33 |
*** matsuhashi has quit IRC | 15:34 | |
max_lobur | devananda, do you mean the test is broken / does nothing ? | 15:37 |
openstackgerrit | Ghe Rivero proposed a change to openstack/ironic: Allow to tear-down a node waiting to be deployed https://review.openstack.org/71297 | 15:37 |
devananda | max_lobur: when i run my patch, adding a node.refresh() this is what i get in the greenthread | 15:38 |
max_lobur | ahh | 15:38 |
devananda | max_lobur: i believe your test isn't really testing node.refresh() or a separate thread | 15:38
max_lobur | i'm using db_node.refresh(self.context) | 15:38 |
max_lobur | you're getting OrphanedObjectError | 15:38 |
max_lobur | https://github.com/openstack/ironic/blob/2ce7c44bd16fb65719e554bd66ff775a2e8c6332/ironic/objects/base.py#L116 | 15:39 |
max_lobur | the only place where it raised | 15:39 |
max_lobur | this means the session stored within the object is not valid | 15:39 |
devananda | max_lobur: ah, right. it's early - i'm still making sleepy mistakes | 15:39 |
max_lobur | therefore we usually pass session to refresh | 15:40 |
max_lobur | as arg | 15:40 |
devananda | max_lobur: so if i explicitly pass session to refresh, i get a different one now | 15:40 |
max_lobur | :) | 15:40 |
devananda | DBError: can't compare offset-naive and offset-aware datetimes | 15:40 |
max_lobur | heh | 15:40 |
max_lobur | haven't seen that | 15:40 |
devananda | coming from session.commit | 15:40 |
devananda | in the greenthread | 15:40 |
devananda | max_lobur: anyway, my poorly-written mockup aside, the point of that long etherpad is that, long term, we should indeed be returning a tracking token for every user request | 15:41 |
max_lobur | devananda, have you seen my comment https://review.openstack.org/#/c/71281/1 | 15:42 |
max_lobur | to your patch | 15:42 |
max_lobur | because it won't work as it is | 15:42 |
devananda | and using a third (or external) service to store those requests... but short term | 15:42 |
devananda | short term we need a thread pool with limited # of workers, not spawning a new thread for every request with no upper bound | 15:43 |
max_lobur | devananda, I agree | 15:43 |
devananda | max_lobur: gah! yes - you're right. it was after midnight when I was testing that. | 15:43 |
max_lobur | I'll take a look if we can create a pool using futures | 15:44 |
devananda | lemme grab the manual locking from your patch. that explains why my test isn't working :) | 15:44 |
max_lobur | honestly I like the futures API more than greenthread | 15:44
devananda | max_lobur: futures, afaict, doesn't support this. why not use eventlet.greenpool | 15:44 |
max_lobur | at least they have done_callback that we need so much | 15:44
max_lobur | but | 15:44 |
max_lobur | we can have a pool of submitted futures | 15:44 |
max_lobur | we will have it anyway | 15:44 |
max_lobur | because we want to send signal to cancel them | 15:45 |
devananda | we already pass the task object into the job, so the jobs can release the task when they finish | 15:45
devananda | rather than needing a separate wrapper that releases the task | 15:45 |
max_lobur | well | 15:45 |
devananda | hm | 15:45 |
devananda | edit edit edit :) | 15:46 |
devananda | we need a wrapper to handle exception cases | 15:46 |
max_lobur | in that way we'll need to check for exception | 15:46 |
max_lobur | yep | 15:46 |
devananda | but still, i dont see why that doesn't work with greenthread api | 15:46 |
max_lobur | because the point in the task that releases the lock may not be hit at all | 15:46
*** rsacharya has quit IRC | 15:46 | |
max_lobur | done_callback guarantees that it will be hit | 15:47 |
devananda | try: finally: | 15:47 |
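devananda's "try: finally:" answer is the standard way to guarantee the release point is always hit: run the job inside try/finally so the lock comes off even when the job raises, with no done_callback needed. A minimal sketch, assuming a plain `threading.Lock` as a stand-in for the TaskManager lock (names here are illustrative, not Ironic's real module):

```python
import threading

# Sketch: release the (stand-in) TaskManager lock in finally, so the
# node is never left locked when a job raises partway through.
node_lock = threading.Lock()

def run_job(job, *args):
    node_lock.acquire()          # TaskManager lock, conceptually
    try:
        return job(*args)
    finally:
        node_lock.release()      # guaranteed, even if job() raises

def bad_job():
    raise RuntimeError("deploy failed")

try:
    run_job(bad_job)
except RuntimeError:
    pass
assert not node_lock.locked()    # released despite the exception
```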
max_lobur | well, maybe, I need to check | 15:48 |
max_lobur | k, I really gotta go, will back in ~ 4 hrs | 15:48 |
max_lobur | see ya! | 15:48 |
*** max_lobur is now known as max_lobur_afk | 15:49 | |
NobodyCam | devananda: lucasagomes and /me have +2'd https://review.openstack.org/#/c/71211/ I did not approve just so you could take a quick peek | 15:52
* devananda looks | 15:52 | |
lucasagomes | yea that's a simple one I will give a try/review the one actually breaking the lock later | 15:53 |
devananda | yuriyz: one nit - the commit message says closes-bug, shouldn't it be partial-bug? | 15:55 |
*** igor__ has quit IRC | 15:57 | |
NobodyCam | wow TripleO jobs in zuul for nearly 64 hours | 16:00 |
*** igor_ has joined #openstack-ironic | 16:03 | |
NobodyCam | brb | 16:04 |
*** igor_ has quit IRC | 16:08 | |
lucasagomes | devananda, NobodyCam great news! | 16:12 |
lucasagomes | I will see you guys in Sunnyvale :) | 16:13 |
NobodyCam | woo hoo | 16:13 |
NobodyCam | oh how about romcheg and max_lobur_afk ? | 16:13 |
romcheg | NobodyCam: I don't know yet | 16:13 |
rloo | lucasagomes: sweet! | 16:13 |
NobodyCam | morning rloo :) | 16:14 |
lucasagomes | rloo, morning :D | 16:14 |
lucasagomes | rloo, see ya there as well? | 16:14 |
romcheg | Please remind me, how much time do I have before the event? | 16:14 |
rloo | lucasagomes: question about https://review.openstack.org/#/c/71211/. Should 'reservation' be added to sample()? | 16:14 |
rloo | morning NobodyCam! | 16:14 |
NobodyCam | its march 3 thru 7th | 16:14 |
lucasagomes | rloo, good catch! Yes, now that it's part of the Node resource | 16:14 |
rloo | lucasagomes: am thinking about it. i can be there for part of it and am not sure that is worth it. was going to ask. | 16:15 |
NobodyCam | romcheg: you'll be core by then, so you have to go :-p | 16:15 |
romcheg | NobodyCam: I still did not manage to get a visa. I will poke management team here to speed up the process | 16:15 |
devananda | lucasagomes: awesome! | 16:15 |
lucasagomes | rloo, I see, I've been to the last mid-cycle in seattle | 16:16 |
lucasagomes | I can say it was very useful | 16:16 |
NobodyCam | :) | 16:16 |
rloo | What is done in a mid-cycle? 5 days seems like a lot. | 16:16 |
lucasagomes | it was more about tripleo, but we managed to talk about ironic | 16:16 |
lucasagomes | the cli started there :) | 16:16 |
devananda | we should have more ironic folks this time -- last time it was just, what, 4 of us? | 16:16 |
lucasagomes | rloo, talks, making decisions, fixing a lot of bugs | 16:16 |
devananda | also the project is much further along | 16:16 |
devananda | we have real things that work now! :-D | 16:17 |
rloo | ahh. hmm. i can be there wed-fri, but I am worried that 'most' of the good stuff will be mon-tues. | 16:17 |
lucasagomes | devananda, yea last time was: me, you, NobodyCam and martyntaylor | 16:17 |
rloo | wow, how ironic has grown ;) | 16:17 |
devananda | rloo: i suspect friday, not much will happen | 16:17 |
rloo | devananda. so it is 'worth' being there wed, thurs then? | 16:18 |
devananda | rloo: missing mon-tue will hurt, but wed-thu should be productive still | 16:18 |
NobodyCam | rloo: yes! | 16:18 |
devananda | hurt in the sense of, we'll probbably change everything on tuesday ;) | 16:18 |
NobodyCam | I think so | 16:18 |
NobodyCam | lol | 16:18 |
lucasagomes | hah | 16:18 |
rloo | ha ha. ok. I'll look into it then. Seems like Tues is the day! | 16:20 |
devananda | sweet | 16:21 |
devananda | got greenthreads to do what i wanted | 16:21 |
*** jcooley_ has joined #openstack-ironic | 16:21 | |
lucasagomes | nice! | 16:21 |
NobodyCam | woo | 16:22 |
*** bashok has quit IRC | 16:22 | |
openstackgerrit | Devananda van der Veen proposed a change to openstack/ironic: DO NOT MERGE: mock of a greenthread pool https://review.openstack.org/71281 | 16:23 |
devananda | max_lobur_afk: borrowed your task_manager changes and yes, that's all I was missing. see ^ for a way to use greenthread api to clean up. It's still just a mock, probably has some holes that you'll see ... :) | 16:24 |
devananda | lucasagomes: that fixes the race condition | 16:25 |
devananda | lucasagomes: without a lot of changes | 16:25 |
devananda | https://review.openstack.org/#/c/71281/2/ironic/conductor/manager.py | 16:26 |
lucasagomes | ah sweet /me will take a look | 16:26 |
*** jcooley_ has quit IRC | 16:28 | |
openstackgerrit | Jarrod Johnson proposed a change to stackforge/pyghmi: Make pyghmi tolerate arbitrary threading models https://review.openstack.org/71174 | 16:31 |
devananda | jbjohnso: hah, have you been watching our threading discussions? :) | 16:32 |
*** rloo has quit IRC | 16:33 | |
*** rloo has joined #openstack-ironic | 16:33 | |
*** jcooley_ has joined #openstack-ironic | 16:33 | |
*** rloo has quit IRC | 16:34 | |
*** rloo has joined #openstack-ironic | 16:35 | |
openstackgerrit | Jarrod Johnson proposed a change to stackforge/pyghmi: Make pyghmi tolerate arbitrary threading models https://review.openstack.org/71174 | 16:38 |
*** Alexei_987 has quit IRC | 16:40 | |
rloo | Hi NobodyCam: https://bugs.launchpad.net/ironic/+bug/1261915. icbw, and it was awhile ago, but I thought you had mentioned something about this? | 16:50 |
* NobodyCam looks | 16:51 | |
NobodyCam | ahh yes | 16:52 |
NobodyCam | rloo: I came up with this https://review.openstack.org/#/c/51328/15/nova/virt/ironic/ironic_driver_fields.py | 16:53 |
NobodyCam | for the nova driver | 16:53 |
NobodyCam | but thought it would be really cool if there was a way to query for what is required | 16:54 |
NobodyCam | so we didn't have to maintain a static mapping file | 16:54 |
rloo | NobodyCam: ah, i thought you had done something. | 16:55 |
NobodyCam | lol if you count a static mapping file well then... :-p | 16:55 |
NobodyCam | I see by the the bouncing bubbie that it is BBT... so i'll BRB :) | 16:57 |
rloo | NobodyCam. I should take a look at your ironic driver... seems like if we provide a way of querying, there's going to be *some* sort of static list/map. | 16:57 |
NobodyCam | rloo: it is our driver :) | 16:58 |
rloo | oh yeah, sorry NobodyCam. Our driver! ha ha. | 16:59 |
*** athomas has joined #openstack-ironic | 16:59 | |
*** rloo has quit IRC | 17:03 | |
*** rloo has joined #openstack-ironic | 17:03 | |
*** igor_ has joined #openstack-ironic | 17:04 | |
NobodyCam | hehehehe L( | 17:05 |
NobodyCam | s/L(/:)/ | 17:05 |
lucasagomes | I've a mock on the fixtures, does someone know how to disable that mock for a particular test? | 17:08 |
lucasagomes | I tried to stop() but that doesn't work | 17:08 |
lucasagomes | I mean it works, but it doesn't "remove" the mock | 17:09 |
*** igor_ has quit IRC | 17:09 | |
*** ndipanov is now known as ndipanov_gone | 17:09 | |
*** ifarkas_ has quit IRC | 17:10 | |
NobodyCam | lucasagomes: https://code.google.com/p/googlemock/wiki/FrequentlyAskedQuestions#How_can_I_delete_the_mock_function's_argument_in_an_action? | 17:13 |
NobodyCam | wait nope thats not it | 17:14 |
lucasagomes | hah yea it's google mock lib in c++ | 17:14 |
rloo | lucasagomes. did you try setting the variable to the mock, to None? | 17:18 |
lucasagomes | rloo, not really lemme try | 17:19 |
rloo | lucasagomes, after stopping. | 17:19 |
lucasagomes | # stop _get_client mock | 17:19 |
lucasagomes | self.mock_cli.stop() | 17:19 |
lucasagomes | self.mock_cli = None | 17:19 |
lucasagomes | same error :( | 17:19 |
lucasagomes | urgh! | 17:19 |
rloo | hmm. I think that's how I've disabled it before. what error? | 17:20 |
rloo | or maybe I thought it was disabled... | 17:20 |
lucasagomes | this _get_client | 17:21 |
lucasagomes | calls an ironic_client.get_client | 17:21 |
*** hemna_ is now known as hemna | 17:22 | |
lucasagomes | so in the test of the _get_client method I have an assert testing if ironic_client.get_client was called with the right parameters | 17:22 |
lucasagomes | AssertionError: Expected to be called once. Called 0 times. | 17:22 |
*** jcooley_ has quit IRC | 17:22 | |
rloo | so ironic_client.get_client is mocked | 17:22 |
NobodyCam | lucasagomes: can you .andreturn=ironic_client.get_client> | 17:23 |
*** jcooley_ has joined #openstack-ironic | 17:23 | |
devananda | lucasagomes: make a different test class | 17:26 |
devananda | lucasagomes: so the mock during __init__ isn't there | 17:26 |
*** jistr has quit IRC | 17:26 | |
lucasagomes | devananda, heh, that would work but sounds a bit overkill :P | 17:26 |
devananda | lucasagomes: it sounds like most tests want the mock in place, but you also want a test to make sure that the thing you're mocking is going to work | 17:26 |
lucasagomes | devananda, exactly, only 2 tests I dont | 17:27 |
devananda | so one class has one test: test the thing. the other class has lots of tests: assume the first thing works, test the rest | 17:27 |
lucasagomes | need that mock | 17:27 |
lucasagomes | I c | 17:27 |
lucasagomes | yea that would work | 17:27 |
devananda | not overkill. it's easier to understand than starting and stopping the mock | 17:27 |
devananda | but put it in the same file/module :) | 17:28 |
lucasagomes | ack | 17:29 |
lucasagomes | thanks :D | 17:29 |
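devananda's two-class suggestion above (same module, only one class mocks during setUp) might look like this sketch; `_get_client` is an illustrative stand-in, not the real nova-ironic driver code:

```python
import unittest
from unittest import mock


def _get_client():
    # stand-in for the helper that wraps ironic_client.get_client
    return "real-client"


class GetClientTestCase(unittest.TestCase):
    """One small class with no mock: test the thing itself."""

    def test_get_client(self):
        self.assertEqual(_get_client(), "real-client")


class DriverTestCase(unittest.TestCase):
    """The other class: assume _get_client works, mock it everywhere."""

    def setUp(self):
        patcher = mock.patch(__name__ + "._get_client",
                             return_value="fake-client")
        self.mock_cli = patcher.start()
        self.addCleanup(patcher.stop)  # no manual stop()/None juggling

    def test_driver_uses_mock(self):
        self.assertEqual(_get_client(), "fake-client")
        self.mock_cli.assert_called_once_with()
```

With `addCleanup(patcher.stop)` the patch is undone after each test automatically, which avoids the "stopped but still mocked" confusion discussed above.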
*** athomas has quit IRC | 17:33 | |
devananda | going to be afk for a few hours | 17:33 |
devananda | lucasagomes: your devstack walkthrough still accurate? | 17:33 |
*** jcooley_ has quit IRC | 17:34 | |
lucasagomes | devananda, I think so, I've been using it to test the driver | 17:34 |
devananda | great | 17:34 |
devananda | i'll give that a shot this afternoon and update the pad if needed | 17:34 |
*** jcooley_ has joined #openstack-ironic | 17:34 | |
lucasagomes | devananda, ack, thanks! | 17:34 |
*** tatyana has quit IRC | 17:35 | |
devananda | gonna fix up my neutron patch -- would love to see that land soon :) | 17:36 |
devananda | *then go afk :) | 17:37 |
NobodyCam | :) | 17:37 |
lucasagomes | +2!!! | 17:37 |
openstackgerrit | Devananda van der Veen proposed a change to openstack/ironic: Implement _update_neutron in PXE driver https://review.openstack.org/70468 | 17:38 |
openstackgerrit | Devananda van der Veen proposed a change to openstack/ironic: Fix log and test for NeutronAPI.update_port_dhcp_opts https://review.openstack.org/71094 | 17:38 |
*** Haomeng has quit IRC | 17:38 | |
*** Haomeng has joined #openstack-ironic | 17:39 | |
lucasagomes | a-ha found a way (kinda tricky) to disable that mock... I will leave it like this but we can put in another test class if that looks too tricky | 17:41 |
devananda | https://review.openstack.org/#/c/64711/3 and https://review.openstack.org/#/c/70267/1 both need reviews too | 17:41 |
*** jcooley_ has quit IRC | 17:41 | |
*** jcooley_ has joined #openstack-ironic | 17:42 | |
devananda | ok - really AFK now. | 17:44 |
*** matty_dubs is now known as matty_dubs|lunch | 17:53 | |
lucasagomes | NobodyCam, I will submit a new patchset adding the tests (not completed yet, tomorrow I will add more) | 17:55 |
NobodyCam | lucasagomes: awesome!!!!!! | 17:55 |
lucasagomes | yea not yet, but we are getting there :D | 17:56 |
NobodyCam | :) | 17:56 |
lucasagomes | https://review.openstack.org/#/c/51328/17/nova/tests/virt/ironic/test_driver.py | 17:57 |
*** aignatov_ is now known as aignatov | 17:57 | |
NobodyCam | lucasagomes: that is looking great! | 18:00 |
NobodyCam | TY for all the assistence! | 18:00 |
*** derekh has quit IRC | 18:00 | |
lucasagomes | NobodyCam, cheers :) ah np! | 18:00 |
lucasagomes | let's get it ready for review and merged soon! | 18:01 |
NobodyCam | ya | 18:01 |
NobodyCam | ahhh I have a question | 18:01 |
lucasagomes | sure | 18:01 |
openstackgerrit | Devananda van der Veen proposed a change to openstack/ironic: Implement a multiplexed VendorPassthru example https://review.openstack.org/70863 | 18:02 |
NobodyCam | question is about nova service state | 18:02 |
NobodyCam | https://github.com/openstack/nova/blob/master/nova/scheduler/host_manager.py#L448-L451 | 18:02 |
NobodyCam | is that a tuple as a dict key | 18:03 |
lucasagomes | lemme take a look | 18:04 |
lucasagomes | NobodyCam, it looks like it's a tuple yea | 18:05 |
*** igor__ has joined #openstack-ironic | 18:05 | |
NobodyCam | ya, Now /me needs to learn how to set that | 18:06 |
*** igor__ has quit IRC | 18:10 | |
lucasagomes | NobodyCam, https://review.openstack.org/#/c/13920/25/nova/scheduler/host_manager.py L341 | 18:10 |
lucasagomes | looking if it has changed | 18:11 |
lucasagomes | https://github.com/openstack/nova/blob/master/nova/tests/scheduler/fakes.py#L162-L175 | 18:13 |
* lucasagomes is confused | 18:13 | |
lucasagomes | seems there's no format to be honest | 18:13 |
NobodyCam | ya I tried : http://paste.openstack.org/show/5D37KvDYkOuKwV8OplgX/ | 18:15 |
NobodyCam | working scheduler tests | 18:15 |
lucasagomes | right | 18:16 |
lucasagomes | we have our own hostmanager right? | 18:16 |
NobodyCam | si | 18:16 |
lucasagomes | I think it will be easier to figure out things if we know what the self.host_state_map looks like | 18:18 |
lucasagomes | that method returns an iterator for that data structure (which is a dict) | 18:19 |
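A tuple is hashable, so a `(host, node)` pair works directly as a dict key; a minimal sketch of a `host_state_map` keyed that way (values here are plain dicts, not nova's real HostState objects):

```python
# simplified sketch of nova HostManager.host_state_map keyed by
# (host, node) tuples -- values are illustrative dicts only
host_state_map = {}


def update_state(host, node, state):
    # a tuple is hashable, so (host, node) can key the dict directly
    host_state_map[(host, node)] = state


update_state("compute-1", "node-a", {"free_ram_mb": 512})
update_state("compute-1", "node-b", {"free_ram_mb": 1024})

# get_all_host_states effectively iterates this dict's values
for (host, node), state in host_state_map.items():
    print(host, node, state["free_ram_mb"])

print(("compute-1", "node-a") in host_state_map)  # → True
```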
*** jcooley_ has quit IRC | 18:19 | |
NobodyCam | seems I need to do a quick walkies.. brb | 18:19 |
lucasagomes | right and I'm done for today as well | 18:20 |
lucasagomes | will prepare something for dinner | 18:20 |
lucasagomes | have a good night everybody | 18:20 |
lucasagomes | night devananda NobodyCam rloo | 18:21 |
*** lucasagomes has quit IRC | 18:21 | |
*** jcooley_ has joined #openstack-ironic | 18:21 | |
*** jcooley_ has quit IRC | 18:23 | |
*** jcooley_ has joined #openstack-ironic | 18:23 | |
*** aignatov is now known as aignatov_ | 18:24 | |
*** aignatov_ is now known as aignatov | 18:30 | |
jbjohnso | devananda, actually, no | 18:31 |
jbjohnso | devananda, I've not been paying much attention | 18:31 |
openstackgerrit | Ghe Rivero proposed a change to openstack/ironic: Set boot device to PXE when deploying https://review.openstack.org/71332 | 18:32 |
jbjohnso | I have been doing profiling and being disappointed that while my threaded code is much easier to follow | 18:32 |
jbjohnso | it is not nearly as nice on cpu time... | 18:32 |
*** digambar has joined #openstack-ironic | 18:32 | |
jbjohnso | the commit was because all pyghmi stuff did have to be kept on a thread | 18:32 |
jbjohnso | if you patched it with eventlet.green stuff | 18:33 |
jbjohnso | because eventlet would raise a RuntimeError because multiple things touched the same filehandle | 18:33 |
jbjohnso | which is something only a crazy person would do | 18:33 |
jbjohnso | so I pulled out the io touching things into a distinct thread | 18:33 |
*** zul has quit IRC | 18:33 | |
digambar | Hey | 18:34 |
digambar | updated the libvirt to 1.0.2 | 18:34 |
jbjohnso | when I have a bunch of servers doing console in a process, that process takes like 15% of one cpu | 18:34 |
jbjohnso | a bunch being defined as about 7 | 18:35 |
jbjohnso | but it seems to not be changing much with scale... | 18:35 |
digambar | Hello NobodyCam | 18:35 |
NobodyCam | hey digambar :) | 18:36 |
digambar | Done with libvirt 1.0.3 | 18:36 |
digambar | now wants to start work on ironic | 18:36 |
digambar | please suggest me any bug for ironic | 18:37 |
jbjohnso | one of my next changes will be to adjust the sockets to be pooled, because I project that the socket sharing for SOL will fall apart at default rmem at about 800 servers or so | 18:37 |
*** zul has joined #openstack-ironic | 18:37 | |
jbjohnso | tuning servers per client socket based on the socket buffer size would be nice and let me delete some code to nicely balance that | 18:38 |
digambar | Hello NobodyCam | 18:40 |
digambar | please suggest me from where should I start for dev in ironic | 18:40 |
digambar | so that I can start immediately on the bug | 18:41 |
NobodyCam | digambar: one sec | 18:41 |
digambar | okk | 18:41 |
*** matty_dubs|lunch is now known as matty_dubs | 18:44 | |
*** aignatov is now known as aignatov_ | 18:46 | |
NobodyCam | digambar: how about this one : https://bugs.launchpad.net/ironic/+bug/1276393?? | 18:46 |
digambar | let me check | 18:46 |
digambar | cool | 18:49 |
digambar | let me start with this one | 18:49 |
*** lynxman has quit IRC | 18:51 | |
*** lynxman_ has joined #openstack-ironic | 18:51 | |
*** lynxman_ is now known as lynxman | 18:51 | |
*** lynxman has quit IRC | 18:51 | |
*** lynxman has joined #openstack-ironic | 18:51 | |
NobodyCam | awesome digambar :) please assign your self to the bug | 18:51 |
digambar | yep | 18:53 |
*** anniec has joined #openstack-ironic | 18:55 | |
*** mdurnosvistov_ has joined #openstack-ironic | 18:56 | |
digambar | ironic command not displayed | 19:01 |
digambar | why so | 19:01 |
*** hemna has quit IRC | 19:01 | |
*** anniec has quit IRC | 19:02 | |
digambar | root@openstack:/opt/stack# ironic ironic: command not found | 19:03 |
*** rloo has quit IRC | 19:03 | |
digambar | . | 19:03 |
digambar | Is there something I am missing here | 19:03 |
*** rloo has joined #openstack-ironic | 19:03 | |
*** rloo has quit IRC | 19:04 | |
*** harlowja_away is now known as harlowja | 19:05 | |
digambar | Hey NobodyCam | 19:05 |
NobodyCam | hey digambar sorry was on another window | 19:05 |
digambar | NP | 19:05 |
digambar | but ironic command is not found | 19:05 |
digambar | though I have done the source openrc | 19:06 |
*** igor__ has joined #openstack-ironic | 19:06 | |
NobodyCam | how did you build your stack devstack or diskimage builder? | 19:06 |
digambar | is there anything I am missing here | 19:06 |
digambar | devstack | 19:06 |
*** jcooley_ has quit IRC | 19:07 | |
NobodyCam | did you install ironic? | 19:07 |
openstackgerrit | Jarrod Johnson proposed a change to stackforge/pyghmi: Make pyghmi tolerate arbitrary threading models https://review.openstack.org/71174 | 19:07 |
digambar | yes | 19:08 |
*** jcooley_ has joined #openstack-ironic | 19:08 | |
NobodyCam | is it in your path? | 19:08 |
digambar | yes | 19:09 |
digambar | I am seeing both services ir-api & ir-cond running @ screen | 19:09 |
digambar | I am able to do nova list | 19:09 |
digambar | but not ironic related command | 19:09 |
NobodyCam | what do you get with 'which ironic' | 19:10 |
digambar | nothing | 19:10 |
digambar | no output | 19:10 |
*** igor__ has quit IRC | 19:10 | |
digambar | 9$ ir-api* 10$ ir-cond | 19:11 |
digambar | see above service found @ screen also | 19:11 |
digambar | with running | 19:11 |
NobodyCam | try find / -name ironic | 19:11 |
digambar | Got it | 19:12 |
*** rloo has joined #openstack-ironic | 19:12 | |
digambar | python-ironicclient is not installed | 19:13 |
digambar | with devstack | 19:13 |
NobodyCam | ahh | 19:13 |
openstackgerrit | A change was merged to stackforge/pyghmi: Make pyghmi tolerate arbitrary threading models https://review.openstack.org/71174 | 19:13 |
NobodyCam | that would do it | 19:13 |
NobodyCam | :-p | 19:13 |
digambar | from where should I download it | 19:14 |
digambar | any link | 19:14 |
NobodyCam | pip install python-ironicclient | 19:14 |
digambar | okk | 19:15 |
*** coolsvap is now known as coolsvap_away | 19:15 | |
*** rloo_ has joined #openstack-ironic | 19:16 | |
*** rloo_ has quit IRC | 19:16 | |
*** rloo_ has joined #openstack-ironic | 19:17 | |
*** rloo has quit IRC | 19:17 | |
digambar | Hey | 19:24 |
digambar | while running command | 19:24 |
digambar | PATH=$PATH:../tripleo-incubator/scripts/ ./tripleo-incubator/scripts/create-nodes 1 512 10 amd64 1 | 19:24 |
digambar | getting error | 19:25 |
digambar | virsh: /usr/lib/libvirt-lxc.so.0: version `LIBVIRT_LXC_1.0.4' not found (required by virsh) | 19:25 |
digambar | virsh: /usr/lib/libvirt.so.0: version `LIBVIRT_1.1.3' not found (required by virsh) | 19:25 |
digambar | Hi NobodyCam | 19:26 |
NobodyCam | Hey digambar | 19:26 |
digambar | while creating mac it gives me error | 19:26 |
digambar | virsh: /usr/lib/libvirt.so.0: version `LIBVIRT_1.1.3' not found (required by virsh) | 19:26 |
NobodyCam | does something like 'virsh list' work form the command line | 19:26 |
NobodyCam | ? | 19:26 |
digambar | yes | 19:26 |
digambar | when I run | 19:27 |
digambar | PATH=$PATH:../tripleo-incubator/scripts/ ./tripleo-incubator/scripts/create-nodes 1 512 10 amd64 1 | 19:27 |
digambar | in the last once it going to create mac | 19:27 |
digambar | gives me error | 19:27 |
digambar | virsh: /usr/lib/libvirt.so.0: version `LIBVIRT_1.1.3' not found (required by virsh) | 19:27 |
* NobodyCam looks at create nodes script | 19:28 | |
digambar | root@openstack:/opt/stack# virsh --version 1.0.3 | 19:29 |
digambar | root@openstack:/opt/stack# virsh list | 19:29 |
digambar | Id Name State ---------------------------------------------------- | 19:30 |
NobodyCam | digambar: what is your base os? | 19:30 |
digambar | ubuntu | 19:30 |
digambar | ubuntu 12.04 | 19:30 |
digambar | 32bit | 19:30 |
NobodyCam | digambar: Have you ever used http://paste.openstack.org ? | 19:30 |
digambar | no | 19:31 |
NobodyCam | ahh ^^^ is a much better way to send multi-line posts | 19:31 |
digambar | okk | 19:31 |
NobodyCam | paste to url and just put the link in IRC | 19:31 |
*** igor___ has joined #openstack-ironic | 19:32 | |
digambar | I'll do it from next time | 19:32 |
NobodyCam | :) | 19:32 |
digambar | but after the upgrade of the libvirt package I am getting this error | 19:32 |
digambar | I haven't found the reason behind this | 19:33 |
NobodyCam | ya some thing in libvirt seems broken | 19:33 |
NobodyCam | find / -name libvirt.so.0 | 19:34 |
NobodyCam | is that file on the system? | 19:34 |
digambar | yes | 19:34 |
digambar | its under /usr/list | 19:34 |
digambar | hey | 19:35 |
digambar | its under /usr/lib | 19:35 |
digambar | path /usr/lib/libvirt.so.0 | 19:35 |
NobodyCam | how about ownership and chmod settings? | 19:36 |
*** igor___ has quit IRC | 19:37 | |
digambar | lrwxrwxrwx 1 root root 19 Feb 5 09:31 /usr/lib/libvirt.so.0 -> libvirt.so.0.1000.3 | 19:37 |
digambar | root & all the permission it has | 19:38 |
NobodyCam | is libvirt.so.0.1000.3 there too.. that is just a sym link | 19:39 |
*** rloo_ has quit IRC | 19:39 | |
digambar | yes | 19:39 |
*** rloo has joined #openstack-ironic | 19:40 | |
*** rloo has quit IRC | 19:40 | |
*** rloo has joined #openstack-ironic | 19:40 | |
*** rloo has quit IRC | 19:41 | |
devananda | back for a bit | 19:46 |
NobodyCam | wB devananda how was the run | 19:46 |
*** rloo has joined #openstack-ironic | 19:47 | |
*** rloo has joined #openstack-ironic | 19:48 | |
*** rloo has joined #openstack-ironic | 19:49 | |
devananda | good. it's cold out there! | 19:49 |
*** rloo has quit IRC | 19:49 | |
*** rloo has joined #openstack-ironic | 19:50 | |
*** rloo has quit IRC | 19:52 | |
*** rloo has joined #openstack-ironic | 19:53 | |
*** rloo has quit IRC | 19:53 | |
*** igor___ has joined #openstack-ironic | 20:03 | |
*** igor____ has joined #openstack-ironic | 20:05 | |
*** igor___ has quit IRC | 20:07 | |
*** martyntaylor has left #openstack-ironic | 20:12 | |
*** digambar has quit IRC | 20:14 | |
*** romcheg has left #openstack-ironic | 20:32 | |
*** rloo has joined #openstack-ironic | 20:52 | |
*** max_lobur has joined #openstack-ironic | 20:58 | |
max_lobur | finally back | 20:59 |
NobodyCam | WB max_lobur | 20:59 |
max_lobur | have I missed something important? (not able to read scrollback now:( ) | 20:59 |
max_lobur | thanks :) | 20:59 |
NobodyCam | nope :-p | 21:00 |
max_lobur | :) | 21:00 |
NobodyCam | oh also http://eavesdrop.openstack.org/irclogs/ | 21:01 |
NobodyCam | :-p in case you ever need | 21:01 |
*** rloo has quit IRC | 21:04 | |
*** rloo has joined #openstack-ironic | 21:05 | |
max_lobur | ah cool | 21:07 |
max_lobur | I forgot about logging | 21:07 |
NobodyCam | :-p | 21:08 |
devananda | construction noise has gotten too loud after lunch ... i'm heading to a coffee shop to work. bbiab | 21:09 |
NobodyCam | enjoy the GOOD coffe | 21:09 |
NobodyCam | :-p | 21:09 |
*** rloo has quit IRC | 21:12 | |
*** rloo has joined #openstack-ironic | 21:13 | |
*** rloo has quit IRC | 21:13 | |
*** rloo has joined #openstack-ironic | 21:14 | |
*** mrda_away is now known as mrda | 21:17 | |
* mrda is working from a cafe this morning | 21:17 | |
NobodyCam | :) | 21:19 |
NobodyCam | nice /me feels left out | 21:19 |
*** rloo has quit IRC | 21:22 | |
*** rloo has joined #openstack-ironic | 21:22 | |
NobodyCam | brb | 21:23 |
*** blamar has joined #openstack-ironic | 21:25 | |
*** blamar has quit IRC | 21:26 | |
*** blamar has joined #openstack-ironic | 21:26 | |
*** romcheg has joined #openstack-ironic | 21:29 | |
*** rloo has quit IRC | 21:32 | |
*** rloo has joined #openstack-ironic | 21:32 | |
*** rloo has quit IRC | 21:33 | |
*** rloo has joined #openstack-ironic | 21:34 | |
*** rloo has quit IRC | 21:49 | |
*** datajerk1 has quit IRC | 22:04 | |
*** jdob has quit IRC | 22:07 | |
*** jbjohnso has quit IRC | 22:07 | |
*** mrda is now known as mrda_away | 22:18 | |
*** matty_dubs is now known as matty_dubs|gone | 22:25 | |
max_lobur | devananda: I added my comment to https://review.openstack.org/#/c/71281/2 (race using green pool) | 22:29 |
* devananda looks | 22:30 | |
devananda | max_lobur: so, there's one important difference between futures and greenpool | 22:32 |
*** mdurnosvistov_ has quit IRC | 22:32 | |
devananda | max_lobur: http://eventlet.net/doc/modules/greenpool.html#eventlet.greenpool.GreenPool.spawn | 22:32 |
devananda | max_lobur: "If the pool is currently at capacity, spawn will block until one of the running greenthreads completes its task and frees up a slot. | 22:32 |
max_lobur | ahh | 22:33 |
devananda | with futures, the max_threads setting determines # of workers that run in parallel, but there is unlimited # of jobs that can queue up | 22:33 |
max_lobur | so it won't queue too much | 22:33 |
devananda | right | 22:33 |
devananda | we'll actually get blocking if the worker pool is at capacity | 22:33 |
devananda | which we want | 22:33 |
max_lobur | yea I see now | 22:34 |
max_lobur | yes, makes sense | 22:34 |
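eventlet isn't assumed available here, so this stdlib sketch imitates the difference devananda quotes: `GreenPool.spawn` blocks when the pool is at capacity, while a futures executor accepts (queues) any number of submissions — OS threads stand in for greenthreads:

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor


class BlockingPool:
    """Stdlib stand-in for eventlet.greenpool.GreenPool: spawn()
    blocks until a slot frees up when the pool is at capacity."""

    def __init__(self, size):
        self._slots = threading.Semaphore(size)

    def spawn(self, fn, *args):
        self._slots.acquire()              # blocks if no slot is free
        def run():
            try:
                fn(*args)
            finally:
                self._slots.release()      # free the slot on completion
        t = threading.Thread(target=run)
        t.start()
        return t


results = []

def job(i):
    time.sleep(0.1)
    results.append(i)

# futures: submit() never blocks -- any number of jobs just queue up
with ThreadPoolExecutor(max_workers=1) as ex:
    start = time.monotonic()
    for i in range(3):
        ex.submit(job, i)
    submitted_in = time.monotonic() - start   # near zero

# blocking pool of size 1: the 2nd and 3rd spawn wait for a free slot
pool = BlockingPool(1)
start = time.monotonic()
threads = [pool.spawn(job, i) for i in range(3)]
spawned_in = time.monotonic() - start         # roughly two job durations
for t in threads:
    t.join()
```

The blocking behaviour is exactly the back-pressure the conductor wants: a flood of RPC requests can't queue up an unbounded number of jobs behind a small worker pool.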
max_lobur | then I'll rework my patch to futures | 22:35 |
max_lobur | gah | 22:35 |
max_lobur | to greenpool :) | 22:35 |
devananda | :) | 22:35 |
max_lobur | what about sync/async parts of the job? do you think it's better to control that at manager level (e.g. call sync part and then call async within greenthread) | 22:36 |
max_lobur | or share worker_pool and control that from utils | 22:37 |
devananda | max_lobur: what sync parts are you referring to? | 22:37 |
max_lobur | or define worker_pool inside thread util that I created | 22:37 |
devananda | my view is that job should be run async, what ever the "job" is defined as | 22:38 |
devananda | no -- worker pool must be a singleton // class property | 22:38 |
devananda | since the conductor.ConductorManager class will only have one instance created, a class property there is effectively a singleton | 22:38 |
max_lobur | module level for thread util will serve for that too | 22:39 |
max_lobur | module imported only once | 22:39 |
max_lobur | https://review.openstack.org/#/c/69135/1/ironic/conductor/utils.py | 22:39 |
max_lobur | _validate_node_power_action(task, node, state): | 22:39 |
max_lobur | this is what I'd like to do sync | 22:39 |
max_lobur | _perform_node_power_action(task, node, state): - this one should be async | 22:39 |
devananda | ah, so | 22:42 |
devananda | i'd like to get rid of all the validate calls as separate RPC things | 22:42 |
devananda | even validate_vendor_passthru | 22:42 |
devananda | the only reason those existed was we had to make call+cast until now | 22:42 |
devananda | also, even internally, i dont think we need them as separate steps | 22:42 |
devananda | - take lock | 22:43 |
max_lobur | yep, I understand | 22:43 |
devananda | -- start thread (to do work) | 22:43 |
devananda | -- return | 22:43 |
devananda | --- thread does work | 22:43 |
devananda | --- thread updates node.last_error if it fails | 22:43 |
devananda | --- thread returns to pool when complete | 22:43 |
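The flow devananda lists above might be sketched like this (`FakeNode`/`NodeTask` are hypothetical stand-ins, not Ironic's real classes):

```python
import threading


class FakeNode:
    """Hypothetical stand-in for an Ironic node record."""
    def __init__(self):
        self.power_state = "off"
        self.last_error = None


class NodeTask:
    """Hypothetical stand-in for the TaskManager lock on a node."""
    def __init__(self, node):
        self.node = node
        self.locked = True

    def release(self):
        self.locked = False


def change_power_state(task, target):
    """Start the worker and return immediately -- the RPC caller
    only learns whether the job *started*, not whether it succeeded."""
    def worker():
        try:
            if target not in ("on", "off"):
                raise ValueError("invalid power state %r" % target)
            task.node.power_state = target       # the real work
        except Exception as exc:
            task.node.last_error = str(exc)      # failure is recorded,
        finally:                                 # not raised to caller
            task.release()                       # lock freed either way
    t = threading.Thread(target=worker)
    t.start()
    return t


node = FakeNode()
change_power_state(NodeTask(node), "on").join()
print(node.power_state, node.last_error)   # on None

bad = FakeNode()
task = NodeTask(bad)
change_power_state(task, "reboot").join()
print(bad.last_error, task.locked)         # error recorded, lock freed
```

The user then polls GET /v1/nodes/XXX and reads `power_state` or `last_error` to see what happened.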
max_lobur | hmm | 22:44 |
max_lobur | so | 22:44 |
max_lobur | task.driver.power.validate(node) | 22:44 |
max_lobur | will be async too | 22:45 |
max_lobur | the user should have to poll to know it's result | 22:45 |
max_lobur | my thought was to perform all quick validation in a sync way | 22:46 |
max_lobur | also there are a bunch of validations directly in the api code | 22:46 |
max_lobur | https://review.openstack.org/#/c/69135/1/ironic/api/controllers/v1/node.py see my TODOs | 22:47 |
max_lobur | I'd like to move them to conductor, so we can have all our rules defined in one place | 22:47 |
devananda | hmm | 22:48 |
max_lobur | we may not need to query the DB from the API at all - no need to get the rpc node | 22:48 |
devananda | iterating through the driver.interface.validate() method calls | 22:48 |
max_lobur | we just passing node uuid and desired state to conductor and it does all the job | 22:49 |
devananda | could take a bit, but otherwise how do we get the result correctly back to the user (without a separate request tracking service) | 22:49 |
devananda | i mean for GET /v1/nodes/XXXX/validate | 22:49 |
devananda | that has to be sync | 22:49 |
max_lobur | well, I meant another | 22:49 |
max_lobur | GET /v1/nodes/XXXX/validate will be sync | 22:50 |
max_lobur | because it's a separate request, we're already doing RPC "call" there right? | 22:50 |
* NobodyCam takes a few minutes to look for some food stuffs | 22:51 | |
max_lobur | I was talking about task.driver.power.validate(node) which is called within "change_node_power_state" | 22:51 |
max_lobur | I thought we will make it sync too | 22:51 |
max_lobur | https://github.com/openstack/ironic/blob/master/ironic/api/controllers/v1/node.py#L531-L537 | 22:55 |
max_lobur | https://github.com/openstack/ironic/blob/master/ironic/conductor/rpcapi.py#L219 | 22:55 |
max_lobur | GET /v1/nodes/XXXX/validate will always be sync, it's a separate method | 22:56 |
*** romcheg has left #openstack-ironic | 22:57 | |
* max_lobur re-reads discussion | 23:00 | |
max_lobur | RE: but otehrwise how do we get the result correctly back to the user | 23:00 |
max_lobur | true | 23:00 |
max_lobur | it will appear in last error, and in logs | 23:01 |
max_lobur | it may not be his error, right? e.g. how should he know that he issued that last_error | 23:01 |
devananda | mmm, back. was distracted in another channel | 23:02 |
devananda | max_lobur: right, so calling task.driver.power.validate(node) from within change_node_power_state should be part of the async job | 23:03 |
devananda | or rather | 23:03 |
devananda | we shouldn't call it explicitly | 23:03 |
devananda | it's the driver's responsibility to fail sanely if it is lacking info when we call driver.power.set_power_state | 23:03 |
max_lobur | so, you're proposing to remove this https://github.com/openstack/ironic/blob/master/ironic/conductor/utils.py#L37 | 23:06 |
max_lobur | right? | 23:06 |
*** anniec has joined #openstack-ironic | 23:06 | |
max_lobur | so the user will have to issue GET /v1/nodes/XXXX/validate to validate separately | 23:07 |
devananda | max_lobur: user may or may not issue validate -- that is up to user | 23:07 |
devananda | max_lobur: if user issues PUT /v1/nodes/xxx/state/power {'on'}, then this will synchronously *start* the job | 23:08 |
max_lobur | agree | 23:08 |
devananda | user will know if job starts or fails to start | 23:08 |
devananda | but no more than that | 23:08 |
max_lobur | k | 23:08 |
max_lobur | got it | 23:08 |
devananda | then user will GET /v1/nodes/XXX | 23:08 |
devananda | to observe the state | 23:08 |
max_lobur | or last_error | 23:09 |
devananda | right | 23:09 |
devananda | now... | 23:09 |
max_lobur | this will simplify my patch | 23:09 |
devananda | this resolves the race that was causing nova.spawn to fail | 23:09 |
devananda | if the user sees that job failed to start (NodeLocked exception) he can wait & retry some # of times | 23:09 |
devananda | and get status, etc, to make sure things are OK to continue | 23:10 |
devananda | it does NOT solve a separate race that we also have today | 23:10 |
devananda | where multiple jobs run SUCCESSFULLY in sequence | 23:10 |
devananda | ^D^D^D | 23:10 |
devananda | where multiple jobs start successfuly in sequence | 23:10 |
devananda | and the status of all N-1 jobs are overwritten by job N | 23:10 |
devananda | this is not a problem for nova, where it should be the only "user" requesting power or deploy changes | 23:11 |
devananda | and there's no way to solve it today | 23:11 |
devananda | we need separate "job" service to track each request's status | 23:11 |
devananda | in Juno :) | 23:11 |
max_lobur | can't get the last | 23:12 |
max_lobur | RE: where multiple jobs start successfuly in sequence | 23:12 |
max_lobur | you mean start and end? right? | 23:12 |
devananda | user A starts job 1 | 23:12 |
devananda | job 1 finishes | 23:12 |
devananda | user B starts job 2 | 23:12 |
devananda | job 2 finishes | 23:12 |
max_lobur | yep, I see | 23:12 |
devananda | user A gets node status to see what happened to job 1 | 23:12 |
max_lobur | we need a separate request modeling in db | 23:13 |
max_lobur | yea | 23:13 |
devananda | right | 23:13 |
max_lobur | I though about this as well | 23:13 |
max_lobur | right | 23:13 |
devananda | that's very much outside the scope for Icehouse | 23:13 |
max_lobur | this is too much for icehouse | 23:13 |
devananda | what we are doing with greenpool solves the important race that is breaking Nova | 23:13 |
max_lobur | k, cool, thanks for your time, I'll rework my patch tomorrow | 23:14 |
devananda | max_lobur: np, thank you! | 23:14 |
devananda | solving problems like this one is fun :) | 23:15 |
max_lobur | :) | 23:15 |
*** mrda_away is now known as mrda | 23:45 | |
*** epim has joined #openstack-ironic | 23:58 |
Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!