*** rcernin has joined #openstack-rally | 00:09 | |
*** r-daneel has quit IRC | 00:18 | |
*** tosky has quit IRC | 00:36 | |
*** maeca has quit IRC | 01:03 | |
*** gus has quit IRC | 01:07 | |
*** gus has joined #openstack-rally | 01:08 | |
*** psuriset has quit IRC | 01:32 | |
*** threestrands has joined #openstack-rally | 02:27 | |
*** yamamoto has joined #openstack-rally | 02:37 | |
*** ilbot3 has quit IRC | 02:45 | |
*** pzchen has quit IRC | 02:54 | |
*** pzchen has joined #openstack-rally | 02:55 | |
*** Liuqing has joined #openstack-rally | 02:56 | |
*** Liuqing has quit IRC | 02:57 | |
*** ilbot3 has joined #openstack-rally | 02:58 | |
*** Liuqing has joined #openstack-rally | 03:06 | |
*** r-daneel has joined #openstack-rally | 03:10 | |
*** r-daneel has quit IRC | 03:15 | |
*** maeca has joined #openstack-rally | 03:16 | |
*** ihrachys has quit IRC | 03:19 | |
*** psuriset has joined #openstack-rally | 03:21 | |
*** Liuqing has quit IRC | 03:40 | |
*** dave-mccowan has quit IRC | 03:48 | |
*** lpetrut_ has joined #openstack-rally | 04:09 | |
*** harlowja has quit IRC | 04:12 | |
*** lpetrut_ has quit IRC | 04:31 | |
*** lpetrut_ has joined #openstack-rally | 04:40 | |
*** harlowja has joined #openstack-rally | 04:41 | |
*** lpetrut__ has joined #openstack-rally | 04:41 | |
*** lpetrut_ has quit IRC | 04:44 | |
*** maeca has quit IRC | 04:49 | |
*** lpetrut__ has quit IRC | 04:55 | |
*** lpetrut__ has joined #openstack-rally | 05:23 | |
*** psuriset has quit IRC | 05:28 | |
*** psuriset has joined #openstack-rally | 05:45 | |
*** threestrands has quit IRC | 05:48 | |
*** lpetrut__ has quit IRC | 05:50 | |
*** harlowja has quit IRC | 06:12 | |
*** psuriset has quit IRC | 06:15 | |
rallydev-bot | [From Gitter] cxhuawei : yes it is. What do we want to achieve next? Clean up resources belonging to one trace-id? | 06:40 |
rallydev-bot | [From Gitter] cxhuawei : @andreykurilin | 06:40 |
rallydev-bot | [From Gitter] cxhuawei : As Boris said, we will provide the capability of cleaning up the resources of one task and cleaning up the resources of all tasks. Am I right? @andreykurilin | 06:43 |
*** lpetrut__ has joined #openstack-rally | 06:51 | |
*** e0ne has joined #openstack-rally | 07:07 | |
*** rcernin has quit IRC | 07:23 | |
*** pcaruana has joined #openstack-rally | 07:47 | |
*** itlinux has joined #openstack-rally | 08:30 | |
*** itlinux has quit IRC | 08:53 | |
*** pcaruana has quit IRC | 09:19 | |
*** pcaruana has joined #openstack-rally | 09:19 | |
*** ushkalim has joined #openstack-rally | 09:35 | |
*** alexchadin has joined #openstack-rally | 10:14 | |
*** tosky has joined #openstack-rally | 10:27 | |
*** pzchen has quit IRC | 10:56 | |
*** aSY0DAD08 has joined #openstack-rally | 11:08 | |
*** rallydev-bot has quit IRC | 11:11 | |
*** juhak has joined #openstack-rally | 11:15 | |
*** psuriset has joined #openstack-rally | 11:43 | |
*** alexchadin has quit IRC | 11:50 | |
*** alexchadin has joined #openstack-rally | 11:51 | |
*** dave-mccowan has joined #openstack-rally | 12:14 | |
*** mvk has quit IRC | 12:22 | |
*** maeca has joined #openstack-rally | 12:59 | |
*** mvk has joined #openstack-rally | 13:04 | |
*** serlex has joined #openstack-rally | 13:08 | |
*** alexchadin has quit IRC | 13:12 | |
*** alexchadin has joined #openstack-rally | 13:21 | |
*** juhak has left #openstack-rally | 13:23 | |
*** yamamoto has quit IRC | 13:55 | |
*** alexchadin has quit IRC | 14:02 | |
*** yamamoto has joined #openstack-rally | 14:17 | |
*** vaidy has quit IRC | 14:37 | |
*** itlinux has joined #openstack-rally | 14:46 | |
*** r-daneel has joined #openstack-rally | 14:47 | |
*** maeca has quit IRC | 14:47 | |
*** itlinux has quit IRC | 14:55 | |
*** itlinux has joined #openstack-rally | 14:59 | |
*** itlinux has quit IRC | 15:00 | |
*** itlinux has joined #openstack-rally | 15:03 | |
*** psuriset has quit IRC | 15:18 | |
*** lpetrut__ has quit IRC | 15:18 | |
*** itlinux has quit IRC | 15:36 | |
*** ihrachys has joined #openstack-rally | 16:03 | |
ihrachys | andreykurilin, hey! | 16:03 |
ihrachys | I have some questions about writing a proper rally scenario, specifically about cleanup contexts | 16:04 |
*** e0ne has quit IRC | 16:22 | |
*** pcaruana has quit IRC | 16:45 | |
andreykurilin | hi ihrachys | 16:57 |
ihrachys | andreykurilin, hey. | 16:58 |
ihrachys | andreykurilin, I have a scenario that pre-creates a network with subnets via a context; then the scenario itself deletes all the subnets in random order. | 16:58 |
ihrachys | I execute 10 of those concurrently but it probably doesn't matter | 16:59 |
ihrachys | I handle NotFound properly since they clash on removing the same subnets | 16:59 |
andreykurilin | ihrachys: feel free to ask any questions and I will try to answer everything. or we can make a call | 16:59 |
ihrachys | so far so good | 16:59 |
andreykurilin | ok | 16:59 |
ihrachys | but when context tries to clean up network, it fails because subnets are gone | 16:59 |
ihrachys | I tried wrapping delete_network in try/except NotFound but it doesn't seem to help (I think it's the "atomic" event that fails deeper in the stack) | 17:00 |
ihrachys | so, question is: is it even desirable / possible to delete context resources in scenario bodies? | 17:00 |
andreykurilin | 1) Do you use the standard "networks" context? 2) Can you share the trace? | 17:01 |
ihrachys | yes I use standard context | 17:01 |
andreykurilin | So actually, no one has shared such a use case before, lol. but anyway, a failure in the cleanup method of a context doesn't affect the task results. | 17:03 |
ihrachys | I think it does | 17:03 |
ihrachys | sadly I reworked the scenario a bit, moving it into the rally tree, and I don't have an immediate way to run the code just now, and no trace. | 17:04 |
ihrachys | but afaik there was no trace | 17:04 |
ihrachys | it was just rally logging not found when cleanup triggers | 17:04 |
ihrachys | but in the report, there was a delete_subnet x10 event with a success rate of 0% | 17:04 |
ihrachys | it seems task doesn't carry logs, just results | 17:05 |
andreykurilin | ok. It looks like I understood what had happened | 17:06 |
ihrachys | but here is the report: https://pastebin.com/ZYP1h2ek | 17:06 |
andreykurilin | ok ok | 17:06 |
andreykurilin | give me a sec for explanation) | 17:06 |
andreykurilin | Do you have something like http://xsnippet.org/363718/ ? | 17:07 |
andreykurilin | in your scenario | 17:08 |
ihrachys | yep | 17:08 |
ihrachys | I should prolly post the patch to have smth to peek at | 17:08 |
openstackgerrit | Ihar Hrachyshka proposed openstack/rally master: Neutron scenario to delete subnets for the same network concurrently https://review.openstack.org/540041 | 17:09 |
ihrachys | andreykurilin, ^ | 17:09 |
ihrachys | scenario here: https://review.openstack.org/#/c/540041/1/rally/plugins/openstack/scenarios/neutron/network.py | 17:09 |
*** mvk has quit IRC | 17:09 | |
*** harlowja has joined #openstack-rally | 17:09 | |
andreykurilin | ok, here are several mistakes | 17:11 |
*** lpetrut has joined #openstack-rally | 17:11 | |
andreykurilin | first of all, let me explain how rally calculates the timings. There is a special decorator which wraps methods like _create_subnet, _delete_subnet and other actions. It simply saves the started_at and finished_at timings, and it also marks the action as failed if an error happened. It doesn't matter whether you catch the exception in the parent method or not: if the action failed, it is marked as failed, and the result cannot be masked | 17:14 |
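A minimal sketch of the kind of timing decorator described above, assuming a scenario object that collects its action records in a list; this is illustrative only, not Rally's actual implementation:

```python
# Illustrative sketch of an "atomic action" timing decorator (not Rally's
# real code). It records started_at/finished_at for every wrapped action
# and marks the action as failed if it raises, even when a caller later
# catches the exception.
import functools
import time


def action_timer(name):
    def decorator(func):
        @functools.wraps(func)
        def wrapper(self, *args, **kwargs):
            record = {"name": name, "started_at": time.time(), "failed": False}
            try:
                return func(self, *args, **kwargs)
            except Exception:
                record["failed"] = True  # recorded regardless of any outer try/except
                raise
            finally:
                record["finished_at"] = time.time()
                self._atomic_actions.append(record)
        return wrapper
    return decorator


class ScenarioSketch:
    """Hypothetical scenario base collecting atomic action records."""

    def __init__(self):
        self._atomic_actions = []

    @action_timer("neutron.delete_subnet")
    def _delete_subnet(self, neutron_client, subnet_id):
        neutron_client.delete_subnet(subnet_id)
```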
*** serlex has quit IRC | 17:15 | |
andreykurilin | so possibly, for your case, a custom _delete_subnet should be used. But let's return to the idea of your testing before doing custom things :) | 17:16 |
andreykurilin | ihrachys: ^ | 17:16 |
andreykurilin | Do you want to test just deleting subnets in a concurrent way? Or do you want to delete the SIMILAR subnets concurrently? | 17:17 |
ihrachys | what do you mean similar? | 17:17 |
andreykurilin | In the current code, each iteration (thread) iterates over the same list of networks/subnets and tries to remove them | 17:18 |
ihrachys | the goal is to have delete requests for subnets that belong to the same network going in parallel | 17:19 |
ihrachys | because then they can clash deallocating the same addresses in ipam layer | 17:20 |
ihrachys | current scenarios create a network per thread | 17:20 |
ihrachys | which doesn't trigger the issue I try to trigger | 17:20 |
ihrachys | andreykurilin, ^ | 17:26 |
ihrachys | that's why I needed network context, to reuse same network. | 17:27 |
andreykurilin | ihrachys: sorry, need to check something for my boss. will return to your question soon | 17:27 |
ihrachys | ok | 17:28 |
*** maeca has joined #openstack-rally | 17:37 | |
*** e0ne has joined #openstack-rally | 18:14 | |
*** yamamoto has quit IRC | 18:17 | |
*** openstackgerrit has quit IRC | 18:18 | |
*** harlowja has quit IRC | 18:29 | |
*** psuriset has joined #openstack-rally | 18:38 | |
*** tosky has quit IRC | 18:38 | |
*** e0ne has quit IRC | 18:49 | |
*** lpetrut has quit IRC | 19:02 | |
*** tosky has joined #openstack-rally | 19:04 | |
*** harlowja has joined #openstack-rally | 19:09 | |
*** harlowja_ has joined #openstack-rally | 19:11 | |
*** harlowja has quit IRC | 19:14 | |
*** yamamoto has joined #openstack-rally | 19:18 | |
*** yamamoto has quit IRC | 19:29 | |
*** maeca has quit IRC | 19:57 | |
*** maeca has joined #openstack-rally | 20:50 | |
*** maeca has quit IRC | 20:50 | |
*** maeca has joined #openstack-rally | 21:00 | |
*** Leo_m has joined #openstack-rally | 21:06 | |
Leo_m | hi I'm new to rally and wondering if concurrency is limited to the users or not at all | 21:07 |
andreykurilin | Leo_m: hi! it is not limited at all. one user can be used in several iterations | 21:29 |
andreykurilin | ihrachys: What time zone are you in? | 21:30 |
ihrachys | California | 21:30 |
andreykurilin | got it | 21:31 |
andreykurilin | ihrachys: returning to your case. Some explanation about how rally works: there is a users context which is the default for all openstack scenarios. It creates temporary users and tenants. Each iteration of a workload picks a single user from the pool (https://github.com/openstack/rally/blob/master/rally/plugins/openstack/scenario.py#L67-L89). This user is accessible via self.clients in the `run` method of the scenario. | 21:37 |
andreykurilin | If the concurrency is bigger than the number of users, the same user will be used in several threads. | 21:37 |
andreykurilin | Leo_m: this is a detailed answer to your question as well ^ | 21:38 |
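As a rough illustration of that self.clients flow, a hypothetical scenario skeleton (the class and scenario names are made up; only the self.clients("neutron") call pattern mirrors the existing Neutron scenarios, and the registration decorator and task config are omitted):

```python
# Hypothetical scenario skeleton: self.clients("<service>") returns a client
# authenticated as the user the users context assigned to this iteration.
from rally.plugins.openstack import scenario


class DeleteSubnetsSketch(scenario.OpenStackScenario):

    def run(self):
        neutron = self.clients("neutron")  # per-iteration user credentials
        for subnet in neutron.list_subnets()["subnets"]:
            neutron.delete_subnet(subnet["id"])
```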
Leo_m | ok cool | 21:39 |
Leo_m | so each iteration will pick a single user and use multithreading based on concurrency? or can an iteration pick multiple tenants/users? | 21:39 |
andreykurilin | ihrachys: By configuring the users context like `tenants: 3, users: 100`, you can guarantee that, with concurrency > 3, several parallel requests against a single tenant/network will be performed | 21:40 |
andreykurilin | I mean that each iteration should not try to delete everything | 21:40 |
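In task-file terms, that users context would look roughly like this (the per-tenant user count key is `users_per_tenant` in current Rally; treat the exact key names as something to verify against the plugin reference):

```json
{
    "context": {
        "users": {
            "tenants": 3,
            "users_per_tenant": 100
        }
    }
}
```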
*** maeca has quit IRC | 21:40 | |
andreykurilin | Leo_m: if the scenario is designed to perform simple actions, each iteration will pick a single user. But if you want, each iteration has access to all users, and you can design your plugin to check the accessibility/visibility of a resource created by user A in user B's space | 21:42 |
ihrachys | andreykurilin, if it shouldn't try to delete everything, what should it delete then? | 21:43 |
Leo_m | ok cool | 21:43 |
Leo_m | thx! | 21:43 |
Leo_m | andreykurilin: so if I have times 100 and concurrency 5 in the boot server example, this means that I will have 100 iterations and in each iteration 5 servers will boot in parallel? | 21:47 |
andreykurilin | ihrachys: just remove one subnet per iteration. something like - http://xsnippet.org/363719/ | 21:50 |
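The xsnippet link is no longer reachable; a hedged reconstruction of the idea (each iteration deletes only the one pre-created subnet matching its own iteration number; the layout of the network-context data and the `_delete_subnet` argument shape are assumptions):

```python
# Reconstruction of the idea only -- the original snippet is gone, and the
# exact structure of the network context data is assumed here.
def run(self):
    net = self.context["tenant"]["networks"][0]
    # self.context["iteration"] is 1-based; each iteration removes a
    # different pre-created subnet, so parallel iterations do not collide.
    subnet_id = net["subnets"][self.context["iteration"] - 1]
    self._delete_subnet({"subnet": {"id": subnet_id}})
```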
ihrachys | andreykurilin, ok I was missing this "iteration" thingy | 21:50 |
ihrachys | andreykurilin, that won't help me with the issue of cleanup failing because of missing subnet though | 21:51 |
ihrachys | I could probably instead leave network empty, then create / delete in each run() | 21:51 |
andreykurilin | ihrachys: so if the number of networks per tenant is 1 and not too many tenants are created by the users context, several parallel iterations will try to remove resources from the same network | 21:51 |
ihrachys | but that dilutes the scenario a bit since I want to parallelize deletes only | 21:52 |
ihrachys | andreykurilin, ok but what about cleanup | 21:52 |
andreykurilin | ihrachys: just ignore the errors from the context cleanup :) | 21:53 |
andreykurilin | https://github.com/openstack/rally/blob/master/rally/plugins/openstack/context/network/networks.py#L107-L109 | 21:53 |
andreykurilin | https://github.com/openstack/rally/blob/master/rally/common/logging.py#L152-L161 | 21:53 |
andreykurilin | 1) it doesn't affect the results, it is just logged | 21:54 |
andreykurilin | 2) the failures in your results relate to errors in the iterations (several iterations try to remove the same subnet, and catching the exception doesn't help) | 21:55 |
*** maeca has joined #openstack-rally | 21:55 | |
ihrachys | andreykurilin, oh you mean that regardless of whether I catch it, it's still registered as a failure by that atomic thingy? | 21:55 |
andreykurilin | yes | 21:56 |
ihrachys | ok makes sense | 21:56 |
ihrachys | thanks! "iteration" is helpful. | 21:56 |
ihrachys | I couldn't find it in docs | 21:56 |
ihrachys | I guess I should have just dir()ed the context object... | 21:56 |
andreykurilin | "networks" context is quite old and cleanup is done in a bad way there. It definitely should be rewritten (I hope that I'll find a volonteer) to not iterate over a list of created resources in setup method, but list real existing resources and filter them. Otherwise, it is not just blocker for your case, but it is blocker for "disaster cleanup" feature (kill rally process at the middle of execution and then try to remove all resources | 21:58 |
andreykurilin | after task) which is almost ready | 21:58 |
andreykurilin | docs... our plan is to release Rally 0.11.0 (this month) and switch to updating them. | 21:59 |
*** threestrands has joined #openstack-rally | 21:59 | |
*** threestrands has quit IRC | 22:00 | |
*** threestrands has joined #openstack-rally | 22:00 | |
andreykurilin | Leo_m: times means the "total number of iterations". if you set times to 100 and concurrency to 5, it doesn't mean 500 created servers, it is still only 100 iterations | 22:00 |
Leo_m | ohhh got it! 5 concurrency means that 5 iterations can take place in parallel | 22:02 |
andreykurilin | yup | 22:02 |
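Expressed as a task-file runner section, that maps to:

```json
{
    "runner": {
        "type": "constant",
        "times": 100,
        "concurrency": 5
    }
}
```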
Leo_m | btw have "start_cidr": "192.168.74.0/24" | 22:03 |
Leo_m | and see a subnet with 192.168.75.0/24 | 22:03 |
andreykurilin | lol | 22:03 |
andreykurilin | it is ok actually, as far as I remember | 22:03 |
Leo_m | I do see the 192.168.74.0/24 subnet too | 22:03 |
Leo_m | guessing that more subnets can be created if the IPs are still reserved/used | 22:04 |
Leo_m | btw, the run still created a project and users even though I put existing users in the config | 22:05 |
Leo_m | guessing I'm missing something | 22:05 |
*** mvk has joined #openstack-rally | 22:07 | |
Leo_m | "type": "ExistingCloud", in the config was throwing an error | 22:07 |
Leo_m | { | 22:08 |
Leo_m | "openstack": { “type”: “ExistingCloud”, | 22:08 |
Leo_m | “auth_url”: ……… | 22:08 |
andreykurilin | ihrachys: several more things that you should know. in my code snippet, I use self.context["iteration"] as an identifier of which resource should be removed (to avoid collisions where two iterations remove the same resource). In your case, the iteration number is a bad id; a better id is the index of the current user within its tenant. | 22:08 |
andreykurilin | it should sound simpler with an example. | 22:08 |
andreykurilin | There are 2 tenants (A and B) with one user in each. The first iteration will be launched with user A1, the second iteration will use B1. Despite the fact that the iteration number is 2, the second iteration is executed in a tenant which was not used before, so it should pick the first subnet in the list to remove. | 22:09 |
andreykurilin | You need to calculate the proper number something like `number = self.context["tenant"]["users"].index(self.context["user"])` | 22:09 |
andreykurilin | and specify the `"user_choice_method": "round_robin"` field for the users context. Then each iteration will pick not a random user, but users in a strict order | 22:11 |
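Putting those two hints together, a hedged sketch (the context-data layout and `_delete_subnet` argument shape are the same assumptions as in the reconstruction above):

```python
# Sketch only: with "user_choice_method": "round_robin" in the users context,
# users are assigned to iterations in a strict order, so the position of the
# current user inside its tenant acts as a stable per-tenant counter for
# picking which pre-created subnet to delete.
def run(self):
    users_in_tenant = self.context["tenant"]["users"]
    number = users_in_tenant.index(self.context["user"])
    net = self.context["tenant"]["networks"][0]
    self._delete_subnet({"subnet": {"id": net["subnets"][number]}})
```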
andreykurilin | Leo_m: `btw, the run still created a project and users even though I put existing users in the config`. if you have the "users" context in your task, rally will try to create new users, since that is what this context is designed for | 22:13 |
ihrachys | andreykurilin, wow. rally seems quite elaborate :) | 22:14 |
ihrachys | great, I have something to chew on now | 22:14 |
andreykurilin | Leo_m: about deployment config. Is it old config? migrated? | 22:14 |
ihrachys | andreykurilin, isn't it quite late for you? | 22:14 |
andreykurilin | ihrachys: unfortunately yes, but I'm glad to help anytime when I'm not sleeping, lol | 22:16 |
Leo_m | andreykurilin: not sure about the config, I know that removing the type from it worked. | 22:16 |
Leo_m | I do have in the tasks the users in the context, guessing I need to remove that | 22:17 |
Leo_m | so that no new users are created and the ones in the config are used | 22:17 |
andreykurilin | Leo_m: Is it a new configuration, or did you have an existing rally installation where, after switching to a new version and running a db migration, the existing deployment became invalid? | 22:17 |
Leo_m | new one | 22:18 |
andreykurilin | ok, so we removed the type field because it accepted just one value | 22:18 |
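For reference, a minimal existing-cloud deployment config in the newer typeless format looks roughly like this (all values are placeholders, and the exact schema should be checked against the Rally docs for your version):

```json
{
    "openstack": {
        "auth_url": "http://keystone.example.net:5000/v3",
        "region_name": "RegionOne",
        "admin": {
            "username": "admin",
            "password": "secret",
            "project_name": "admin",
            "user_domain_name": "Default",
            "project_domain_name": "Default"
        }
    }
}
```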
andreykurilin | lol | 22:18 |
Leo_m | lol ok | 22:19 |
*** dave-mccowan has quit IRC | 22:30 | |
*** maeca has quit IRC | 23:18 | |
*** openstackgerrit has joined #openstack-rally | 23:22 | |
openstackgerrit | Merged openstack/rally master: Endpoint_type of private is not valid https://review.openstack.org/539280 | 23:22 |
*** r-daneel has quit IRC | 23:35 | |
*** Leo_m has quit IRC | 23:35 |