Thursday, 2018-02-01

00:09 *** rcernin has joined #openstack-rally
00:18 *** r-daneel has quit IRC
00:36 *** tosky has quit IRC
01:03 *** maeca has quit IRC
01:07 *** gus has quit IRC
01:08 *** gus has joined #openstack-rally
01:32 *** psuriset has quit IRC
02:27 *** threestrands has joined #openstack-rally
02:37 *** yamamoto has joined #openstack-rally
02:45 *** ilbot3 has quit IRC
02:54 *** pzchen has quit IRC
02:55 *** pzchen has joined #openstack-rally
02:56 *** Liuqing has joined #openstack-rally
02:57 *** Liuqing has quit IRC
02:58 *** ilbot3 has joined #openstack-rally
03:06 *** Liuqing has joined #openstack-rally
03:10 *** r-daneel has joined #openstack-rally
03:15 *** r-daneel has quit IRC
03:16 *** maeca has joined #openstack-rally
03:19 *** ihrachys has quit IRC
03:21 *** psuriset has joined #openstack-rally
03:40 *** Liuqing has quit IRC
03:48 *** dave-mccowan has quit IRC
04:09 *** lpetrut_ has joined #openstack-rally
04:12 *** harlowja has quit IRC
04:31 *** lpetrut_ has quit IRC
04:40 *** lpetrut_ has joined #openstack-rally
04:41 *** harlowja has joined #openstack-rally
04:41 *** lpetrut__ has joined #openstack-rally
04:44 *** lpetrut_ has quit IRC
04:49 *** maeca has quit IRC
04:55 *** lpetrut__ has quit IRC
05:23 *** lpetrut__ has joined #openstack-rally
05:28 *** psuriset has quit IRC
05:45 *** psuriset has joined #openstack-rally
05:48 *** threestrands has quit IRC
05:50 *** lpetrut__ has quit IRC
06:12 *** harlowja has quit IRC
06:15 *** psuriset has quit IRC
06:40 <rallydev-bot> [From Gitter] cxhuawei: yes it is. What do we want to achieve next? Cleaning up the resources belonging to one trace-id?
06:40 <rallydev-bot> [From Gitter] cxhuawei: @andreykurilin
06:43 <rallydev-bot> [From Gitter] cxhuawei: As Boris said, we will provide the capability of cleaning the resources of one task and cleaning the resources of all tasks. Am I right? @andreykurilin
06:51 *** lpetrut__ has joined #openstack-rally
07:07 *** e0ne has joined #openstack-rally
07:23 *** rcernin has quit IRC
07:47 *** pcaruana has joined #openstack-rally
08:30 *** itlinux has joined #openstack-rally
08:53 *** itlinux has quit IRC
09:19 *** pcaruana has quit IRC
09:19 *** pcaruana has joined #openstack-rally
09:35 *** ushkalim has joined #openstack-rally
10:14 *** alexchadin has joined #openstack-rally
10:27 *** tosky has joined #openstack-rally
10:56 *** pzchen has quit IRC
11:08 *** aSY0DAD08 has joined #openstack-rally
11:11 *** rallydev-bot has quit IRC
11:15 *** juhak has joined #openstack-rally
11:43 *** psuriset has joined #openstack-rally
11:50 *** alexchadin has quit IRC
11:51 *** alexchadin has joined #openstack-rally
12:14 *** dave-mccowan has joined #openstack-rally
12:22 *** mvk has quit IRC
12:59 *** maeca has joined #openstack-rally
13:04 *** mvk has joined #openstack-rally
13:08 *** serlex has joined #openstack-rally
13:12 *** alexchadin has quit IRC
13:21 *** alexchadin has joined #openstack-rally
13:23 *** juhak has left #openstack-rally
13:55 *** yamamoto has quit IRC
14:02 *** alexchadin has quit IRC
14:17 *** yamamoto has joined #openstack-rally
14:37 *** vaidy has quit IRC
14:46 *** itlinux has joined #openstack-rally
14:47 *** r-daneel has joined #openstack-rally
14:47 *** maeca has quit IRC
14:55 *** itlinux has quit IRC
14:59 *** itlinux has joined #openstack-rally
15:00 *** itlinux has quit IRC
15:03 *** itlinux has joined #openstack-rally
15:18 *** psuriset has quit IRC
15:18 *** lpetrut__ has quit IRC
15:36 *** itlinux has quit IRC
16:03 *** ihrachys has joined #openstack-rally
16:03 <ihrachys> andreykurilin, hey!
16:04 <ihrachys> I have some questions about writing a proper rally scenario, specifically about cleanup contexts
16:22 *** e0ne has quit IRC
16:45 *** pcaruana has quit IRC
16:57 <andreykurilin> hi ihrachys
16:58 <ihrachys> andreykurilin, hey.
16:58 <ihrachys> andreykurilin, I have a scenario that pre-creates a network with subnets via a context; then the scenario itself deletes all subnets in random order.
16:59 <ihrachys> I execute 10 of those concurrently, but that probably doesn't matter
16:59 <ihrachys> I handle NotFound properly, since they clash on removing the same subnets
16:59 <andreykurilin> ihrachys: feel free to ask any questions and I will try to answer everything. Or we can make a call
16:59 <ihrachys> so far so good
16:59 <andreykurilin> ok
16:59 <ihrachys> but when the context tries to clean up the network, it fails because the subnets are gone
17:00 <ihrachys> I tried try/except NotFound in delete_network, but it doesn't seem to help (I think it's the "atomic" event that fails deeper in the stack)
17:00 <ihrachys> so, the question is: is it even desirable / possible to delete context resources in scenario bodies?
17:01 <andreykurilin> 1) Do you use the standard "networks" context? 2) Can you share a trace?
17:01 <ihrachys> yes, I use the standard context
17:03 <andreykurilin> Actually, no one has shared such a use case before, lol. But anyway, a failure in the cleanup method of a context doesn't affect task results.
17:03 <ihrachys> I think it does
17:04 <ihrachys> sadly, I reworked the scenario a bit, moving it into the rally tree, and I don't have an immediate way to run the code just now, and no trace.
17:04 <ihrachys> but afaik there was no trace
17:04 <ihrachys> it was just rally logging NotFound when cleanup triggers
17:04 <ihrachys> but in the report, there was a delete_subnet x10 event with a success rate of 0%
17:05 <ihrachys> it seems the task doesn't carry logs, just results
17:06 <andreykurilin> ok, it looks like I understood what happened
17:06 <ihrachys> but here is the report: https://pastebin.com/ZYP1h2ek
17:06 <andreykurilin> ok ok
17:06 <andreykurilin> give me a sec for an explanation)
17:07 <andreykurilin> Do you have something like http://xsnippet.org/363718/ ?
17:08 <andreykurilin> in your scenario
17:08 <ihrachys> yep
17:08 <ihrachys> I should probably post the patch to have something to peek at
17:09 <openstackgerrit> Ihar Hrachyshka proposed openstack/rally master: Neutron scenario to delete subnets for the same network concurrently  https://review.openstack.org/540041
17:09 <ihrachys> andreykurilin, ^
17:09 <ihrachys> scenario here: https://review.openstack.org/#/c/540041/1/rally/plugins/openstack/scenarios/neutron/network.py
17:09 *** mvk has quit IRC
17:09 *** harlowja has joined #openstack-rally
17:11 <andreykurilin> ok, here are several mistakes
17:11 *** lpetrut has joined #openstack-rally
17:14 <andreykurilin> First of all, let me explain how rally calculates the timings. There is a special decorator which wraps methods like _create_subnet, _delete_subnet and other actions. It simply saves started_at and finished_at timings. It also marks the action as failed if an error happened. It doesn't matter whether you catch the exception in the parent method or not: if the action failed, it is marked as failed, and the results are not masked
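(What follows is a minimal sketch of such a decorator, not rally's actual code; rally's real implementation lives in rally.task.atomic, and the _atomic_actions attribute below is an illustrative stand-in for its internal bookkeeping.)

    import functools
    import time

    def atomic_action(name):
        """Illustrative: record an action's timing and failure state."""
        def decorator(func):
            @functools.wraps(func)
            def wrapper(self, *args, **kwargs):
                started_at = time.time()
                failed = False
                try:
                    return func(self, *args, **kwargs)
                except Exception:
                    # The action is recorded as failed here, even if a
                    # caller higher up the stack catches this exception.
                    failed = True
                    raise
                finally:
                    self._atomic_actions.append({
                        "name": name,
                        "started_at": started_at,
                        "finished_at": time.time(),
                        "failed": failed,
                    })
            return wrapper
        return decorator

This is why catching NotFound in the parent method does not change the report: the wrapper has already recorded the failure by the time the exception reaches the caller.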
17:15 *** serlex has quit IRC
17:16 <andreykurilin> so possibly, for your case, a custom _delete_subnet should be used. But let's return to the idea of your testing before doing custom things :)
17:16 <andreykurilin> ihrachys: ^
17:17 <andreykurilin> Do you want to test just deleting subnets concurrently? Or do you want to delete the SAME subnets concurrently?
17:17 <ihrachys> what do you mean, the same?
17:18 <andreykurilin> In the current code, each iteration (thread) iterates over the same list of networks/subnets and tries to remove them
17:19 <ihrachys> the goal is to have delete requests for subnets that belong to the same network going in parallel
17:20 <ihrachys> because then they can clash deallocating the same addresses in the ipam layer
17:20 <ihrachys> the current scenarios create a network per thread
17:20 <ihrachys> which doesn't trigger the issue I'm trying to trigger
17:26 <ihrachys> andreykurilin, ^
17:27 <ihrachys> that's why I needed the network context, to reuse the same network.
17:27 <andreykurilin> ihrachys: sorry, need to check something for my boss. Will return to your question soon
17:28 <ihrachys> ok
17:37 *** maeca has joined #openstack-rally
18:14 *** e0ne has joined #openstack-rally
18:17 *** yamamoto has quit IRC
18:18 *** openstackgerrit has quit IRC
18:29 *** harlowja has quit IRC
18:38 *** psuriset has joined #openstack-rally
18:38 *** tosky has quit IRC
18:49 *** e0ne has quit IRC
19:02 *** lpetrut has quit IRC
19:04 *** tosky has joined #openstack-rally
19:09 *** harlowja has joined #openstack-rally
19:11 *** harlowja_ has joined #openstack-rally
19:14 *** harlowja has quit IRC
19:18 *** yamamoto has joined #openstack-rally
19:29 *** yamamoto has quit IRC
19:57 *** maeca has quit IRC
20:50 *** maeca has joined #openstack-rally
20:50 *** maeca has quit IRC
21:00 *** maeca has joined #openstack-rally
21:06 *** Leo_m has joined #openstack-rally
21:07 <Leo_m> hi, I'm new to rally and wondering if concurrency is limited by the number of users or not at all
21:29 <andreykurilin> Leo_m: hi! It is not limited at all; one user can be used in several iterations
21:30 <andreykurilin> ihrachys: What time zone are you in?
21:30 <ihrachys> California
21:31 <andreykurilin> got it
21:37 <andreykurilin> ihrachys: returning to your case. Some explanation about how rally works: there is a users context which is default for all openstack scenarios. It creates temporary users and tenants. Each iteration of a workload picks a single user from the pool (https://github.com/openstack/rally/blob/master/rally/plugins/openstack/scenario.py#L67-L89). This user is accessible via self.clients in the `run` method of the scenario.
21:37 <andreykurilin> If the concurrency is bigger than the number of users, the same user will be used in several threads.
21:38 <andreykurilin> Leo_m: this is a detailed answer to your question as well ^
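(A bare-bones sketch of what such a scenario plugin looks like; the plugin name is made up for illustration, but self.clients is the accessor andreykurilin describes above.)

    from rally.plugins.openstack import scenario

    @scenario.configure(name="ExamplePlugin.list_networks")
    class ListNetworks(scenario.OpenStackScenario):

        def run(self):
            # self.clients(...) returns OpenStack clients authenticated
            # as the user this iteration picked from the users-context
            # pool, so concurrency is never capped by the user count.
            self.clients("neutron").list_networks()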
21:39 <Leo_m> ok cool
21:39 <Leo_m> so each iteration will pick a single user and use multithreading based on concurrency? Or can an iteration pick multiple tenants/users?
21:40 <andreykurilin> ihrachys: By configuring the users context like `tenants: 3, users: 100`, you can guarantee that, with concurrency > 3, several parallel requests to a single tenant/network will be performed
21:40 <andreykurilin> I mean that each iteration should not try to delete everything
21:40 *** maeca has quit IRC
21:42 <andreykurilin> Leo_m: if the scenario is designed to perform simple actions, each iteration will pick a single user. But if you want, each iteration has access to all users, and you can design your plugin to check the accessibility/visibility of a resource created by user A in user B's space
21:43 <ihrachys> andreykurilin, if it shouldn't try to delete everything, what should it delete then?
21:43 <Leo_m> ok cool
21:43 <Leo_m> thx!
21:47 <Leo_m> andreykurilin: so if I have times 100 and concurrency 5 in the boot-server example, this means that I will have 100 iterations and in each iteration 5 servers will boot in parallel?
21:50 <andreykurilin> ihrachys: just remove one subnet per iteration. Something like http://xsnippet.org/363719/
21:50 <ihrachys> andreykurilin, ok, I was missing this "iteration" thingy
21:51 <ihrachys> andreykurilin, that won't help me with the issue of cleanup failing because of a missing subnet though
21:51 <ihrachys> I could probably instead leave the network empty, then create / delete in each run()
21:51 <andreykurilin> ihrachys: so if the number of networks per tenant is 1 and not too many tenants are created by the users context, several parallel iterations will try to remove resources from one network
21:52 <ihrachys> but that dilutes the scenario a bit, since I want to parallelize deletes only
21:52 <ihrachys> andreykurilin, ok, but what about cleanup
21:53 <andreykurilin> ihrachys: just ignore errors of context cleanup :)
21:53 <andreykurilin> https://github.com/openstack/rally/blob/master/rally/plugins/openstack/context/network/networks.py#L107-L109
21:53 <andreykurilin> https://github.com/openstack/rally/blob/master/rally/common/logging.py#L152-L161
21:54 <andreykurilin> 1) it doesn't affect results, just logging
21:55 <andreykurilin> 2) the failures in your results relate to errors in iterations (several iterations try to remove the same subnet, and catching doesn't help)
21:55 *** maeca has joined #openstack-rally
21:55 <ihrachys> andreykurilin, oh, you mean that regardless of whether I catch it, it's still registered as a failure by that atomic thingy?
21:56 <andreykurilin> yes
21:56 <ihrachys> ok, makes sense
21:56 <ihrachys> thanks! "iteration" is helpful.
21:56 <ihrachys> I couldn't find it in the docs
21:56 <ihrachys> I guess I should have just dir()ed the context object...
21:58 <andreykurilin> The "networks" context is quite old and cleanup is done in a bad way there. It definitely should be rewritten (I hope I'll find a volunteer) to not iterate over the list of created resources from the setup method, but to list the real existing resources and filter them. Otherwise, it is not just a blocker for your case; it is a blocker for the "disaster cleanup" feature (kill the rally process in the middle of execution and then try to remove all resources after the task), which is almost ready
21:59 <andreykurilin> docs... our plan is to release Rally 0.11.0 (this month) and then switch to updating them.
21:59 *** threestrands has joined #openstack-rally
22:00 *** threestrands has quit IRC
22:00 *** threestrands has joined #openstack-rally
22:00 <andreykurilin> Leo_m: times means "total number of iterations". If you set up times 100 and concurrency 5, it doesn't mean 500 created servers; it is only 100 iterations
22:02 <Leo_m> ohhh got it! Concurrency 5 means that 5 iterations can take place in parallel
22:02 <andreykurilin> yup
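(In task-file terms, that maps onto a constant runner; a sketch of the relevant fragment, with the scenario and its args left out, that yields 100 iterations total with at most 5 running at any moment:)

    "runner": {
        "type": "constant",
        "times": 100,
        "concurrency": 5
    }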
22:03 <Leo_m> btw, I have "start_cidr": "192.168.74.0/24"
22:03 <Leo_m> and see a subnet with 192.168.75.0/24
22:03 <andreykurilin> lol
22:03 <andreykurilin> it is ok actually, as far as I remember
22:03 <Leo_m> I do see the 192.168.74.0/24 subnet too
22:04 <Leo_m> guessing that more subnets can be created if the IPs are still reserved/used
22:05 <Leo_m> btw, the run still created a project and users even though I had put existing users in the config
22:05 <Leo_m> guessing I'm missing something
22:07 *** mvk has joined #openstack-rally
22:07 <Leo_m> "type": "ExistingCloud" in the config was throwing an error
22:08 <Leo_m> {
22:08 <Leo_m>     "openstack": { "type": "ExistingCloud",
22:08 <Leo_m>                    "auth_url": ………
22:08 <andreykurilin> ihrachys: several more things that you should know. In my code snippet, I use self.context["iteration"] as the identifier of which resource should be removed (to avoid collisions from removing the same resource). In your case, the iteration number is a bad id; a better id is the index of the user within its tenant.
22:08 <andreykurilin> it should sound simpler with an example.
22:09 <andreykurilin> There are 2 tenants (A and B) with a user in each. The first iteration will be launched with user A1; the second iteration will use B1. Even though its iteration number is 2, the second iteration is executed in a tenant which was not used before, so it should pick the first subnet in the list to remove.
22:09 <andreykurilin> You need to calculate the proper number with something like `number = self.context["tenant"]["users"].index(self.context["user"])`
22:11 <andreykurilin> and specify the `"user_choice_method": "round_robin"` field for the users context, so each iteration will pick a user not at random but in strict order
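(Putting those two pieces together, a sketch of what the scenario and its users context might look like. The plugin name is made up, the exact keys under which the network context stores its pre-created subnets are illustrative, and calling the raw neutron client here sidesteps the atomic-action bookkeeping a real plugin would get by going through the _delete_subnet helper.)

    from rally.plugins.openstack import scenario

    @scenario.configure(name="ExamplePlugin.delete_my_subnet")
    class DeleteMySubnet(scenario.OpenStackScenario):

        def run(self):
            # With "user_choice_method": "round_robin", this index is a
            # stable per-tenant counter: the first iteration executed in
            # a tenant gets 0, the next one gets 1, and so on.
            number = self.context["tenant"]["users"].index(
                self.context["user"])
            # Illustrative context layout: assumes the network context
            # left one network with its pre-created subnet ids in the
            # tenant's context; the exact keys may differ.
            network = self.context["tenant"]["networks"][0]
            subnet_id = network["subnets"][number]
            self.clients("neutron").delete_subnet(subnet_id)

and the matching users context, so that the index above is deterministic:

    "context": {
        "users": {
            "tenants": 2,
            "users_per_tenant": 3,
            "user_choice_method": "round_robin"
        }
    }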
22:13 <andreykurilin> Leo_m: `btw, the run still created a project and users even though I had put existing users in the config` - if you have the "users" context in your task, rally will try to create new users, since that is what this context is designed for
22:14 <ihrachys> andreykurilin, wow. rally seems quite elaborate :)
22:14 <ihrachys> great, I have something to chew on now
22:14 <andreykurilin> Leo_m: about the deployment config. Is it an old config? Migrated?
22:14 <ihrachys> andreykurilin, isn't it quite late for you?
22:16 <andreykurilin> ihrachys: unfortunately yes, but I'm glad to help anytime I'm not sleeping, lol
22:16 <Leo_m> andreykurilin: not sure about the config; I know that removing the type from it worked.
22:17 <Leo_m> I do have the users context in the tasks; guessing I need to remove that
22:17 <Leo_m> so that no new users are created and the ones in the config are used
22:17 <andreykurilin> Leo_m: Is it a new configuration, or did you have an existing rally installation where, after switching to a new version and performing a db migration, the existing deployment became invalid?
22:18 <Leo_m> new one
22:18 <andreykurilin> ok, so we removed the type field because it accepted just one value
22:18 <andreykurilin> lol
22:19 <Leo_m> lol ok
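(For reference, a sketch of the newer deployment-config shape without the type field; the endpoint and credentials here are placeholders, not values from the log:)

    {
        "openstack": {
            "auth_url": "http://keystone.example.com:5000/v3",
            "region_name": "RegionOne",
            "admin": {
                "username": "admin",
                "password": "secret",
                "project_name": "admin",
                "user_domain_name": "Default",
                "project_domain_name": "Default"
            }
        }
    }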
22:30 *** dave-mccowan has quit IRC
23:18 *** maeca has quit IRC
23:22 *** openstackgerrit has joined #openstack-rally
23:22 <openstackgerrit> Merged openstack/rally master: Endpoint_type of private is not valid  https://review.openstack.org/539280
23:35 *** r-daneel has quit IRC
23:35 *** Leo_m has quit IRC
