08:01:47 #startmeeting Mistral
08:01:49 Meeting started Wed Jul 24 08:01:47 2019 UTC and is due to finish in 60 minutes. The chair is rakhmerov. Information about MeetBot at http://wiki.debian.org/MeetBot.
08:01:50 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
08:01:52 The meeting name has been set to 'mistral'
08:02:15 I'd like to discuss this https://review.opendev.org/#/c/670211/
08:02:40 folks, eyalb1 is my colleague from Nokia who's now the Vitrage PTL and he's also going to help with Mistral
08:02:43 just so you know
08:02:50 vgvoleg: sure, go ahead
08:03:34 I don't understand why Andras didn't like this :D
08:04:09 let me see
08:04:30 This is just an opportunity to implement actions and to write a wf which could be run as usual or in dry-run mode
08:05:18 This feature could be compared with Ansible's check mode
08:06:42 well, I haven't reviewed the patch itself so it's a bit difficult to discuss it
08:06:45 looking..
08:07:30 So if you want to run the flow in dry-run mode, you should be sure that the actions in this flow support it
08:10:09 hm.. ok
08:10:17 so
08:10:30 I kind of understand his point
08:11:03 because yes, in a real workflow data is critically important for all the transitions etc.
08:11:43 but at the same time I see no harm in having this feature merged
08:11:50 Yes, I understand this
08:12:02 at least we could see if it works in some very simple configuration
08:12:12 its simplest paths
08:12:36 It would be great to adapt std actions for this
08:12:43 so what your patch does is make it possible to run "test" in actions instead of "run"?
08:12:47 is this it?
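The run/test split discussed above can be sketched roughly as follows. This is a minimal illustration of the dry-run idea under discussion, not Mistral's actual API: the `Action` base class here is a local stand-in for a mistral_lib-style interface, and `CreateServerAction` is a hypothetical custom action.

```python
class Action:
    """Stand-in for an action base class with a run()/test() pair."""

    def run(self, context):
        raise NotImplementedError

    def test(self, context):
        raise NotImplementedError


class CreateServerAction(Action):
    """Hypothetical custom action that supports dry-run mode."""

    def __init__(self, name):
        self.name = name

    def run(self, context):
        # A real action would perform a side effect here (e.g. an HTTP call).
        return {"id": "srv-123", "name": self.name, "dry_run": False}

    def test(self, context):
        # Dry-run path: no side effects, just a plausible stub result so
        # the workflow's transitions can still evaluate realistic data.
        return {"id": "stub-id", "name": self.name, "dry_run": True}
```

A dry-run execution would then call `test()` on every action instead of `run()`, which is why every action in the flow must implement it.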
08:13:01 yes
08:14:03 and maybe OpenStack actions
08:14:41 to return stubs in a test run
08:14:58 ok
08:15:02 but first of all, this feature is for custom actions
08:15:08 yeah
08:15:31 so, I'm not against it
08:16:05 the only thing I'm concerned with is that maybe we need to consider a more sophisticated approach to testing workflows
08:16:25 some kind of framework for mocking workflow steps
08:16:58 but I'm not sure.. because it would involve serious time investments
08:17:37 it's basically "How can we make sure that the workflow works correctly in any possible situation"
08:17:48 sort of unit testing for workflows
08:17:52 but that's complex
08:18:20 we've discussed that idea in the past but realized it'd be very difficult to make it usable enough
08:18:23 not overcomplicated
08:18:53 Morning rakhmerov
08:19:02 apetrich: hey Adriano
08:19:06 How was the vacation?
08:19:06 long time no see
08:19:09 how's it going?
08:19:21 my vacation was good ) I had a good rest
08:19:33 and almost forgot about work (which is very good) :)
08:19:40 :)
08:20:33 vgvoleg: so Oleg, let us review your patch first
08:20:51 I've already glanced at it quickly and it looks ok but I still have some questions
08:21:11 vgvoleg: but generally the idea is OK to me
08:22:28 apetrich: btw, I remember you helped investigate a problem with the requirements
08:22:39 aye
08:22:53 if you have a few minutes could you please look at http://logs.openstack.org/90/672390/4/check/requirements-check/00c6495/job-output.txt.gz ?
08:23:20 we're having a hard time seeing why the requirements check keeps failing
08:23:25 in the stable/stein branch
08:23:25 sure, can do right now.. I'm trying a deployment
08:23:30 in master this patch is OK
08:23:39 apetrich: thanks a lot
08:24:34 rakhmerov, can I have a link to the patch?
08:24:47 yes, sure
08:24:57 https://review.opendev.org/#/c/672390
08:25:45 cheers
08:25:56 :)
08:26:34 I have a question: is it possible to "bind" an execution to one engine?
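The "framework for mocking workflow steps" mentioned above could look something like the following sketch: a registry of canned results consulted before really executing a task, so a workflow test can exercise transitions without side effects. All names here are hypothetical; no such framework exists in Mistral per this discussion.

```python
class TaskMockRegistry:
    """Hypothetical helper for unit-testing workflows by mocking steps."""

    def __init__(self):
        self._mocks = {}

    def mock(self, task_name, result):
        # Register a canned result for a named workflow task.
        self._mocks[task_name] = result

    def execute(self, task_name, real_fn, *args):
        # Return the canned result if the task is mocked,
        # otherwise fall through to the real implementation.
        if task_name in self._mocks:
            return self._mocks[task_name]
        return real_fn(*args)
```

A workflow test would mock the risky steps (e.g. `registry.mock("create_vm", {"id": "fake"})`) and then assert on which transitions fire, which is the "unit testing for workflows" idea the meeting found complex to make usable.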
08:28:13 so if I had a small workflow and a lot of engine replicas, it would be great not to have a huge overhead from loading this definition into the cache of every engine
08:28:51 vgvoleg: currently no
08:29:20 well, if the workflow is small then there won't be a huge overhead )
08:29:32 it'll be huge if the workflow is big
08:30:13 I think we've already discussed this with you
08:30:29 it'd be good to see some BP about that
08:30:33 I performed a small svt, the results are tangible
08:30:41 svt?
08:30:56 stress test
08:31:07 then share this with us somehow
08:31:23 a comparison, what you did etc
08:31:53 I remember we talked about the opportunity to have kind of "ad-hoc" workflows
08:32:00 it's not done, but I can share what's done
08:32:09 that is, we create a workflow and immediately run it
08:32:13 ok
08:32:15 oh what an amazing english skills I have
08:32:22 sorry :D
08:32:23 is that the same thing?
08:32:23 so
08:32:34 vgvoleg: no worries, your english is fine )
08:32:39 no, but it's all about one case
08:32:46 so the case is
08:32:47 and will become even better over time ;)
08:32:51 ok
08:33:10 what I've just described I believe is doable
08:33:26 we run the following scenario in parallel: create workflow -> execute workflow -> wait till it's done -> repeat
08:33:35 but that's mostly about workflow definition life-time
08:33:48 not about how we distribute workflow processing
08:33:59 yes, ok
08:34:15 the workflow is small - 8-10 http tasks
08:34:38 ok
08:34:48 I scale the users that perform this scenario
08:34:51 and you want to bind it to one engine
08:34:53 I see
08:35:18 https://docs.google.com/spreadsheets/d/1S26ImGdb3V6TnbkIVyDA85OHGx0Dvbt7o0kX-rWfmDE/edit#gid=0
08:35:37 vgvoleg: the thing is that binding to one engine conflicts with the fundamental idea behind Mistral
08:35:48 that every workflow step must be stored in the DB
08:36:05 and the current results (it's not done yet) show that scaling the engine makes performance worse
08:36:06 so that any other processing unit
(engine) could continue with the processing
08:36:21 vgvoleg: of course, it makes it worse
08:36:27 but it makes it more reliable
08:36:43 which is a critically important property of any workflow engine
08:36:51 it's all about scaling
08:36:55 the value in the google doc describes how many flows could succeed in one minute
08:37:31 I sent a permission request
08:37:46 vgvoleg: that's all understandable, of course :)
08:37:48 done
08:38:11 If I just write a program in Python it'll be even faster
08:38:25 and we won't need any YAML, YAQL and scaling )
08:38:54 :D
08:38:57 yes
08:39:00 but we'll lose the most important property of the system: reliability and scalability
08:39:32 what do you mean by "ad-hoc workflow"?
08:39:35 if we talk about just routing messages related to some workflow to one specific engine then it's doable I think, yes
08:39:44 but what if the engine crashes?
08:40:06 yes, I understand you
08:40:50 by "ad-hoc" workflow I mean: we do something like "mistral execution-create --definition my_wf.yaml" and Mistral would create an execution based on my_wf.yaml
08:40:58 the main challenge is to adapt Mistral to work with one-off workflows as well as with others
08:41:19 yes, that's it
08:41:19 but without creating an object in the DB that we could access with "mistral workflow-get"
08:41:28 yep
08:41:37 vgvoleg: that feature is totally OK to me
08:41:53 I see no issues and I think it's not so difficult to implement
08:42:35 ok, I'll try and maybe there will be something to discuss at the next meeting :)
08:42:50 but as far as binding a workflow to one engine goes, well, I have serious doubts. I'll look at this Google Doc though
08:42:55 yes
08:42:56 sure
08:43:03 let's keep discussing it
08:44:51 I understand that binding is a bad idea
08:45:27 probably the "ad-hoc workflow" feature will solve this problem
08:46:24 ok
08:46:49 guys, do you have anything else to discuss?
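The "ad-hoc workflow" semantics agreed on above — create an execution directly from a definition, without registering a named workflow object that `mistral workflow-get` could later find — can be sketched like this. This is only an illustration of the intended behavior under discussion; `AdHocEngine` and its methods are hypothetical, not Mistral's implementation.

```python
class AdHocEngine:
    """Toy model of the proposed one-off ("ad-hoc") execution behavior."""

    def __init__(self):
        self.workflows = {}   # named, persisted workflow objects
        self.executions = []  # execution records

    def execution_create_from_definition(self, definition):
        # The definition travels with the execution itself; nothing is
        # added to self.workflows, so a later "workflow-get" equivalent
        # would not find it.
        execution = {"id": len(self.executions) + 1,
                     "definition": definition}
        self.executions.append(execution)
        return execution
```

This mirrors the CLI shape discussed ("mistral execution-create --definition my_wf.yaml"): the execution exists and is persisted for reliability, but no reusable workflow object is created as a side effect.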
08:47:22 nope
08:47:27 eyalb1, apetrich: btw, I didn't release T-1 and T-2 because I kind of didn't see a big point
08:47:42 wait
08:47:45 one question
08:47:50 what do you think we need to do moving forward?
08:48:28 apetrich, eyalb1: I thought about just releasing T-1 instead of T-3 when the T-3 time comes and then going to RCs
08:48:30 why don't we build and deploy a docker image in the CI steps?
08:48:52 vgvoleg: aah, Andras knows better but he's on vacation now
08:49:01 I think he'll be here next week
08:49:13 there was a reason for it but I honestly don't remember
08:49:32 nice
08:49:45 we used to have this job but then it was removed
08:50:03 let's ask Andras when he's back
08:51:17 ok
08:52:05 ok, so let's finish the meeting now
08:52:28 apetrich, eyalb1: please reply to the release question when you have a chance
08:52:32 #endmeeting