Thursday, 2020-03-05

hyunsikyang: Hi tojuvone, when you have time, please let me know :)  04:28
tojuvone: Hi hyunsikyang, I am here now. Yesterday I had a standalone K8S-modified "Doctor VNFM" running to test the new workflow. Now I am more or less just missing the Fenix K8S workflow modifications for the AODH alarming to test the whole thing. Today I need to prepare a "Fenix with ETSI FEAT03" presentation for the OPNFV Closed Loop WG.  05:43
hyunsikyang: Oh good. When does it start? Is it a regular meeting?  05:44
tojuvone: If arranged: bi-weekly Thursdays at 7AM to 8AM Pacific Standard Time.  05:49
tojuvone: I could try to finish these last bits to test this new "K8S workflow POC" and share the code and how to test it. It will be simpler as no Doctor is needed: just DevStack to bring up Fenix, AODH and Keystone (and to provide the demo user/project used in testing), and then a K8S cluster with 3 worker nodes.  06:02
hyunsikyang: good :)  06:02
hyunsikyang: If I can't join that meeting, could you share the slides with me?  06:03
tojuvone: I think it makes discussing all this further easier ;)  06:03
hyunsikyang: Yes, I think so.  06:03
tojuvone: Yes, the meeting must be at a bad time for you. I will surely share the material.  06:04
hyunsikyang: Yes, midnight for me kk  06:04
hyunsikyang: thanks  06:04
tojuvone: Also not sure if anybody outside OPNFV is able to join, but this is the link to the meeting if that is possible: https://zoom.us/j/5014627785  06:05
hyunsikyang: I have an account for that. Last time I joined the OPNFV Infra WG meeting.  06:05
hyunsikyang: :)  06:05
tojuvone: Now I should make the material :o  06:06
tojuvone: I have some PowerPoint slides, but I'm not happy with them and they don't mention K8S yet.  06:06
hyunsikyang: Could you give me 5 minutes?  06:06
tojuvone: yes, sure  06:06
hyunsikyang: About the Tacker integration: after alarming,  06:07
tojuvone: yes  06:08
hyunsikyang: Fenix starts maintenance using the default workflow, right? Such as maintenance or scale-in...  06:08
hyunsikyang: And the tacker-plugin receives these messages, forwards them or acts on them, and returns the result to Fenix.  06:09
hyunsikyang: It acts with Tacker and returns.  06:09
tojuvone: There are currently default and vnf workflows. Yes, that is about what should be there on the Tacker side.  06:11
tojuvone: The problem might be the relation to the VNF: how to make those actions and the reply.  06:12
tojuvone: Easier with ETSI FEAT03, as you could at least have some VNFD to define the behavior statically.  06:13
hyunsikyang: https://review.opendev.org/#/c/643242/17/specs/train/vnf-rolling-upgrade.rst  06:14
hyunsikyang: So with this spec, I just want to know what the scope of this implementation is.  06:14
hyunsikyang: We have two procedures for Fenix in this spec.  06:14
hyunsikyang: So do we develop a real tacker-plugin function to manage all of the messages from Fenix?  06:15
hyunsikyang: Or can the tacker-plugin just reply with a response message to Fenix when it receives a message such as PREPARE_MAINTENANCE or ADMIN_ACTION_DONE?  06:16
tojuvone: looking at the spec...  06:16
hyunsikyang: But if you are busy today,  06:17
hyunsikyang: you don't need to spend time on it now.  06:17
hyunsikyang: I am just asking :)  06:18
hyunsikyang: For now we have developed the code up to alarming and returning the message to Fenix.  06:18
hyunsikyang: To make this procedure work and show a real maintenance, we made that code too. But some messages are not supported by Tacker now... so  06:19
tojuvone: Yes, so maybe this alarming part could be finalized if it works, and then another patch set to deal with those actions and replies.  06:19
hyunsikyang: I just wanted to discuss it with you.  06:19
tojuvone: So the "second" workflow in the spec might be something to start with, if scaling is not that easy to make.  06:20
tojuvone: PREPARE_MAINTENANCE and PLANNED_MAINTENANCE are very similar. Those could be implemented with an ACK to Fenix.  06:22
hyunsikyang: Yes. So one good way is that we can make a real demo with this, even if the procedure is very simple.  06:22
tojuvone: Every time Fenix needs to make a migration/live migration and it is done, there will be an ADMIN_ACTION_DONE. You do not reply to this.  06:23
hyunsikyang: Could you let me know where the definition of each message is? We need it to make the function.  06:23
hyunsikyang: Yes, these things should be clear...  06:23
tojuvone: Then you surely need to process MAINTENANCE and MAINTENANCE_COMPLETE with an ACK to Fenix.  06:24
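[editor's note] The reply rules discussed above (ACK for PREPARE_MAINTENANCE, PLANNED_MAINTENANCE, MAINTENANCE and MAINTENANCE_COMPLETE; no reply to ADMIN_ACTION_DONE) could be sketched in a VNFM/tacker-plugin roughly as follows. The "ACK_" + state reply naming is an assumption inferred from the ACK_PLANNED_MAINTENANCE entry in the Fenix log shared later in this conversation; verify it against the Fenix documentation.

```python
# Hedged sketch of the per-state reply logic discussed above.
# The "ACK_" + <state> reply name is an assumption based on the
# ACK_PLANNED_MAINTENANCE example seen in the Fenix log below.

ACK_STATES = frozenset({
    "PREPARE_MAINTENANCE",
    "PLANNED_MAINTENANCE",
    "MAINTENANCE",
    "MAINTENANCE_COMPLETE",
})

def reply_for(state):
    """Return the reply state Fenix expects for a notification,
    or None when no reply should be sent (ADMIN_ACTION_DONE is
    informational only, per the discussion above)."""
    return "ACK_" + state if state in ACK_STATES else None
```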
tojuvone: Let's see the Fenix documentation...  06:25
hyunsikyang: Sorry to bother you...  06:25
hyunsikyang: :( I know you are busy  06:25
tojuvone: No problem. I'll gather some links here and let's see if that is enough to define "each message".  06:27
tojuvone: https://fenix.readthedocs.io/en/latest/user/baseworkflow.html  06:27
tojuvone: https://fenix.readthedocs.io/en/latest/user/notifications.html#project  06:28
tojuvone: So each notification/event will have the same message structure, but not all of those variables. There is no specification for the content of a single state message, only what is described in that document.  06:30
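[editor's note] A hypothetical shape of one such project notification, built only from the fields mentioned in this discussion and in the log below ('instance_ids', the state names, the session and project ids). Every state reuses the same structure; the exact field set is whatever the notifications page linked above describes, so verify before relying on this.

```python
# Hypothetical example of a single project notification/event.
# Field names come from this conversation; any field not mentioned
# here may exist in the real message -- see the notifications doc.
example_event = {
    "state": "PLANNED_MAINTENANCE",
    "session_id": "<maintenance session uuid>",
    "project_id": "<project uuid>",
    # With the newer nfv workflow each instance gets its own message,
    # so this list holds a single instance id (see the log below).
    "instance_ids": ["<instance uuid>"],
}
```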
tojuvone: Also, for the VNFM reply there is only this: https://fenix.readthedocs.io/en/latest/api-ref/v1/index.html#id6  06:32
tojuvone: so the "PUT /v1/maintenance/{session_id}/{project_id}/"  06:32
tojuvone: The information is there, but it is not easy to handle, as it is not given per state (MAINTENANCE, PLANNED_MAINTENANCE, ...).  06:34
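[editor's note] The reply endpoint quoted above could be exercised with a small helper like the one below. The URL shape follows the api-ref link; the payload key ("state") is an assumption based on the ACK_PLANNED_MAINTENANCE entry in the Fenix log further down, so check the linked api-ref for the full reply schema.

```python
import json

def build_maintenance_ack(fenix_endpoint, session_id, project_id, state):
    """Build the URL and JSON body for the VNFM reply:
    PUT /v1/maintenance/{session_id}/{project_id}
    The 'state' payload key is an assumption -- verify against the
    Fenix api-ref linked above."""
    url = "%s/v1/maintenance/%s/%s" % (fenix_endpoint, session_id, project_id)
    body = json.dumps({"state": state})
    return url, body

# Sending it would then look roughly like this (endpoint and token are
# placeholders, and the 'requests' library is assumed available):
#   url, body = build_maintenance_ack("http://fenix:12347", sid, pid,
#                                     "ACK_MAINTENANCE")
#   requests.put(url, data=body,
#                headers={"Content-Type": "application/json",
#                         "X-Auth-Token": token})
```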
tojuvone: One straightforward thing might be to run the Doctor test and look at the Doctor app_manager/sample.py code to see that content.  06:35
tojuvone: Running the Doctor test will print each message/event.  06:36
tojuvone: The code is here: https://github.com/opnfv/doctor/blob/master/doctor_tests/app_manager/sample.py  06:36
hyunsikyang: OK, I will look at it.  06:37
tojuvone: Maybe I can find some logs... I think I shared some of that, or a Doctor CI run might have it.  06:37
hyunsikyang: But Tacker uses a Heat stack, so we should modify the code.  06:38
tojuvone: Surely the best thing would be the document, but we do not have that.  06:38
tojuvone: Doctor also makes the Heat-stack-relevant actions via Heat (scaling actions), but things like switch-over are not done via that. I don't know if that is even possible/feasible.  06:40
hyunsikyang: From the Tacker perspective, we can finish the code up to alarming, and then we can make a real scenario with procedure two in the spec. A simple thing.  06:40
hyunsikyang: I see. We need to discuss it more.  06:41
tojuvone: Yes, I think that sounds like a good plan.  06:41
hyunsikyang: We will try to finish it first, and then think about the real demo!  06:42
hyunsikyang: Really, thank you for today's discussion :)  06:42
tojuvone: hyunsikyang: Thank you for making all this :) That's great!  06:45
tojuvone: Old Doctor CI results seem to have been wiped out... But I might have some saved elsewhere.  06:46
tojuvone: Here are some logs with the new VNFM; they should give you most of the message contents for each state.  06:54
tojuvone: OPNFV Doctor test run: http://paste.openstack.org/show/787774/  06:54
tojuvone: Fenix logging for the Doctor test: http://paste.openstack.org/show/787775/  06:54
tojuvone: In the first one you can see what kind of message the VNFM receives ("VNFM received data"), what it does, and then its reply ("VNFM reply").  06:56
tojuvone: In the second link, the Fenix log, you can find similar information (Sending "maintenance.planned" to project). For the reply the message itself is not shown, just per project whether it was received (498677d1-e9e2-4930-ac05-965aa8d30858 in: ACK_PLANNED_MAINTENANCE). NOTE! As this is with the new VNFM and the nfv workflow, things are mostly different in that each instance has a separate message, so they can be handled in parallel. This means the 'instance_ids' list always has only one instance, not all the instances on the current compute. You do not have to care about that. Also in the log you can see some TIMEOUT and migration retries, as this environment has not been the most stable one for AODH alarm delivery, and also as Nova is pretty bad at doing many migrations in a short time window to the same compute hosts.  07:08
tojuvone: That is what I can find for now. Just ask anytime if you have any doubts and I will reply when online :)  07:10

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!