Friday, 2019-04-12

*** dangtrinhnt has quit IRC  01:45
*** dangtrinhnt has joined #openstack-fenix  01:49
*** tojuvone has quit IRC  02:28
*** tojuvone has joined #openstack-fenix  02:28
<hyunsikyang> Hello tojuvone and dangtrinhnt,  04:02
<hyunsikyang> I have a question.  04:02
<tojuvone> hi hyunsikyang, yes?  04:05
<hyunsikyang> Hi  04:05
<hyunsikyang> I saw your comment. Thanks :)  04:06
<tojuvone> np :)  04:07
<hyunsikyang> BTW, do you mean that we don't need to separate the cases? I just want to emphasize what the trigger point is (the host or the VNF itself).  04:09
<tojuvone> The infra admin calls maintenance on the infrastructure. There is interaction with the VNF(M) because it needs to minimize downtime  04:11
<tojuvone> It just happens that with this capability, the VNF could make its own upgrade at the same time  04:11
<tojuvone> as the VMs are to be "moved" anyhow, and Fenix can tell about new infra capabilities like new HW or SW in the infra that the VNF can take into use  04:12
<tojuvone> So the idea is to do the infra maintenance/upgrade,  04:13
<tojuvone> but if it happens to affect all VMs in the VNF anyhow, why not do the VNF upgrade at the same time  04:13
<tojuvone> it's a win-win situation from the VNF point of view  04:15
<tojuvone> so the difference between the cases is just whether you have / add empty capacity as compute nodes  04:17
<tojuvone> or you need to scale down the VNF  04:17
<tojuvone> Thoughts?  04:19
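
To make the difference between the two cases concrete, here is a minimal Python sketch of the decision a VNF manager would face when an infra maintenance session touches its VMs. Everything in it is an assumption for illustration: the state names, the reply endpoint URL, and the callback itself are hypothetical placeholders, not the actual Fenix API.

    import requests

    def on_maintenance_notification(session_id, spare_compute_hosts, scale_in):
        """Hypothetical VNFM hook called when infra maintenance is announced."""
        if spare_compute_hosts == 0:
            # No empty capacity: shrink the VNF so hosts can be emptied one by one.
            scale_in()                       # hypothetical VNFM-internal call
            reply = "ACK_SCALE_IN"           # assumed reply state name
        else:
            # Empty capacity exists: VMs can simply be moved during the rolling
            # maintenance, and the VNF may upgrade each VM while it is moved.
            reply = "ACK_MAINTENANCE"        # assumed reply state name
        # Placeholder URL; the real answer goes to the Fenix maintenance session API.
        requests.put("http://fenix.example.com/v1/maintenance/%s" % session_id,
                     json={"state": reply})
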
<hyunsikyang> I agree.  04:20
<hyunsikyang> But when only the VNF wants to upgrade itself, we don't need to scale out or move the VNF, right?  04:21
<hyunsikyang> Ah, it does need that, when resources are not enough or the VNF has to provide service continuously.  04:22
<hyunsikyang> Right. Could you explain this a bit more: 'but if it happens to affect all VMs in the VNF anyhow, why not do the VNF upgrade at the same time'  04:23
<tojuvone> ok, explaining...  04:24
<hyunsikyang> You mean that one VNF consists of several VMs and one of the VMs needs an upgrade?  04:24
<tojuvone> If you were to plan a VNF upgrade while it keeps on running...  04:25
<tojuvone> you would do that VM by VM (a VNF consists of several VMs)  04:25
<tojuvone> now, if you have a rolling infrastructure maintenance/upgrade, it goes compute by compute...  04:26
<tojuvone> meaning, from the VNF point of view, "VM by VM"  04:26
<tojuvone> so upgrading at the same time as the infra means you do not go through the same operation "twice" on the VNF side  04:28
<tojuvone> "twice" meaning you would otherwise upgrade the VNF later,  04:29
<tojuvone> not at the same time the infra is upgraded  04:29
<tojuvone> surely that is a far safer thing to do  04:29
<tojuvone> in case the infra upgrade were to fail  04:29
<tojuvone> Anyway...  04:31
<hyunsikyang> give me a sec.  04:31
<tojuvone> if the infra upgrade is going to fail, it should be known before any compute node is upgraded  04:31
<tojuvone> first one upgrades the controllers and then one compute. It is tested anyhow before being given back to VNF use  04:32
<tojuvone> if this fails, the infra can be rolled back before it affects the VNF  04:32
<tojuvone> upgrading the computes should then already be a much safer thing to do, and for crucial things there should be pre-upgrade checks in place in the OpenStack services  04:33
<tojuvone> ok  04:33
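
The rolling flow outlined above can be summarised in a short sketch. It only restates the ordering described in the discussion; the callables are placeholders supplied by the caller, and nothing here is Fenix code.

    def rolling_maintenance(controllers, computes, upgrade, check, empty_host, rollback):
        """Controllers first, one compute as a canary, then host by host."""
        upgrade(controllers)                      # upgrade the control plane first
        if not check(controllers):                # pre-upgrade / sanity checks
            rollback(controllers)                 # fails before any compute is touched
            return False

        canary, rest = computes[0], computes[1:]  # then a single compute
        empty_host(canary)                        # VMs moved; the VNF may upgrade them now
        upgrade([canary])
        if not check([canary]):                   # tested before given back to VNF use
            rollback(controllers + [canary])      # still safe to roll the infra back
            return False

        for host in rest:                         # the rest should now be much safer
            empty_host(host)
            upgrade([host])
            check([host])
        return True
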
<hyunsikyang> I see. In the blueprint, I wanted to emphasize that the procedure will be different depending on who triggers the maintenance.  04:45
<hyunsikyang> but you also want to say that a VNF upgrade is also possible at the same time.  04:45
<hyunsikyang> In the blueprint, upgrading in the second case is just one example of when VNF maintenance is needed without starting from the host.  04:46
<hyunsikyang> OK, I will change it according to your comment.  04:48
<tojuvone> Yes. It is another approach if the VNF would just like to upgrade itself without the infra being upgraded  04:56
<hyunsikyang> And do we need to write a blueprint for adding the planned.maintenance type to aodh, or just commit it?  05:01
<tojuvone> For aodh, I think it is just about how it is configured / deployed  05:02
<tojuvone> The Fenix installation guide should say that one needs to configure aodh to be able to have an event alarm from planned.maintenance  05:03
<tojuvone> to me, it doesn't change aodh in any way otherwise  05:03
<hyunsikyang> So we just mention it in the docs...  05:06
<tojuvone> yes, I think so  05:07
<hyunsikyang> I thought we needed to commit code to aodh.  05:07
<tojuvone> aodh is generic code for raising event alarms from notifications  05:08
<tojuvone> so it just needs configuration of which notifications  05:08
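
To illustrate what that amounts to on the consumer side, here is a minimal sketch, assuming python-aodhclient and keystoneauth1 are installed, that creates an event alarm for the planned.maintenance event type. The endpoint, credentials, and alarm-action URL are placeholders, and the exact event-type string and client call signatures should be verified against the Fenix installation guide and the aodh documentation.

    from keystoneauth1 import loading, session
    from aodhclient import client as aodh_client

    # Placeholder credentials and endpoint.
    loader = loading.get_plugin_loader("password")
    auth = loader.load_from_options(
        auth_url="http://controller:5000/v3",
        username="vnfm", password="secret",
        project_name="vnf-project",
        user_domain_name="Default", project_domain_name="Default",
    )
    aodh = aodh_client.Client("2", session=session.Session(auth=auth))

    # Event alarm that fires when a planned.maintenance notification arrives;
    # the VNFM then answers Fenix from the alarm action it receives.
    alarm = aodh.alarm.create({
        "name": "planned_maintenance",
        "type": "event",
        "event_rule": {"event_type": "planned.maintenance"},
        "alarm_actions": ["http://vnfm.example.com/maintenance_alarm"],  # placeholder
    })
    print(alarm["alarm_id"])
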
*** tojuvone has quit IRC  05:13
*** tojuvone has joined #openstack-fenix  05:14
<hyunsikyang> got it :) Thanks  05:15
*** tojuvone has quit IRC  05:24
*** tojuvone has joined #openstack-fenix  05:31
*** tojuvone has quit IRC  05:42
*** tojuvone has joined #openstack-fenix  05:42
*** tojuvone has quit IRC  06:54
*** tojuvone has joined #openstack-fenix  06:54
*** tojuvone has quit IRC  07:58
*** tojuvone has joined #openstack-fenix  07:58
*** tojuvone has quit IRC  08:11
*** tojuvone has joined #openstack-fenix  08:11
*** tojuvone has quit IRC  08:15
*** tojuvone has joined #openstack-fenix  08:16
*** tojuvone has quit IRC  08:23
*** tojuvone has joined #openstack-fenix  08:24
*** tojuvone has quit IRC  08:47
*** tojuvone has joined #openstack-fenix  08:47
*** tojuvone has quit IRC  08:56
*** tojuvone has joined #openstack-fenix  08:56
*** tojuvone has quit IRC  09:02
*** tojuvone has joined #openstack-fenix  09:02
*** tojuvone has quit IRC  09:04
*** tojuvone has joined #openstack-fenix  09:05
*** tojuvone has quit IRC  09:13
*** tojuvone has joined #openstack-fenix  09:13
<tojuvone> Trying to describe how to model ETSI FEAT03 requirements  09:23
<tojuvone> Have to have some idea before PTG  09:23
*** tojuvone has quit IRC  09:49
*** tojuvone has joined #openstack-fenix  09:50
*** tojuvone has quit IRC  09:52
*** tojuvone has joined #openstack-fenix  09:52
*** tojuvone has quit IRC  10:11
*** tojuvone has joined #openstack-fenix  10:11
*** tojuvone has quit IRC  10:13
*** tojuvone has joined #openstack-fenix  10:13
*** tojuvone has quit IRC  11:57
*** tojuvone has joined #openstack-fenix  11:58
*** tojuvone has quit IRC  11:58
*** tojuvone has joined #openstack-fenix  11:59
*** tojuvone has quit IRC  12:10
*** tojuvone has joined #openstack-fenix  12:10
*** tojuvone has quit IRC  13:09
*** tojuvone has joined #openstack-fenix  13:10
*** tojuvone has quit IRC  16:18
*** tojuvone has joined #openstack-fenix  16:18
