Tuesday, 2015-09-15

01:03 <openstackgerrit> Santosh Kodicherla proposed stackforge/tacker: Add tacker functional tests  https://review.openstack.org/222329
15:42 <tbh> bobh, Hi
15:42 <tbh> bobh, I just got home
15:43 <tbh> bobh, I will set up my dev env now
15:43 <tbh> bobh, can't we see the heat stack when I create a vnf?
15:57 <lsp42> stupid question time... how do I review a patchset? I can guess, but I would rather know the method this group uses
16:04 <sripriya> lsp42: we follow the same procedure as the rest of the openstack projects. if you are satisfied with a patchset, you can leave a +1 on it. if you have questions/comments, you can leave a -1 with your comments. also, if you would like to test the patchset in your own tacker environment, you can pull the review via 'git review -d <gerrit_review_no>' and test the changes. hope this answers your question.
16:04 <lsp42> sripriya: Yes! Thank you.
16:11 <tbh> sripriya, how did you get the basic setup working?
16:11 <tbh> sripriya, what vnf do you generally create for dev work?
16:53 <sripriya> tbh: i usually work with a cirros vnf most of the time
16:53 <tbh> sripriya, can't we see the heat stack when we create a vnf?
16:54 <sripriya> tbh: the heat stack is the one we spawn in the background when we create a vnf
16:54 <sripriya> which internally spawns a vm in nova/compute
16:55 <tbh> sripriya, but I can't see anything in heat stack list, and no vm in nova list
16:55 <tbh> but I can see it in virsh list
16:55 <sripriya> all the heat stacks are created in the service tenant
16:55 <tbh> sripriya, oh got it
16:56 <tbh> thanks
16:56 <sripriya> you can log in as the 'tacker' user to see the stack created in the service tenant, or as an admin under Admin -> Instances
16:56 <sripriya> tbh: cool
16:57 <tbh> sripriya, even as an admin I can't see the instances
16:57 <tbh> oh, Admin -> Instances; I was looking at Project -> Instances of the admin tenant
16:57 <sripriya> tbh: yes
17:01 <tbh> sripriya, I forgot, I made a mistake in this review https://review.openstack.org/#/c/222715/, sorry for that
17:01 <tbh> sripriya, I moved the correct file, but changed the path in the wrong section
17:05 <sripriya> tbh: that is fine, you need not be sorry :-) please go ahead with a follow-up patchset under the same bug and clearly state the commit message; that should be good...
17:06 <tbh> sripriya, waiting for this
17:10 <bobh> tbh: Hi - the heat stack shows up under the "service" account, at least until we start passing through the tenant credentials
17:14 <tbh> bobh, oh okay
17:14 <tbh> bobh, I am still setting up the env on my home laptop
17:20 <openstackgerrit> bharaththiruveedula proposed stackforge/tacker: Moves NOOP driver from tests folder to tacker/vm  https://review.openstack.org/223704
17:23 <lsp42> Noob question #2 of the day. When it comes to downloading a patchset, do I just leave DevStack running from the stack.sh script?
17:24 <lsp42> so stack.sh, download the patchset via the OpenStack methodology, tweak and observe changes?
17:28 <bobh> lsp42: depends on whether it's a client or server patch
17:28 <bobh> lsp42: if it's a client patch, yes, that's pretty much it
17:28 <lsp42> and if it's a server patch?
17:28 <bobh> lsp42: if it's a server patch you need to go into the screen session window for the server you are patching and ctrl-c to kill the server
17:29 <bobh> lsp42: then up-arrow to recall the server command and hit return to restart it
17:29 <lsp42> bobh: Ah, ok. That's not so bad :)
17:29 <sridhar_ram> lsp42: yeah, be careful... devstack will spoil you ;-)
17:30 <lsp42> sridhar_ram: Don't joke about that!
17:32 * lsp42 list of Tacker-related things is growing
17:34 <tbh> bobh, I think it is better to have a monitor for each vdu
17:35 <tbh> let's say I want to deploy an IMS vnf
17:36 <tbh> it has many vdus, and I want to monitor each VDU with a different monitoring driver and its own params
17:36 <tbh> so in this case it can be the same or a different monitoring driver, but definitely different params
17:36 <tbh> so it is better to go with a monitor for each VDU
17:37 <tbh> what do you say?
17:41 <bobh> tbh: I think it's an architecture/design question - one person might want to write a single monitor driver that does all of the individual VDU monitoring itself, while another person might want the building blocks
17:41 <bobh> tbh: I think I'm leaning toward the individual VDU, but that means more changes in the server
17:42 <bobh> tbh: it shouldn't be that hard, and there is always the possibility of doing a "VNF monitor" later that would cover all of the VDUs
17:43 <tbh> bobh, if we follow the individual VDU driver approach, then the user can write one driver and reference the same one for all the VDUs
17:44 <tbh> bobh, why many changes? I think we are not even storing the monitoring info in the db
17:44 <bobh> tbh: right, which could mean a lot of duplication in the template
17:45 <tbh> bobh, duplication occurs only if they want to use the same driver, but that's not the case every time
17:45 <bobh> tbh: today the monitoring thread uses the Device table (I think), which has the monitor policy stored as an attribute of the VNF
17:45 <tbh> bobh, it is like writing a heat template to launch 5 vms from the same image; we mention the same image each time, but I don't think that counts as duplication
17:46 <bobh> tbh: so the change is to store the monitoring data in a list of maps, probably with the VDU names as the keys
17:46 <sridhar_ram> bobh: tbh: this is something we discussed in the mid-cycle... and for now we called out to make the mon-driver per-VDU specific
17:46 <bobh> tbh: true, I just like to avoid duplication. There are ways around it in TOSCA, so not a big deal
17:47 <bobh> sridhar_ram: thanks - I didn't recall the discussion
17:47 <tbh> sridhar_ram, bobh oh okay
17:47 <sridhar_ram> given we are early in this space... we don't know the operators' usage patterns, so per-VDU will give enough deployment options...
17:48 <tbh> sridhar_ram, that's true
17:48 <sridhar_ram> also... even code-wise... we need the mgmt-ip of each VDU to probe for health
17:49 <bobh> so the driver will receive the mgmt IP of the VDU along with whatever parameters are specified
17:49 <sridhar_ram> the only catch is the respawn logic... which I agree w/ Bob is kind of misplaced, since it respawns the whole VNF
17:49 <sridhar_ram> bobh: yes
17:50 <bobh> pre-Kilo it was all that was available anyway
17:50 <bobh> but clearly an area for improvement
17:50 <lsp42> Is there any movement on the whole ve-vnfm-vnf reference point? So for now everything is in band. Is there a way to use oslo further down the line and do some RPC?
17:52 <sridhar_ram> yeah... for the near term, to bound the scope of this effort, we can keep monitoring per-VDU but make the respawn action apply to the whole VNF
17:52 <sridhar_ram> sorry, that was for bobh
17:53 <lsp42> :)
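The per-VDU scheme discussed above can be sketched in plain Python. The structures and names below (`vnfd`, `monitoring_policy`, the driver names) are illustrative only, not the actual Tacker VNFD schema or code:

```python
# Sketch: expanding per-VDU monitoring policies into monitoring tasks.
# Each VDU names its own driver and driver-specific params; the same
# driver with different params is also fine, as discussed above.
vnfd = {
    "vdus": {
        "vdu1": {"mgmt_ip": "192.0.2.10",
                 "monitoring_policy": {"driver": "ping",
                                       "params": {"count": 3}}},
        "vdu2": {"mgmt_ip": "192.0.2.11",
                 "monitoring_policy": {"driver": "http_ping",
                                       "params": {"port": 8080}}},
    }
}

def build_monitor_tasks(vnfd):
    """One monitoring task per VDU: name, driver, mgmt_ip, params."""
    tasks = []
    for name, vdu in sorted(vnfd["vdus"].items()):
        policy = vdu["monitoring_policy"]
        tasks.append({
            "vdu": name,
            "driver": policy["driver"],
            "mgmt_ip": vdu["mgmt_ip"],  # each driver probes its own VDU
            "params": policy.get("params", {}),
        })
    return tasks

tasks = build_monitor_tasks(vnfd)
for t in tasks:
    print(t["vdu"], t["driver"], t["mgmt_ip"])
```

Per the agreement above, a failure reported by any one of these per-VDU tasks would still trigger a respawn of the whole VNF for now.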
17:53 <bobh> lsp42: I haven't seen anything in that area - I think it's probably a way down the road yet
17:54 <sridhar_ram> lsp42: well, your point is quite valid... we did briefly discuss this in Tacker... having one or more dedicated "health-mon" agents that would be scheduled for a given monitoring task
17:54 <sridhar_ram> ... along the lines of the neutron l3-agent
17:54 <lsp42> sridhar_ram: Yeah, I guess this is a wider industry challenge.
17:54 <sridhar_ram> bobh: agree, this is something we need to take up down the line
17:55 <lsp42> sridhar_ram: No vendor or project is going to want to run an 'agent' on a VNF
17:55 <sridhar_ram> lsp42: the solution should scale, but we are taking baby steps :)
17:55 <lsp42> sridhar_ram: so consensus is needed from somewhere to twist arms
17:56 <lsp42> sridhar_ram: Yes! Baby steps lead somewhere though :)
17:56 <bobh> lsp42: Agents have been an issue for a long time - people want the functionality that an agent provides, but they don't want the overhead/security risks of an agent
17:56 <sridhar_ram> lsp42: that's a slightly different subject - (vnf) agentless vs vnf agent-based monitoring...
17:57 <lsp42> sridhar_ram: Tacker could be the perfect project to cultivate industry consensus on this matter
17:57 <sridhar_ram> what I'm talking about here is... loading tacker's mgmt-driver from the health-mon agent instead of directly from the tacker-server, which is how it is right now
17:58 <lsp42> sridhar_ram: It comes back to the reference point argument. Different parts of the industry push the blame for lack of progress onto each other. Agentless vs agent-based... NETCONF vs something new. Same old issues going round and round! With the work bobh and tbh are doing, I guess at least Tacker is in a position to react when the reference point is firmed up
18:00 <lsp42> sridhar_ram: Ok, I think I understand
18:00 <sridhar_ram> lsp42: totally, we sure will provide a point of view that is based on something deployed and running ;-)
18:00 <sridhar_ram> lsp42: it is more of a scalability issue for the tacker-service
18:01 <lsp42> sridhar_ram: I get you now. Sorry. Totally misunderstood to start with :)
18:04 <lsp42> sridhar_ram: So it would use something like oslo to talk to the health-mon agent? Would the agent be responsible for reporting what drivers were available?
18:04 <sridhar_ram> lsp42: np at all on the misunderstanding... there are some more interesting aspects of this problem space to indulge in. the biggest challenge is to pace ourselves... with the folks showing up here w/ the "coding" shovels :)
18:05 <lsp42> sridhar_ram: Yes. I'm like an excited shiny-thing chaser. Before long, hopefully I will also start actually coding and mending bugs. Or at least that's the plan.
18:05 <lsp42> hence all the questions around the dev env
18:06 <sridhar_ram> lsp42: yes, we could use oslo_rpc to talk between the tacker-server and a future tacker health-mon agent... the health-mon agent would run the periodic mon-driver and report only selected events back to the tacker-server, which reacts with any healing action
18:06 <sridhar_ram> lsp42: totally, there is no other way around it :)
18:07 <bobh> I was thinking lsp42 meant RPC to the VNF - that was a bit more challenging :-)
18:07 <lsp42> bobh: Yikes! Heh
18:07 <sridhar_ram> bobh: that would be agent-based monitoring
18:07 <bobh> sridhar_ram: That's why I was confused :-)
18:08 <sridhar_ram> interestingly... there was code in tacker until a couple of weeks back that actually "tried" to do that...
18:08 <sridhar_ram> I threw it away recently!
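The server/agent split sridhar_ram describes (the agent runs the periodic mon-driver checks and only selected events travel back to tacker-server) can be sketched without oslo_rpc. Everything here is hypothetical; in a real implementation `report_cb` would be an oslo.messaging RPC call back to tacker-server:

```python
# Sketch of the proposed tacker-server / health-mon agent split.
# Names and shapes are assumptions, not existing Tacker code.

def run_periodic_checks(checks, report_cb, interesting=("DOWN", "DEGRADED")):
    """Run one round of mon-driver checks, reporting only selected events.

    checks: iterable of (vdu_name, probe_fn); probe_fn() returns a
    status string such as "UP" or "DOWN".
    """
    reported = []
    for vdu, probe in checks:
        status = probe()
        # Healthy results stay local to the agent; only events the
        # server may need to heal are sent over the (simulated) RPC.
        if status in interesting:
            report_cb(vdu, status)
            reported.append((vdu, status))
    return reported

events = []
checks = [
    ("vdu1", lambda: "UP"),
    ("vdu2", lambda: "DOWN"),  # simulated failed probe
]
run_periodic_checks(checks, lambda vdu, st: events.append((vdu, st)))
print(events)
```

The point of the design is in the filter: the agent absorbs the steady stream of healthy probe results, so the server only has to react to healing-relevant events.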
18:09 <lsp42> sridhar_ram, bobh: Sorry for the confusion :-)
18:09 <tbh> sridhar_ram, any reason why you avoid agent-based monitoring?
18:09 <sridhar_ram> fyi, here is the removal code for future reference... https://review.openstack.org/#/c/215377/
18:10 <bobh> tbh: Convincing vendors to allow you to install the agent is a big challenge
18:10 <tbh> bobh, yeah that's true :)
18:11 <sridhar_ram> tbh: we can sure consider that if there is a request from operators... but 100% agree w/ bobh
18:11 <bobh> tbh: we used an agent-based system for several years, based on an early Cloudify product. It worked pretty well, but there were lots of cases where it wouldn't work at all
18:12 <bobh> lots of vendors give you a closed image that you can't modify - that presents a number of problems
18:13 <sridhar_ram> tbh: also, the original tacker code used a model where the RPCs got proxied all the way into the VNF... that's a slippery slope for multiple reasons, including security risk
18:13 <lsp42> bobh: It's a pain, and with such a loose reference point between the vnfm and vnf, I think you're left with very little choice other than to come up with a set of logic conditions based on existing technology, like icmp echo, SNMP, NETCONF or possibly REST if the VNF supports it. It means more maintenance, though, ultimately. The industry as a 'team' has fallen short here massively
18:14 <sridhar_ram> you can always achieve something similar by writing a custom tacker mon-driver that probes *your* VNF in custom ways
18:14 <bobh> lsp42: Yep - there are 20 different ways to accomplish the same thing and no consensus, so nothing happens
18:14 <tbh> bobh, sridhar_ram got it
18:15 <bobh> lsp42: Sometimes I miss Bellcore....
18:15 <lsp42> bobh: LOL
18:15 <sridhar_ram> bobh: LOL
18:15 <bobh> at least they gave you a standard to work from - sometimes a lousy standard, but it beats no standard at all
18:16 <sridhar_ram> well... I was at the last IETF (#93)... guess what, most of the WGs are doing NETCONF work for each of their functional areas
18:16 <lsp42> bobh: I hear you for sure
18:16 <lsp42> sridhar_ram: no surprise there at all
18:16 <bobh> sridhar_ram: The TOSCA guys wouldn't be too happy about that
18:16 <sridhar_ram> bobh: so, don't lose hope... something might actually happen around NETCONF!
18:17 <sridhar_ram> bobh: no, IMO they are orthogonal... we are still chasing TOSCA for all the descriptors, but NETCONF is better suited for deep configuration of most standards-based VNFs
18:18 <bobh> sridhar_ram: I agree completely, but there seems to be some tension at the working group level
18:18 <sridhar_ram> bobh: that's true
18:19 <tbh> sridhar_ram, in my previous work too, we used NETCONF to configure VNFs
18:19 <bobh> given the installed base of NETCONF products, I don't see how anyone can think it will be replaced by TOSCA
18:19 <sridhar_ram> tbh: that's cool, makes perfect sense...
18:20 <sridhar_ram> bobh: IMO, as things stand now, it doesn't make sense to consider TOSCA for that...
18:20 <sridhar_ram> bobh: tbh: btw, back to our near-term job...
18:20 <sridhar_ram> bobh: tbh: what do you folks think about my earlier thought of per-VDU monitoring but per-VNF action?
18:20 <sridhar_ram> would that work and make things simpler for you folks to implement?
18:21 <bobh> sridhar_ram: seems like the best way forward for now
18:21 <bobh> sridhar_ram: it will need rework in the future, but that's a given anyway
18:21 <tbh> I don't remember the exact link, but it said the difference between a nova VM and a VNF is that a vm can be configured through ssh or something else, but a VNF through netconf :)
18:21 <tbh> sridhar_ram, sure
18:22 <tbh> sridhar_ram, yeah, one mon driver per VDU will do
18:22 <bobh> sridhar_ram: Am I correct in my understanding that the "device" structure passed to the monitoring thread is the same as the device and deviceattributes tables in the database?
18:22 <sridhar_ram> bobh: agree, it will be uneven for now... but at least on the correct path...
18:22 <tbh> sridhar_ram, checking the code changes
18:22 * sridhar_ram walking to my pycharm window
18:22 <bobh> lol
18:24 <lsp42> sridhar_ram: Are you being serious?
18:24 * lsp42 can't quite figure it out...
18:25 * sridhar_ram ... walking back from the code dungeon I have in my basement
18:27 <lsp42> long walk
18:27 <openstackgerrit> Santosh Kodicherla proposed stackforge/tacker: Add tacker functional tests  https://review.openstack.org/222329
18:27 <tbh> bobh, are we storing mon driver info in the db?
18:28 <bobh> that's what I'm trying to figure out - my development environment is having issues, so I haven't been able to check how it's actually working
18:28 <sridhar_ram> bobh: your understanding is correct.... though it is the instance_id that is added to device_dict
18:28 <bobh> sridhar_ram: Thanks - so we probably need to either modify the database or just push more data under attributes - that would be easier than modifying the database
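The "push more data under attributes" option bobh mentions can be sketched as serializing the per-VDU monitoring policy to JSON inside the existing string-valued attributes of the device record, avoiding a schema change. The dict shapes and key names here are assumptions for illustration, not the actual Tacker tables:

```python
import json

# Sketch: store per-VDU monitoring policy under the device's existing
# key/value attributes instead of adding new database columns.
# "monitoring_policy" as a key name is a hypothetical choice.

def store_monitoring_policy(device_attributes, per_vdu_policy):
    """Serialize the per-VDU policy map into one attribute value."""
    device_attributes["monitoring_policy"] = json.dumps(
        per_vdu_policy, sort_keys=True)

def load_monitoring_policy(device_attributes):
    """Recover the per-VDU policy map; empty if never stored."""
    raw = device_attributes.get("monitoring_policy")
    return json.loads(raw) if raw else {}

attrs = {}  # stands in for the deviceattributes key/value rows
store_monitoring_policy(attrs, {"vdu1": {"driver": "ping"},
                                "vdu2": {"driver": "http_ping"}})
policy = load_monitoring_policy(attrs)
print(policy["vdu2"]["driver"])
```

The trade-off matches the discussion: JSON-in-attributes needs no migration, but the policy is opaque to SQL queries, whereas real columns would need the database change bobh would rather avoid.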
22:09 <sridhar_ram> folks - anyone up to review https://review.openstack.org/#/c/222739/ (dead code removal)?
22:13 <sripriya> sridhar_ram: i will review it within this week (2-3 days)
22:13 <sridhar_ram> sripriya: thanks!

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!