Friday, 2019-08-23

00:07 *** dtruong has quit IRC
00:07 *** dtruong has joined #heat
00:19 *** asalkeld has joined #heat
00:21 *** asalkeld has quit IRC
00:21 *** asalkeld_ has joined #heat
00:27 *** asalkeld__ has joined #heat
00:30 *** asalkeld_ has quit IRC
00:37 *** asalkeld has joined #heat
00:39 *** asalkeld__ has quit IRC
01:30 <openstackgerrit> Rico Lin proposed openstack/heat-agents master: Skip testing docker cmd hook tests until story 2006430 fixed  https://review.opendev.org/678118
01:31 <ricolin> zaneb, ^^^
01:31 <ricolin> stevebaker, what's the current plan for paunch to support docker?
01:32 <ricolin> stevebaker, we got some issues with paunch blocking heat-agents gate jobs https://storyboard.openstack.org/#!/story/2006430
01:46 <stevebaker> ricolin: I don't know what the planned timing is for docker removal from paunch, it would probably be best to ask mwhahaha or EmilienM in #tripleo. Docker isn't available in centos-8, hence the switch to podman. As for the heat-agents job breakage, I doubt that was deliberate. (I see there were some reverts proposed yesterday https://review.opendev.org/#/q/topic:revert/bug/1833081 )
01:48 <ricolin> stevebaker, thx for the info
01:48 <mwhahaha> Docker should still be supported until we've switched to centos 8
01:48 <mwhahaha> Not sure what broke but we'll fix it
01:49 <mwhahaha> (tomorrow probably)
01:49 <ricolin> mwhahaha, stevebaker I just proposed a patch to support backward compatibility in paunch. Feel free to check https://review.opendev.org/#/c/678117/
01:50 <mwhahaha> K
01:51 <ricolin> mwhahaha, heat-agents broke because we use a testing cmd and feed it into paunch.apply; since the paunch workflow changed to use podman by default, the test cmd can't actually run anymore
01:51 <ricolin> And that's when I noticed the deprecation of docker in paunch
01:52 <ricolin> I think zaneb also provided some comments about that in https://review.opendev.org/#/c/678086 too
01:53 <mwhahaha> I'm not sure we should keep that hook. But we'll figure it out I guess
01:53 *** asalkeld has quit IRC
02:21 <openstackgerrit> Rico Lin proposed openstack/heat-agents master: Skip testing docker cmd hook tests until story 2006430 fixed  https://review.opendev.org/678118
02:27 <zaneb> mwhahaha: it seems to me the real problem here is that Heat is relying on a part of TripleO (an inversion of the usual dependencies). and while TripleO doesn't care about supporting anything but CentOS, that isn't the case for Heat
02:27 <zaneb> I don't know what the solution to that looks like though
02:28 <zaneb> stevebaker: would it be possible to test paunch on Ubuntu or some other platform that has docker?
02:35 <stevebaker> zaneb: not really. The unit tests will run anywhere, and the only functional test coverage is tripleo ci. I think it is reasonable for heat to ask the paunch maintainers to keep docker support as long as there are users of that heat-agents hook
02:37 <zaneb> I can't imagine there ever not being users of that hook at this point
02:41 <stevebaker> ricolin: zaneb let me try a fix in heat-agents
02:41 <zaneb> be my guest ;)
02:42 <zaneb> I wish we had co-gating. every paunch release breaks heat-agents
02:42 <ricolin> stevebaker, super!
02:42 <ricolin> what zaneb just said is true ;/
02:59 <stevebaker> zaneb: ricolin +1 for co-gating, yes
03:02 *** maddtux has joined #heat
03:04 <ricolin> stevebaker, thx, let me know how your fix works :)
03:05 <stevebaker> ricolin: instead of setting the command to /path/to/config-tool-fake.py, set it to 'docker' and make sure the PATH is set so 'docker' is a copy of config-tool-fake.py
03:07 <ricolin> interesting
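[editor's note] stevebaker's PATH trick can be sketched roughly as follows; the shim body and the `fake-docker` echo are illustrative stand-ins for config-tool-fake.py, not the real heat-agents test fixture:

```python
# Sketch of the suggestion: drop a fake `docker` executable into a temp
# dir and prepend it to PATH, so anything that shells out to `docker`
# (e.g. paunch) hits the shim instead of a real container runtime.
import os
import stat
import subprocess
import tempfile

fakebin = tempfile.mkdtemp()
shim = os.path.join(fakebin, "docker")
with open(shim, "w") as f:
    # stand-in for config-tool-fake.py: just record how it was invoked
    f.write('#!/bin/sh\necho "fake-docker $@"\n')
os.chmod(shim, os.stat(shim).st_mode | stat.S_IEXEC)

# prepend the shim dir to PATH for the child process
env = dict(os.environ, PATH=fakebin + os.pathsep + os.environ["PATH"])
out = subprocess.run(["docker", "run", "x"], env=env,
                     capture_output=True, text=True)
print(out.stdout.strip())   # prints: fake-docker run x
```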
03:17 *** ramishra has joined #heat
03:19 *** gkadam has joined #heat
03:48 <stevebaker> ricolin: actually what it really needs is a rewrite of that test ;) the tests shouldn't be concerned with the internals of paunch and what docker calls are made, and this test will keep breaking as long as it carries on. I'm going to try mocking out all of paunch, and rewriting the test to make sure apply is called with the expected arguments. Is it ok if I do this first thing Monday?
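[editor's note] a rough sketch of the rewrite stevebaker describes, mocking paunch out entirely and asserting only on the arguments; `hook_main`, its signature, and the keyword arguments are hypothetical, not the real heat-agents entry point:

```python
# Instead of checking which docker commands paunch runs internally,
# replace paunch with a Mock and assert the hook calls paunch.apply
# with the expected arguments.
from unittest import mock

def hook_main(config, paunch_module):
    # stand-in for the docker-cmd hook's entry point
    return paunch_module.apply(
        config_id=config['id'],
        config=config['config'],
        managed_by='docker-cmd',
    )

fake_paunch = mock.Mock()
fake_paunch.apply.return_value = ([], [], 0)  # illustrative return value
hook_main({'id': 'abc', 'config': {}}, fake_paunch)

# the test only cares that apply was invoked correctly
fake_paunch.apply.assert_called_once_with(
    config_id='abc', config={}, managed_by='docker-cmd')
```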
03:56 <openstackgerrit> Rabi Mishra proposed openstack/heat master: Use connect_retries when creating clients  https://review.opendev.org/678039
03:56 <openstackgerrit> Rabi Mishra proposed openstack/heat master: Ensure _static exists with placeholder  https://review.opendev.org/678137
04:08 <ricolin> stevebaker, sure :)
04:12 <ramishra> ricolin: the docs job seems broken looking for _static, not sure why all of a sudden, probably something changed with sphinx; ^^ fixes it
04:14 *** ricolin has quit IRC
04:21 *** ricolin has joined #heat
04:31 <ramishra> ricolin: not sure if you saw my earlier message about fixing the gate as you got disconnected after that...
04:40 <ricolin> ramishra, I didn't receive it ;/
04:42 <ramishra> ricolin: ok https://review.opendev.org/678137
04:50 <ricolin> ramishra, approved
04:50 <ricolin> thx
04:55 *** skramaja has joined #heat
05:02 *** ricolin has quit IRC
05:03 *** ricolin has joined #heat
05:13 <openstackgerrit> Merged openstack/heat master: Ensure _static exists with placeholder  https://review.opendev.org/678137
05:33 *** e0ne has joined #heat
05:44 *** e0ne has quit IRC
06:08 *** skramaja has quit IRC
06:11 *** skramaja has joined #heat
06:35 *** deepak_mourya_ has quit IRC
06:35 *** deepak_mourya_ has joined #heat
06:42 *** e0ne has joined #heat
06:43 *** gkadam has quit IRC
06:43 *** e0ne has quit IRC
06:53 *** e0ne has joined #heat
06:59 *** e0ne has quit IRC
07:07 *** jawad_axd has joined #heat
07:15 *** ricolin has quit IRC
07:15 *** rcernin has quit IRC
07:20 *** gfidente|afk is now known as gfidente
07:29 *** jtomasek has joined #heat
07:38 *** maddtux has quit IRC
07:39 *** maddtux has joined #heat
07:42 *** jtomasek has quit IRC
08:05 *** e0ne has joined #heat
08:18 *** k_mouza has joined #heat
08:26 *** ivve has joined #heat
08:33 <ivve> hey there, I have an issue with heat not accepting a template that looks like this: https://hastebin.com/qugajevoyu.http. At the bottom, creating the cloudconfig, I'm trying to fetch the address of the server, which is the cause of the error. Is there any proper way of getting this information?
08:34 <ivve> { get_attr: [ port_master01, fixed_ips, value ] } ?
08:35 <ivve> my bad
08:35 <ivve> { get_attr: [ port_master01, fixed_ips, ip_address, value ] } ?
09:38 <ivve> { get_attr: [ port_master01, fixed_ips, 0, ip_address ] }
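[editor's note] ivve's final form has the right shape: the port's `fixed_ips` attribute is a list of maps, so you index the first entry and then the `ip_address` key. A minimal HOT sketch (the resource and network names are assumed from the paste, not confirmed):

```yaml
resources:
  port_master01:
    type: OS::Neutron::Port
    properties:
      network: private   # assumed network name

  master01_config:
    type: OS::Heat::CloudConfig
    properties:
      cloud_config:
        write_files:
          - path: /etc/master01-address
            # fixed_ips is a list of {ip_address, subnet_id} maps:
            # index entry 0, then pick the ip_address key
            content: { get_attr: [ port_master01, fixed_ips, 0, ip_address ] }
```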
09:52 *** rmulugu has joined #heat
10:00 *** hjensas has quit IRC
10:02 <rmulugu> Hi, during a heat stack update, the stack failed with the error "resources.volumexxx: volume in use"
10:03 <rmulugu> how can we check the reasons for the recreation of instances?
10:19 <openstackgerrit> Rabi Mishra proposed openstack/heat master: Add retries when loading keystone data and fetching endpoints  https://review.opendev.org/678193
11:05 *** k_mouza has quit IRC
11:05 *** k_mouza_ has joined #heat
11:11 *** hjensas has joined #heat
11:15 *** k_mouza_ has quit IRC
11:15 *** k_mouza has joined #heat
11:16 *** skramaja has quit IRC
11:20 *** k_mouza has quit IRC
11:25 *** rmulugu has quit IRC
12:12 *** k_mouza has joined #heat
12:27 *** hjensas has quit IRC
12:46 *** maddtux has quit IRC
12:54 <openstackgerrit> Rabi Mishra proposed openstack/heat master: Add retries when loading keystone data and fetching endpoints  https://review.opendev.org/678193
13:24 *** ivve has quit IRC
13:34 *** bnemec has joined #heat
13:35 *** bnemec is now known as beekneemech
13:49 *** jawad_axd has quit IRC
13:53 *** jawad_axd has joined #heat
13:58 *** jawad_axd has quit IRC
14:01 *** ramishra has quit IRC
14:46 *** jawad_axd has joined #heat
14:51 *** jawad_axd has quit IRC
14:54 *** usr2033 has quit IRC
15:09 <openstackgerrit> Merged openstack/heat master: Use connect_retries when creating clients  https://review.opendev.org/678039
15:11 *** jawad_axd has joined #heat
15:16 *** jawad_axd has quit IRC
15:51 *** k_mouza_ has joined #heat
15:53 *** k_mouza_ has quit IRC
15:54 *** k_mouza has quit IRC
15:54 *** k_mouza has joined #heat
16:02 <gregwork> I'm noticing that a heat stack delete on stacks with octavia load balancers occasionally fails, with the load balancer provisioning state going into a "Pending Update" mode
16:03 <gregwork> this causes my overall stack delete to get stuck in "DELETE_IN_PROGRESS"
16:03 <gregwork> pretty much all day
16:03 <gregwork> is there a way to stop the hung stack delete?
16:06 <openstackgerrit> Rabi Mishra proposed openstack/heat master: Add retries when loading keystone data and fetching endpoints  https://review.opendev.org/678193
16:07 *** beekneemech has quit IRC
16:20 *** bnemec has joined #heat
16:44 *** k_mouza has quit IRC
16:53 *** bnemec has quit IRC
17:19 <gregwork> in queens you also can't resource-signal a resource pending delete
17:19 <gregwork> either, so this is just hard stuck
18:39 <zaneb> gregwork: delete the LB directly from Octavia
18:40 <gregwork> can't, it's been set immutable by heat
18:40 <gregwork> I get 409 errors touching it
18:40 <zaneb> that's a thing?
18:40 <gregwork> I know, right
18:43 <zaneb> ok, just read the API page and this might be a question for Octavia developers
18:46 *** e0ne has quit IRC
19:06 *** k_mouza has joined #heat
19:10 *** bnemec has joined #heat
19:11 *** bnemec is now known as beekneemech
19:11 <gregwork> zaneb: do you know if heat kill -9's vs -15's the octavia controller process? (was a question from johnsom in #openstack-lbaas)
19:14 <zaneb> heat doesn't touch the octavia process
19:14 <zaneb> gregwork: ^
19:14 *** johnsom has joined #heat
19:15 <gregwork> odd
19:15 <gregwork> and there's johnsom
19:15 <johnsom> Hi
19:16 <johnsom> I'm not familiar with the heat code gregwork mentioned in our channel.
19:16 *** ash2307 has joined #heat
19:17 <gregwork> openshift-ansible creates LB resources for itself in openstack by calling OS::Octavia::LoadBalancer / Listener / Pool / Pool member
19:17 <gregwork> the issue we are seeing is when we delete that stack the load balancer gets hung
19:18 <gregwork> hence my inquiries in this and the openstack-lbaas channel :)
19:18 <johnsom> Yeah, as I mentioned in our channel, the only way it would get "hung" is if something kill -9 the Octavia controller process instead of a -15 graceful shutdown.
19:18 <gregwork> and zaneb mentioned heat doesn't touch octavia?
19:18 <johnsom> Where is the code for this "OS::Octavia::LoadBalancer"?
19:20 <zaneb> johnsom: https://opendev.org/openstack/heat/src/branch/master/heat/engine/resources/openstack/octavia/loadbalancer.py
19:20 <zaneb> actually gregwork is on queens, so https://opendev.org/openstack/heat/src/branch/master/heat/engine/resources/openstack/octavia/loadbalancer.py
19:20 <zaneb> https://opendev.org/openstack/heat/src/branch/stable/queens/heat/engine/resources/openstack/octavia/loadbalancer.py I mean
19:21 <johnsom> zaneb This stuff only uses the Octavia API right? It doesn't start/stop service processes right?
19:21 <zaneb> correct
19:21 <johnsom> Yeah, so heat cannot be causing a resource to be stuck in a "PENDING_*" state
19:22 <zaneb> you'd hope not :)
20:03 *** ivve has joined #heat
21:19 <gregwork> heat stack-delete openshift-cluster is returning: DELETE_FAILED  Resource DELETE failed: JSONDecodeError: resources.masters.resources[1].resources.api_lb_member: Expecting value: line 1 column 1 (char 0)
21:19 <gregwork> I haven't seen JSONDecodeError as a reason for failing before
21:19 <gregwork> any suggestions on where to look / what's going on?
21:24 <gregwork> openstack stack resource show nested-stack-name api_lb_member is the one with JSONDecodeError: resources.api_lb_member: Expecting value: line 1 column 1 (char 0)
21:24 <gregwork> I guess that's kind of the same thing heh
21:24 <gregwork> from resource OS::Octavia::PoolMember
21:32 <gregwork> https://pastebin.com/xgkxd1c6
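[editor's note] for what it's worth, the exact message in gregwork's error is what Python's json module raises when handed an empty body, which suggests the client got an empty (non-JSON) response from the API and tried to decode it:

```python
# "Expecting value: line 1 column 1 (char 0)" is json's complaint about
# parsing an empty or non-JSON string; heat surfaces it verbatim as the
# DELETE failure reason.
import json

try:
    json.loads("")          # e.g. an empty HTTP response body
    msg = None
except json.JSONDecodeError as exc:
    msg = str(exc)

print(msg)                  # prints: Expecting value: line 1 column 1 (char 0)
```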
22:11 *** rcernin has joined #heat
22:17 *** ivve has quit IRC
23:11 *** rcernin has quit IRC
23:12 *** rcernin has joined #heat
23:26 *** beekneemech is now known as keanu
23:27 *** keanu is now known as beekneemech
23:51 *** k_mouza has quit IRC
23:51 *** k_mouza has joined #heat
23:56 *** k_mouza has quit IRC

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!