14:00:29 #startmeeting sahara
14:00:30 Meeting started Thu Nov 23 14:00:29 2017 UTC and is due to finish in 60 minutes. The chair is tellesnobrega. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:32 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:34 The meeting name has been set to 'sahara'
14:00:51 o/
14:01:57 waiting for show of hands
14:01:58 o/
14:02:44 * tosky did not expect jeremyfreudberg around
14:02:51 jeremyfreudberg: happy thanksgiving!
14:03:03 * tellesnobrega didn't expect him either
14:03:07 happy thanksgiving
14:03:36 tosky, tellesnobrega, thanks! it's still early here so i can squeeze it in before festivities
14:03:41 jeremyfreudberg is probably working up an appetite for a big turkey later
14:03:47 indeed
14:04:01 let's start
14:04:05 #topic News/Updates
14:04:45 sooo, the status of the zuul v3 migration is a bit stuck, but not bad
14:05:32 there is a relevant change that I'm waiting for before merging the sahara-tests jobs (if you are interested: zuul._projects renamed to zuul.projects, now a dict and not a list anymore)
14:06:13 the other blockers are a) lack of support for multinode devstack jobs (infra didn't work on it yet)
14:06:17 b) missing grenade job (but the old one can be used)
14:07:11 c) ongoing refactoring of the way jobs publish their results (I should ask again); it blocks the refactoring of the sahara-extra jobs (and the change to publish oozie)
14:07:14 and that's it for now
14:08:40 I'm currently working on two main things: MapR image generation and decommission of a specific node. On the first one I'm hitting an issue installing openjdk-8; it fails on the check for some reason. The second is not as simple as expected; I need to update the heat stack created by us (services.heat.ClusterStack), and I will probably need to add an option to pass a list of instances, since right now it just receives the number of instances on each node group
14:09:28 not much new to report from me, I haven't had time to revise the S3 job binary patch yet or fix the thing where s3_hadoop in SIE is breaking stuff
14:09:31 I'll get to it soon...
14:09:38 tellesnobrega, can you explain the openjdk thing?
14:09:38 SotK, the decommission on the heat side always deletes the last instance created
14:10:47 sure, basically I'm trying to deploy openjdk with package: java-1.8.0-openjdk-devel
14:11:08 just like I did for CDH, but it fails during the rpm -q java-1.8.0-openjdk-devel
14:11:25 I'm not sure why. The error that shows up is a runtime error: sh:
14:11:29 that is it :(
14:11:40 SotK, I'm trying to install with a script
14:11:41 wut
14:12:12 that's the time when I start putting print() around if I'm too lazy to go with a debugger
14:13:11 I did, I even started a python shell and launched guestfs with the image to test it
14:13:55 rpm -q really fails, but it installs with yum -y install, and after that rpm -q works. I'm not sure why it fails the first time
14:14:25 do other rpm commands work? (other options besides -q, -q with other packages) maybe your image is corrupted or something? not much to go on
14:14:48 jeremyfreudberg, I do install a lot of packages
14:15:26 they all work the same way: it tests whether the package is installed with rpm -q, and if it is not installed it runs yum -y install
14:15:45 very strange
14:16:33 there are other packages that don't install, but I took them out for now. I'm working on the most important ones now to see if it works, and then I will put the others back
14:17:33 Have any of you had experience with heat stacks?
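
For reference, a minimal sketch of reproducing the failing package check interactively with the libguestfs Python bindings, roughly as described above. The image path is an example, the single-root-filesystem assumption is mine, and the real sahara-image-pack code drives guestfs through its own wrapper rather than this way.

    import guestfs

    IMAGE = "mapr-base.qcow2"                # example image path (assumption)
    PACKAGE = "java-1.8.0-openjdk-devel"

    g = guestfs.GuestFS(python_return_dict=True)
    g.add_drive_opts(IMAGE, format="qcow2")  # attach the image read-write
    g.set_network(True)                      # appliance needs network for yum
    g.launch()

    root = g.inspect_os()[0]                 # assume a single root filesystem
    g.mount(root, "/")

    # Same check-then-install pattern the packing scripts use: rpm -q
    # raises RuntimeError when the package is missing (or when it fails
    # for some other reason, which is the puzzle discussed above).
    try:
        print(g.sh("rpm -q %s" % PACKAGE))
    except RuntimeError as err:
        print("rpm -q failed: %s" % err)
        print(g.sh("yum -y install %s 2>&1" % PACKAGE))
        print(g.sh("rpm -q %s" % PACKAGE))   # re-check after installation

    g.sync()
    g.umount_all()
    g.shutdown()
    g.close()
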
14:18:24 tellesnobrega, a little bit.
14:18:53 jeremyfreudberg, does it make sense what I said about adding an instance list?
14:19:06 I wonder how tripleo manages the addition/removal of nodes
14:19:55 tosky, hmm, I could ask around to see how they do that, we can follow the same logic
14:22:02 tellesnobrega, i think it makes sense
14:22:35 btw, tellesnobrega, I might be looking in the wrong place, but for your image packing issue, doesn't stderr not get shown (only stdout)? https://github.com/openstack/sahara/blob/7bf5ed49bb2a16bf36b9fc54fa78bc28f5d85ffb/sahara/cli/image_pack/api.py#L88
14:22:55 i might be in the wrong part of the code there
14:23:45 I have to make sure that the template change won't break the way the cluster stack works now
14:24:58 jeremyfreudberg, that is only for getting the proper output message. It goes into the exception handler there and it should raise the message, but it is empty. But yeah, I can try adding stderr there to see if anything else comes up
14:25:17 when I did it manually the message was the same, RuntimeException sh:
14:27:22 oh well, I will keep digging on those two. tosky thanks for the tripleo idea, I will take a look
14:29:13 #topic Open Discussion
14:30:08 there are a few reviews for the stable branches: one is an old backport for ocata, two are new fixes for tox_install.sh
14:30:21 I guess I asked all my questions during news/updates
14:30:32 tosky, I will take care of them
14:31:09 I might have mentioned this before, but there is SAHARA_AUTO_IP_ALLOCATION_ENABLED in the dashboard config; this exists because of some nova-network detail, *however* it also has an accidental neutron use case
14:31:10 namely,
14:31:29 that if you have use_floating_ips=False it's a very easy way to hide that field from the user in the dashboard
14:31:49 I'm reminded of this because of the recent patch to remove use_neutron
14:32:03 Sorry for being late. I am on a business trip these two weeks
14:32:19 My question is, does it seem like a legitimate use case? And is it ok to keep the misleading name?
14:32:24 hi shuyingya
14:32:50 Hi. Sorry for the interruption
14:33:07 Please go on
14:34:25 thanks, i was basically done with my point. But to put it again succinctly: the SAHARA_AUTO_IP_ALLOCATION_ENABLED dashboard config is for nova-network, but it is actually kind of useful for neutron too, in an accidental way. Is that ok / should we rename the parameter?
14:35:20 jeremyfreudberg, the best way to go would be to make the property name clearer
14:35:51 misleading property names are always a source of trouble
14:36:36 do you have a better name suggestion?
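
Going back to the stderr point above: one way to surface the real error text would be something like the sketch below. run_in_guest is a hypothetical helper, not the actual code at sahara/cli/image_pack/api.py; the idea is only that folding stderr into stdout makes the RuntimeError carry more than the bare "sh:" message.

    import logging

    LOG = logging.getLogger(__name__)

    def run_in_guest(guest, script):
        # guest: an already-launched guestfs.GuestFS handle with the image
        # filesystem mounted; script: the generated shell script to run.
        try:
            # Redirect stderr into stdout so a failing command's message
            # ends up in the RuntimeError raised by libguestfs.
            return guest.sh("(%s) 2>&1" % script)
        except RuntimeError:
            LOG.exception("image packing script failed inside the guest")
            raise
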
14:37:06 at most we can still use SAHARA_AUTO_IP_ALLOCATION_ENABLED, if set, to set the new variable, and then kill it in some future version
14:37:29 tosky, right, I guess it's not such an atomic option as I paint it to be
14:37:31 I think this name doesn't relate to nova or neutron
14:37:47 It is ok to still use it
14:38:10 shuyingya, my problem was that "Auto IP" is a nova-network concept, I think
14:38:13 but I could be wrong
14:38:40 not sure either
14:38:42 this is the official help text for the option, btw, we should at least update that https://github.com/openstack/horizon/blob/9adb63643778a779c571b4898b315b582bf8fba8/openstack_dashboard/local/local_settings.py.example#L791
14:38:51 doc update as well
14:39:54 or actually, I guess it's a bit confusing because "Auto IP" can mean Sahara auto ip (`use_floating_ips`), or nova-net `auto_assign_floating_ip`
14:40:12 it is a bit confusing
14:40:46 I agree with a transition to less confusing names
14:40:57 or we will have the same talk every 6 months :)
14:41:14 true
14:41:24 I am confused too. If it is related to the concept of nova or neutron, renaming it would be helpful
14:41:43 so, I propose to keep this config option alive, and rename it to something like SAHARA_FLOATING_IPS_ENABLED
14:42:15 sounds good
14:42:36 and we will follow tosky's advice about deprecating the old option but not removing it quite yet
14:42:47 yeah
14:42:57 yep
14:43:01 great
14:43:07 thanks jeremyfreudberg
14:43:24 shuyingya, do you have any update?
14:43:29 I would like to give an update on my recent work
14:43:31 Yep
14:45:02 I am on a business trip at headquarters to work on deploying the sahara service in containers
14:45:38 cool
14:45:39 Maybe I can update the sahara charts in the openstack-helm project
14:45:47 how is that going?
14:46:20 First, build the sahara service image with the Lola project
14:47:06 And then use the openstack-helm project to form the helm template for kubernetes helm
14:48:08 I'm not following the container stuff too much; I know that TripleO uses the containers from Kolla, are openstack-helm and Lola different?
14:51:08 shuyingya: maybe you missed my last message: how are openstack-helm and Lola related to Kolla, whose images are used by TripleO?
14:51:11 shuyingya, did you see tosky's question?
14:51:23 I haven't investigated the TripleO project yet. But it seems like the same way to implement it
14:51:26 Yes
14:52:05 I am on the way back to the hotel
14:52:11 shuyingya, cool. It seems like it is about the same thing done in two projects
14:52:39 Sorry. I will update the details next week
14:52:47 no worries, thanks shuyingya
14:52:52 thanks shuyingya
14:53:05 You are welcome
14:54:24 we are 6 minutes away, do we have any other discussion topics for today?
14:54:34 yes
14:54:37 one more (quick?) thing
14:54:45 about apiv2, actually
14:54:46 go ahead
14:54:52 Too many exciting things I want to share with you
14:55:10 We can share them next week
14:55:24 :)
14:55:28 :)
14:55:36 v1 and v1.1 had the project id in the endpoint url in the catalog, but v2 does not
14:55:42 so i don't think we can use the same service type
14:55:55 this is for service discovery, I mean
14:56:18 hm
14:57:11 not sure what the best practice is here
14:57:17 I don't really know, it requires some investigation (if other projects did it)
14:57:24 with a smart enough client anything is possible
14:57:41 I would like to investigate nova's implementation first
14:58:12 shuyingya, do you want to investigate that and update us next week?
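
On the rename agreed earlier in the discussion: the dashboard side could read the new name and fall back to the old one during the deprecation period, roughly as sketched below. SAHARA_FLOATING_IPS_ENABLED is only the name floated above, and the helper itself is hypothetical.

    from django.conf import settings

    def floating_ip_field_enabled():
        # Prefer the clearer new setting; fall back to the old
        # nova-network-era name so existing local_settings.py files
        # keep working until the deprecated alias is removed.
        if hasattr(settings, "SAHARA_FLOATING_IPS_ENABLED"):
            return settings.SAHARA_FLOATING_IPS_ENABLED
        return getattr(settings, "SAHARA_AUTO_IP_ALLOCATION_ENABLED", True)
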
14:58:30 Sure
14:58:33 mistral and cinder both got new workflowv2 and volumev2 service types at some point, we should find out what their motivation for that was
14:58:51 Mistral copied cinder :)
14:59:09 d0ugal, thx
14:59:15 thanks d0ugal, let's find out cinder's motivation
14:59:17 but it was a mistake and shouldn't be copied by anyone else AFAIK
14:59:18 Thanks
14:59:36 even better information d0ugal, thanks
14:59:44 d0ugal, thx, again
14:59:50 d0ugal: so v3 is still volumev2? I should know that, I forgot
14:59:55 we can check keystonev3 if they changed anything
15:00:14 keystone is probably the special case that I wouldn't look at
15:00:18 tosky: I'm not sure - I am not that familiar with cinder. I work on Mistral.
15:00:20 yep
15:00:27 btw, our meeting time is over
15:00:53 it is
15:01:16 thanks all
15:01:33 see you all next week
15:01:46 #endmeeting
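
For reference on the service discovery question raised at the end: the sketch below shows the catalog lookup a client would do with keystoneauth1. It does not settle whether one service type can cover both API versions; it only illustrates the "smart enough client" side of the discussion. The 'data-processing' service type and all credentials are example assumptions.

    from keystoneauth1 import loading, session

    # Example credentials only; any real deployment would differ.
    loader = loading.get_plugin_loader("password")
    auth = loader.load_from_options(
        auth_url="http://keystone.example.com:5000/v3",
        username="demo", password="secret",
        project_name="demo",
        user_domain_id="default", project_domain_id="default")
    sess = session.Session(auth=auth)

    # Look up whatever the catalog advertises for sahara; with v1.1 the
    # returned URL typically ends with the project id, with v2 it would not.
    endpoint = sess.get_endpoint(service_type="data-processing",
                                 interface="public")
    print(endpoint)
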