Kennan | :sdake just got in, sorry for having issues during the weekend | 00:00 |
---|---|---|
Kennan | I will try the new image if possible today | 00:00 |
sdake__ | its all good - don't expect people to work on the weekends :) | 00:02 |
*** mahito has joined #openstack-containers | 00:05 | |
*** jay-lau-513 has quit IRC | 00:14 | |
*** julim has joined #openstack-containers | 00:17 | |
*** achanda has quit IRC | 00:22 | |
hongbin | sdake: I am back. Sorry for missing your message. I saw you have already created the git review :) | 00:27 |
sdake__ | yar | 00:35 |
sdake__ | if it looks good add a +1 and your testing notes :) | 00:36 |
hongbin | K. I am trying to fix the VM to test it. | 00:37 |
hongbin | sdake__: Bad news. Pod creation on the old image seems to work. | 00:39 |
sdake__ | of course it does, you have the docker images on the hard disk | 00:39 |
sdake__ | it doesn't need to fetch them | 00:40 |
hongbin | So the problem is possibly from the new image. | 00:40 |
sdake__ | but you said you couldn't ping 8.8.8.8? | 00:40 |
hongbin | The old image has the docker images? | 00:40 |
hongbin | That is because that region filters ping | 00:40 |
hongbin | I can curl google.ca | 00:40 |
sdake__ | can you curl google.ca in the new image? | 00:40 |
hongbin | Sorry, I just remembered that | 00:41 |
sdake__ | its ok | 00:41 |
sdake__ | I should have mentioned to try to curl | 00:41 |
hongbin | old | 00:41 |
sdake__ | can you try curl in new image | 00:41 |
sdake__ | to see if there is a network problem or something else going on | 00:41 |
hongbin | I can test the new one again | 00:41 |
sdake__ | wife bought me a xikar torch lighter - so happy :) | 00:44 |
sdake__ | no more matches to light cigars | 00:44 |
*** julim has quit IRC | 00:53 | |
*** achanda has joined #openstack-containers | 00:53 | |
hongbin | sdake__: Good for you. | 00:55 |
hongbin | And on the new image, I can curl google.ca as well | 00:55 |
sdake__ | cool | 00:55 |
sdake__ | ssh to the node | 00:55 |
sdake__ | minion node | 00:55 |
hongbin | and sudo docker images still shows the empty list | 00:56 |
hongbin | in | 00:56 |
sdake__ | sudo docker pull kollaglue/centos-rdo-mariadb | 00:56 |
hongbin | it seems to work. Downloading | 00:57 |
sdake__ | so docker is working - that is a plus | 00:57 |
sdake__ | wait for it to finish | 00:57 |
hongbin | k | 00:58 |
*** sdake has joined #openstack-containers | 01:00 | |
sdake | open another ssh to the machine | 01:00 |
sdake | sudo df | fpaste | 01:00 |
*** dims has joined #openstack-containers | 01:01 | |
*** dims has quit IRC | 01:01 | |
hongbin | here you go: http://paste.openstack.org/show/195070/ | 01:02 |
sdake | journalctl -xl -u kubelet | 01:03 |
*** sdake__ has quit IRC | 01:04 | |
sdake | | fpaste | 01:04 |
hongbin | it is empty | 01:05 |
sdake | try adding .serviced | 01:05 |
sdake | rather .service | 01:05 |
hongbin | I know why. sudo :) | 01:05 |
sdake | did that docker download finish? | 01:06 |
sdake | pull rather | 01:06 |
hongbin | The kubelet's log: http://paste.openstack.org/show/195073/ | 01:08 |
hongbin | Yes, the docker download finished | 01:09 |
sdake | Mar 23 00:52:41 te-ijfn5ixpajfm-1-rmsje2csxwze-kube-node-apk4zatqugyr.novalocal kubelet[1233]: unknown flag: --api_server | 01:09 |
hongbin | .... | 01:10 |
sdake | its now called --api-servers | 01:12 |
hongbin | yes, saw that | 01:13 |
sdake | and --etcd-servers | 01:13 |
sdake | looks like fedora is busted out of the box | 01:13 |
sdake | yay | 01:13 |
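The rename sdake identifies here (`--api_server` to `--api-servers`, and `--etcd_servers` to `--etcd-servers` in kubernetes 0.11) can be detected mechanically before kubelet ever fails to start. A minimal sketch; the file path and sample contents below are illustrative, not the exact Atomic image config:

```shell
# Scan a kubelet sysconfig file for the pre-0.11 flag spellings
# that kubelet 0.11 rejects as "unknown flag".
check_deprecated_flags() {
    local cfg="$1" found=0
    for flag in '--api_server=' '--etcd_servers='; do
        if grep -q -- "$flag" "$cfg"; then
            echo "deprecated: ${flag%=}"
            found=1
        fi
    done
    return "$found"
}

# Demo against a sample resembling the broken config from the paste
cat > /tmp/kubelet.sample <<'EOF'
KUBELET_ARGS="--api_server=http://127.0.0.1:8080"
EOF
check_deprecated_flags /tmp/kubelet.sample || true
```

A nonzero return from the function means the image ships the old spellings and needs the rewrite discussed below in the pull request.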
sdake | kill your bay, start a new one | 01:14 |
sdake | lets try to manually get kubectl running | 01:14 |
hongbin | how? | 01:15 |
sdake | we will stop kubectl via systemctl | 01:15 |
sdake | and beat on the config files until they work | 01:15 |
sdake | that way we can file a fedora bug | 01:15 |
*** dims__ has joined #openstack-containers | 01:15 | |
hongbin | K | 01:16 |
*** suro-patz has joined #openstack-containers | 01:18 | |
hongbin | I changed the api-servers | 01:20 |
hongbin | Need to change this as well? $ grep -RI "etcd.servers" /etc/kubernetes/ /etc/kubernetes/config:KUBE_ETCD_SERVERS="--etcd_servers=http://10.0.0.2:4001" | 01:21 |
hongbin | Sorry I type that line by line | 01:21 |
hongbin | $ grep -RI "etcd.servers" /etc/kubernetes/ | 01:21 |
hongbin | /etc/kubernetes/config:KUBE_ETCD_SERVERS="--etcd_servers=http://10.0.0.2:4001" | 01:21 |
sdake | looks like our template handles that already | 01:25 |
sdake | try running systemctl start kubectl | 01:26 |
hongbin | Yes, the template writes the config file | 01:26 |
sdake | and paste the logs | 01:26 |
hongbin | Failed to list *api.Pod: Get http://127.0.0.1:8080/api/v1beta1/pods?fields=Status.Host%3D10.0.0.4&namespace=: dial tcp 127.0.0.1:8080: connection refused | 01:27 |
hongbin | many of these error | 01:27 |
sdake | what is the api server set to? | 01:27 |
hongbin | Will paste the full logs | 01:27 |
sdake | paste the config | 01:27 |
hongbin | set to localhost.... | 01:29 |
sdake | which config file | 01:29 |
sdake | and which item? | 01:29 |
hongbin | http://paste.openstack.org/show/195081/ | 01:29 |
sdake | did you put in 127.0.0.1? | 01:30 |
hongbin | No | 01:30 |
hongbin | I can change it | 01:30 |
sdake | yes to the master ip please | 01:31 |
sdake | we need to send a pull request to larsks repo to set that config option | 01:33 |
sdake | it is new in k8s 11 | 01:33 |
hongbin | This is the tail of kubelet log: http://paste.openstack.org/show/195082/ | 01:33 |
*** vilobhmm has quit IRC | 01:34 | |
sdake | did you reset-failed | 01:34 |
sdake | what state is the service in as shown by systemctl? | 01:34 |
hongbin | .... will do that. I used restart | 01:34 |
hongbin | running | 01:34 |
sdake | restart wont restart a failed service | 01:34 |
hongbin | did a reset | 01:36 |
hongbin | The status http://paste.openstack.org/show/195083/ | 01:36 |
*** suro-patz has quit IRC | 01:37 | |
hongbin | I am not sure it is good or bad | 01:38 |
sdake | journalctl -xl ? | 01:39 |
sdake | status doesn't provide the right info | 01:39 |
hongbin | just let me know if you need the full log | 01:39 |
hongbin | k | 01:39 |
hongbin | full log: http://paste.openstack.org/ | 01:40 |
hongbin | sorry | 01:40 |
sdake | i think that isn't the right link | 01:40 |
hongbin | http://paste.openstack.org/show/195084/ | 01:41 |
sdake | systemctl stop kubelet | 01:42 |
sdake | systemctl | grep kubelet | 01:42 |
sdake | (sudo) | 01:43 |
hongbin | wait | 01:43 |
hongbin | The log didn't seem right | 01:43 |
hongbin | let me give you the right log | 01:43 |
hongbin | It looks like they cut the tail portion of my log | 01:44 |
sdake | yar | 01:44 |
sdake | its 127.0.0.1 in the logs | 01:45 |
sdake | would like to see what it should really be | 01:45 |
hongbin | http://paste.openstack.org/show/195090/ | 01:45 |
hongbin | This is the tail portion | 01:46 |
sdake | try launching a pod | 01:50 |
hongbin | k | 01:50 |
sdake | the google cat said that last warning is no big deal if it restarts the watch | 01:51 |
sdake | whatever that means :) | 01:51 |
sdake | join #google-containers | 01:51 |
Kennan | :sdake, I tested with the new image, from console-log I find | 01:52 |
Kennan | 348.523672] cloud-init[845]: Failed to start docker.socket: Unit docker.socket failed to load: No such file or directory. [ 348.588835] cloud-init[845]: activating service docker [ 354.282315] cloud-init[845]: activating service kubelet [ 354.663861] cloud-init[845]: Created symlink from /etc/systemd/system/multi-user.target.wants/kubelet.service to /usr/lib/systemd/system/kubelet.service. [ 359.857553] cloud-init[ | 01:52 |
Kennan | [ 371.937272] cloud-init[845]: 2015-03-23 01:41:46,262 - util.py[WARNING]: Failed running /var/lib/cloud/instance/scripts/part-010 [ 372.082919] cloud-init[845]: 2015-03-23 01:41:46,356 - cc_scripts_user.py[WARNING]: Failed to run module scripts-user (scripts iscripts) [ 372.109901] cloud-init[845]: 2015-03-23 01:41:46,384 - util.py[WARNING]: Running scripts-user (<module 'cloudinit.config.cc_scriython2.7/site-package | 01:52 |
*** unicell has quit IRC | 01:53 | |
sdake | hongbin ssh into vm sudo docker images | 01:54 |
sdake | if its working it should be displaying an extra image | 01:54 |
hongbin | yes, the list is not empty now | 01:54 |
sdake | victory | 01:54 |
sdake | so now for the PR to heat-kubernetes | 01:54 |
hongbin | :) | 01:54 |
sdake | hongbin can you work out a pull request | 01:55 |
sdake | i'll merge it | 01:55 |
sdake | while your vm is getting the container rolling | 01:55 |
hongbin | sdake: Yes, I can. | 01:56 |
hongbin | I am thinking of one issue | 01:56 |
sdake | i would, but I'm not sure what you changed ;) | 01:56 |
hongbin | The fix possibly won't be backward-compatible? | 01:56 |
hongbin | change from api-server to api-servers | 01:56 |
sdake | nope, and nobody in their right mind should deploy k8s 0.6.0 | 01:56 |
hongbin | k | 01:57 |
hongbin | Then, let me make a pull request | 01:57 |
sdake | although i expect fedora will fix the api-server and change it to servers | 01:57 |
sdake | so try to make the pull request handle both cases correctly | 01:57 |
*** achanda has quit IRC | 01:57 | |
hongbin | k. Will try that | 01:57 |
sdake | if api-server is found without the 's', change it to api-servers | 01:58 |
sdake | that way we can make a new image and it should just work ;) | 01:58 |
sdake | when fedora fixes that bug | 01:58 |
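The "handle both cases" idea can be done with substitutions that become no-ops once the config is already correct, so the same fragment works today and after Fedora fixes the package. A hedged sketch; the real change belongs in the heat-kubernetes user-data fragments, and the path and contents here are made up:

```shell
# Rewrite the old kubelet flag spellings to the 0.11 ones.
fix_kubelet_flags() {
    local cfg="$1"
    # "--api-server=" does not match "--api-servers=", so running this on an
    # already-fixed (or future, repaired) image changes nothing.
    sed -i -e 's/--api_server=/--api-servers=/g' \
           -e 's/--api-server=/--api-servers=/g' \
           -e 's/--etcd_servers=/--etcd-servers=/g' "$cfg"
}

# Demo config resembling the paste above (values illustrative)
cat > /tmp/kubelet.cfg <<'EOF'
KUBELET_ARGS="--api_server=http://10.0.0.2:8080"
KUBE_ETCD_SERVERS="--etcd_servers=http://10.0.0.2:4001"
EOF
fix_kubelet_flags /tmp/kubelet.cfg
fix_kubelet_flags /tmp/kubelet.cfg   # second run: idempotent, no further change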
Kennan | :hongbin, when you are available, pls ping me if possible. Not sure what you changed, I hit some issues when testing with the new image | 02:00 |
hongbin | Kennan: Try this fix https://review.openstack.org/#/c/166661/1 | 02:01 |
sdake | hongbin https://bugzilla.redhat.com/show_bug.cgi?id=1200924 | 02:01 |
openstack | bugzilla.redhat.com bug 1200924 in kubernetes "typo in /etc/kubernetes/kubelet" [Unspecified,On_qa] - Assigned to jchaloup | 02:01 |
hongbin | sdake: ...... | 02:02 |
hongbin | if it is a bug, then I don't need to worry | 02:02 |
sdake | yes but I want it to work now not in 3 weeks when there is a new image :) | 02:03 |
sdake | and our template doesn't set the variable *at all* atm | 02:03 |
sdake | that is why its 127.0.0.1 | 02:03 |
hongbin | sure. | 02:04 |
sdake | on-qa takes several weeks to hit the repo | 02:04 |
sdake | maybe more then a month | 02:04 |
sdake | and then we will have a new version of kubernetes to worry about :( | 02:04 |
hongbin | yup | 02:05 |
sdake | i'd like to keep our image churn down as much as possible | 02:05 |
sdake | so we are testing on older versions of k8s | 02:06 |
sdake | until we throw in a new one | 02:06 |
sdake | because it has taken 16 hours of engineering work to fix this problem | 02:06 |
hongbin | sure | 02:06 |
hongbin | yes, their release model is slow | 02:06 |
sdake | they want to do 2 week releases but they have no gating | 02:07 |
sdake | all the testing is manual | 02:07 |
sdake | they could have per-commit releases if they had gating ;) | 02:07 |
hongbin | you know something inside redhat :) | 02:07 |
sdake | I worked there for 9 years | 02:08 |
hongbin | :) | 02:08 |
sdake | the 2 weeks thing is on the public ml | 02:08 |
sdake | did that pod start up btw | 02:09 |
hongbin | Yes, Running now | 02:12 |
*** erkules_ has joined #openstack-containers | 02:16 | |
sdake | cool after you finish that pull request i'll merge it into my patch stream | 02:16 |
*** sdake__ has joined #openstack-containers | 02:18 | |
*** erkules has quit IRC | 02:18 | |
*** sdake has quit IRC | 02:22 | |
hongbin | sdake: check here https://github.com/hongbin/heat-kubernetes/commit/9895fca3f45e67871c1c838ad3c15a45d5bc17a5 | 02:24 |
hongbin | I need to test it first | 02:25 |
hongbin | Let me test it against the new and old images. Then I will send the PR | 02:25 |
dims__ | hongbin: shouldn't the :8080 be inside the last single quote? | 02:27 |
hongbin | dims__: let's check | 02:30 |
dims__ | i am probably wrong :) | 02:30 |
hongbin | NP. I will verify that | 02:31 |
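For what it's worth, as long as the pieces are adjacent, where the `:8080` sits relative to the closing quote makes no difference to the shell: adjacent quoted strings concatenate into one word. A quick demonstration (the IP is illustrative):

```shell
MASTER_IP=10.0.0.2
inside="--api-servers=http://${MASTER_IP}:8080"       # :8080 inside the quotes
outside="--api-servers=http://${MASTER_IP}"':8080'    # :8080 outside them
[ "$inside" = "$outside" ] && echo "identical"        # prints "identical"
```

So dims__'s concern would only matter if whitespace crept in between the quoted parts.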
Kennan | :hongbin, I still hit the issue | 02:31 |
Kennan | [ 393.804859] cloud-init[842]: 2015-03-23 02:30:55,131 - util.py[WARNING]: Failed running /var/lib/cloud/instance/scripts/part-009 [7] [ 395.487065] cloud-init[842]: Created symlink from /etc/systemd/system/multi-user.target.wants/flannel-config.service to /etc/systemd/system/flannel-config.service. [ 413.661686] cloud-init[842]: 2015-03-23 02:31:14,993 - cc_scripts_user.py[WARNING]: Failed to run module scripts-user | 02:31 |
hongbin | Kennan: will get back to you. Sorry :) | 02:32 |
sdake__ | kennan i'm all yours | 02:32 |
sdake__ | the issue you are having is what (rather than a log, describe the problem) | 02:33 |
sdake__ | looks like part-009 failed | 02:33 |
sdake__ | could you paste that file | 02:34 |
sdake__ | /var/lib/cloud/instances/scripts/part-009 | 02:34 |
Kennan | jas | 02:34 |
Kennan | hi sdake__ | 02:36 |
Kennan | :sdake__ | 02:36 |
Kennan | [fedora@kqcontainer magnum]$ ls -l /var/lib/cloud/instance/scripts/ total 0 | 02:36 |
Kennan | it is empty | 02:36 |
sdake__ | sudo | 02:37 |
Kennan | yes | 02:37 |
Kennan | still empty | 02:37 |
sdake__ | still empty? | 02:37 |
Kennan | in the controller | 02:37 |
sdake__ | you got that error on the master? | 02:37 |
sdake__ | sudo -i | 02:37 |
sdake__ | cd /var/lib/cloud | 02:37 |
sdake__ | run ls | 02:38 |
sdake__ | ls -l that is | fpaste | 02:38 |
sdake__ | kennan I have to say your host name doesn't make any sense | 02:39 |
Kennan | I just ran that command on the devstack machine, not the nova master instance | 02:39 |
sdake__ | are you running that ls command from the vm? | 02:39 |
sdake__ | oh login to the nova master :) | 02:39 |
sdake__ | k8s master i mean | 02:39 |
sdake__ | you should be able to ssh into it using the ssh key you registered with baymodel-create | 02:40 |
Kennan | -bash-4.3# ls -l total 0 drwxr-xr-x. 2 root root 6 Mar 20 14:52 per-boot drwxr-xr-x. 2 root root 6 Mar 20 14:52 per-instance drwxr-xr-x. 2 root root 6 Mar 20 14:52 per-once drwxr-xr-x. 2 root root 6 Mar 20 14:52 vendor | 02:40 |
Kennan | -bash-4.3# pwd /var/lib/cloud/scripts | 02:40 |
sdake__ | I think we have some lag :) | 02:40 |
sdake__ | ssh into the node where you saw that flannel error | 02:41 |
sdake__ | need to know what is in part-009 | 02:41 |
Kennan | yes, I sshed into that instance | 02:41 |
sdake__ | is the part file there now? | 02:41 |
sdake__ | does anyone know how to redirect stdout to paste.openstack.org? | 02:42 |
Kennan | https://gist.github.com/HackToday/41dc302924032870dc1f | 02:43 |
sdake__ | paste /etc/sysconfig/heat-params | 02:44 |
Kennan | https://gist.github.com/HackToday/41dc302924032870dc1f | 02:45 |
sdake__ | sudo run the file | 02:46 |
sdake__ | echo $? | 02:46 |
sdake__ | and paste output | 02:46 |
Kennan | done, same link as above | 02:47 |
sdake__ | http://curl.haxx.se/libcurl/c/libcurl-errors.html | 02:49 |
sdake__ | exit code 7 = couldn't connect | 02:49 |
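At its core, the failing part-009 fragment boils down to a curl notification to the Heat wait-condition URL, so the script's exit status is a libcurl exit code (7 = couldn't connect, 6 = couldn't resolve host). A hedged sketch of that failure path: the payload shape and URL are illustrative, and the demo deliberately targets a closed local port to show the exit-7 case from the log:

```shell
# Notify a Heat wait condition and surface curl's exit code on failure.
notify_wait_condition() {
    curl -sf -X PUT -H 'Content-Type: application/json' \
         --data-binary '{"Status": "SUCCESS"}' "$1"
    local rc=$?
    if [ "$rc" -ne 0 ]; then
        echo "wait-condition notify failed: curl exit code $rc" >&2
    fi
    return "$rc"
}

# Demo: nothing listens on port 1, so this shows the "couldn't connect" path
notify_wait_condition http://127.0.0.1:1/ || echo "curl exit code: $?"
```

In Kennan's case the target is 9.5.124.71, so exit 7 means the instance cannot reach the heat-api-cfn port on that host.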
sdake__ | does that ip # look familiar to you? | 02:49 |
sdake__ | what host os are you running | 02:50 |
sdake__ | is 9.5.124.71 your heat server's ip address? | 02:51 |
Kennan | yes, I used public ip | 02:51 |
Kennan | endpoints are all registered with that public ip | 02:52 |
sdake__ | from inside the vm | 02:52 |
sdake__ | use curl to retrieve www.google.com | 02:52 |
sdake__ | does that work? | 02:53 |
sdake__ | you either have an iptables problem or a networking problem | 02:53 |
*** dboik has joined #openstack-containers | 02:53 | |
sdake__ | port 8000 must be open on 9.5.124.71 | 02:53 |
Kennan | my instance can connect to the external network. | 02:54 |
sdake__ | can your instance connect to 9.5.124.71? | 02:55 |
sdake__ | from inside vm, try ssh 9.5.124.71 and see what happens | 02:55 |
*** yuanying has quit IRC | 02:56 | |
Kennan | sorry, when you talked about instances, do you mean kube instance or my devstack Vm instance ? | 02:57 |
Kennan | I installed devstack with vm | 02:57 |
sdake__ | the kube instance, the place where you read that file from | 02:57 |
*** achanda has joined #openstack-containers | 02:58 | |
Kennan | the kube instance can ping 9.5.124.71, but it cannot access other internet addresses. I remember testing before that the kube instance did not need to access internet addresses | 02:58 |
sdake__ | hongbin your patch looks good | 03:00 |
sdake__ | kennan scp a telnet binary to the kube instance | 03:01 |
hongbin | thx | 03:01 |
*** dims__ has quit IRC | 03:01 | |
sdake__ | then from the kube instance telnet 9.5.124.71:8080 | 03:01 |
sdake__ | and also the kube instances *do* need to access the external internet | 03:02 |
sdake__ | to download docker images | 03:02 |
Kennan | ? so I think I am not in sync on this. is this a new change? | 03:03 |
*** achanda has quit IRC | 03:03 | |
Kennan | I remembered it did not need internet access before | 03:03 |
sdake__ | its always been so | 03:03 |
sdake__ | did you launch a pod? | 03:03 |
Kennan | if what you said is true, my env needs a rebuild, as the kubernetes instance cannot directly access internet addresses | 03:03 |
sdake__ | you usually can use masquerading via iptables to deal with that (hack around it) | 03:04 |
sdake__ | lets focus on why you can't access the heat server | 03:04 |
Kennan | seems the k8s instance does not have telnet installed | 03:05 |
Kennan | can not test telnet | 03:05 |
sdake__ | yes you will need to scp from your host | 03:05 |
sdake__ | (these are all my hacks for getting around the pain of a small image :) | 03:05 |
sdake__ | what should really be done is a toolbox container should be built and bind-mounted to a host dir like /usr/local/bin ;) | 03:06 |
hongbin | sdake__: This might be irrelevant. I saw there is still one failure | 03:07 |
hongbin | $ sudo systemctl | grep failed ● cloud-final.service loaded failed failed Execute cloud user/final scripts | 03:07 |
sdake__ | hongbin I'm curious if that is the same problem I am chasing with kennan | 03:07 |
sdake__ | can you run journalctl hongbin? | 03:07 |
hongbin | k | 03:07 |
*** sdake has joined #openstack-containers | 03:09 | |
*** yuanying has joined #openstack-containers | 03:09 | |
hongbin | here you go: http://paste.openstack.org/show/195123/ | 03:09 |
sdake | hongbin show me your part-005 file | 03:11 |
sdake | it is backtracing | 03:11 |
hongbin | ?? | 03:11 |
hongbin | never mind. get that | 03:12 |
sdake | ProcessExecutionError: Unexpected error while running command. | 03:12 |
sdake | Command: ['/var/lib/cloud/instance/scripts/part-005'] | 03:12 |
sdake | Exit code: 6 | 03:12 |
sdake | 03:12 | |
sdake | could we make it harder please :) | 03:12 |
*** sdake__ has quit IRC | 03:12 | |
hongbin | http://paste.openstack.org/show/195124/ | 03:12 |
sdake | run that script | 03:13 |
hongbin | http://paste.openstack.org/show/195128/ | 03:14 |
hongbin | ............ | 03:14 |
hongbin | never mind. I run it again | 03:15 |
sdake | run with bash ;) | 03:15 |
hongbin | http://paste.openstack.org/show/195129/ | 03:15 |
hongbin | oh. bash. I run with sh | 03:15 |
sdake | cat /etc/group | 03:16 |
hongbin | http://paste.openstack.org/show/195130/ | 03:16 |
sdake | change docker in that script to dockrroot | 03:17 |
sdake | pull request - win :) | 03:17 |
sdake | dockerroot | 03:17 |
sdake | can atomic change more please - need to spin more cycles :) | 03:17 |
hongbin | so, in the heat template, overwrite that file, or ... | 03:20 |
sdake | sec | 03:21 |
hongbin | oh, never mind. it is user-data. Just change it. Get that | 03:22 |
sdake | ya its in fragments | 03:22 |
sdake | do em as separate prs pls | 03:22 |
sdake | since they are separate bugs | 03:22 |
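The dockerroot change itself is a one-liner over the user-data fragment. A hedged sketch: the sample line below is invented, since the log never shows which command inside part-005 referenced the group, only that the Atomic image ships a "dockerroot" group where the script expected "docker":

```shell
# Sample standing in for the failing fragment (real part-005 not shown in log)
cat > /tmp/part-005.sample <<'EOF'
chown root:docker /var/run/docker.sock
EOF

# \b keeps the sed from re-matching an already-fixed "dockerroot",
# so this is safe to run more than once.
sed -i 's/root:docker\b/root:dockerroot/' /tmp/part-005.sample
```

As with the flag rename, making the edit idempotent means the same fragment keeps working if a future image restores the plain "docker" group.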
hongbin | sure | 03:22 |
*** coolsvap|afk is now known as coolsvap | 03:24 | |
Kennan | :sdake seems 8080 is not open, even on the devstack vm itself. is it 8080 or 8000? | 03:27 |
sdake | 8080 is heat-api-cfn | 03:28 |
sdake | my guess is that process is not running on your devstack | 03:28 |
sdake | heat-api-cfn should be running in your devstack environment, or waitconditions won't operate | 03:29 |
sdake | don't ask me why :) | 03:29 |
sdake | bad design choice long ago | 03:29 |
sdake | now stuck with it | 03:29 |
Kennan | INFO heat.api.cfn [-] Starting Heat API on 0.0.0.0:8000 | 03:31 |
Kennan | I restarted again, it seems 8000 | 03:31 |
Kennan | ? | 03:31 |
sdake | what port does heat-api run on? | 03:31 |
sdake | grep 8080 /etc/heat/* | 03:31 |
Kennan | 8004 | 03:31 |
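To sort out which port is which without telnet in the image, bash's `/dev/tcp` redirection can stand in for a port scanner. A sketch assuming a default devstack layout (heat-api on 8004, heat-api-cfn on 8000); the host and port list are illustrative:

```shell
# bash-only: opening /dev/tcp/<host>/<port> attempts a TCP connect
probe() {
    if (exec 3<>"/dev/tcp/$1/$2") 2>/dev/null; then
        echo "$2 open"
    else
        echo "$2 closed"
    fi
}

for port in 8000 8004 8080; do
    probe 127.0.0.1 "$port"
done
```

Run from inside the kube instance against the devstack host, this answers the 8000-vs-8080 question directly without installing anything.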
sdake | how do you spell "Pick one" in chinese | 03:33 |
sdake | is it goey gobah? | 03:33 |
sdake | watching the grandmaster | 03:33 |
sdake | great movie | 03:33 |
sdake | its a really bad sign when your github streak is 7 days | 03:36 |
*** kebray has joined #openstack-containers | 03:42 | |
*** kebray has joined #openstack-containers | 03:43 | |
hongbin | sdake: Just sent two pull requests. | 03:48 |
sdake | hongbin tested working in a fresh environment? | 03:48 |
hongbin | I tested them on the new image. They work well | 03:48 |
sdake | no more systemctl errors? | 03:49 |
hongbin | I brought up a new cluster | 03:49 |
hongbin | no more systemctl errors | 03:49 |
hongbin | one question is whether it is important to test on the old image? | 03:49 |
sdake | I think we can ask people to suck it up | 03:50 |
sdake | and move to a new image | 03:50 |
hongbin | k | 03:50 |
hongbin | OK. All done | 03:50 |
sdake | if you got time to burn feel free to try an old image | 03:51 |
sdake | it would be nice to say it works there too but I doubt it would because of the sed on the nonexistent line | 03:51 |
hongbin | k | 03:52 |
hongbin | Then, I won't mind the old image | 03:52 |
hongbin | I go to sleep soon. Just leave me a message if there is anything I need to follow up on | 03:53 |
sdake | can you wait a moment until I get these pull requests rebased so you can +2 the patch set | 03:54 |
hongbin | sure | 03:54 |
hongbin | brb | 03:54 |
*** kebray has quit IRC | 04:01 | |
*** kebray has joined #openstack-containers | 04:02 | |
openstackgerrit | Steven Dake proposed stackforge/magnum: Modify documentation to point to kubernetes-0.11 atomic image https://review.openstack.org/166662 | 04:05 |
openstackgerrit | Steven Dake proposed stackforge/magnum: Merge heat-kubernetes pull request 15 https://review.openstack.org/166687 | 04:05 |
openstackgerrit | Steven Dake proposed stackforge/magnum: Merge heat-kubernetes pull request 16 https://review.openstack.org/166688 | 04:05 |
sdake | hongbin all yours to doublecheck my work :) | 04:05 |
*** vilobhmm has joined #openstack-containers | 04:06 | |
*** dims__ has joined #openstack-containers | 04:06 | |
hongbin | .... | 04:07 |
sdake | is there a problem? | 04:08 |
sdake | just check the files | 04:08 |
*** achanda has joined #openstack-containers | 04:08 | |
sdake | hitting exhaustion levels atm :( | 04:09 |
hongbin | :) | 04:09 |
hongbin | I haven't verified this one https://review.openstack.org/#/c/166662/2 | 04:09 |
hongbin | you still want me to +2 on it? | 04:10 |
sdake | no | 04:10 |
hongbin | yup :) | 04:10 |
hongbin | Then, all good | 04:10 |
hongbin | K. Then see you folks. | 04:11 |
sdake | hongbin enjoy | 04:12 |
sdake | thanks for the monster debug session - it was good :) | 04:12 |
*** hongbin has quit IRC | 04:12 | |
*** dboik has quit IRC | 04:21 | |
*** dboik has joined #openstack-containers | 04:21 | |
*** adrian_otto has joined #openstack-containers | 04:29 | |
*** adrian_otto has quit IRC | 04:33 | |
*** achanda has quit IRC | 04:34 | |
*** adrian_otto has joined #openstack-containers | 04:34 | |
*** dims__ has quit IRC | 04:36 | |
*** sdake__ has joined #openstack-containers | 04:40 | |
*** adrian_otto has quit IRC | 04:41 | |
openstackgerrit | Madhuri Kumari proposed stackforge/magnum: Handle heat exception in create_stack. https://review.openstack.org/166694 | 04:41 |
*** sdake has quit IRC | 04:44 | |
*** adrian_otto has joined #openstack-containers | 04:51 | |
*** sdake has joined #openstack-containers | 04:51 | |
*** sdake__ has quit IRC | 04:55 | |
*** Marga_ has quit IRC | 05:01 | |
*** achanda has joined #openstack-containers | 05:08 | |
*** sdake__ has joined #openstack-containers | 05:18 | |
*** Marga_ has joined #openstack-containers | 05:21 | |
*** adrian_otto has quit IRC | 05:22 | |
*** sdake has quit IRC | 05:22 | |
*** achanda has quit IRC | 05:24 | |
*** achanda has joined #openstack-containers | 05:26 | |
*** suro-patz has joined #openstack-containers | 05:32 | |
*** sdake__ has quit IRC | 05:33 | |
*** sdake has joined #openstack-containers | 05:34 | |
*** vilobhmm has quit IRC | 05:50 | |
openstackgerrit | Steven Dake proposed stackforge/magnum: Modify documentation to point to kubernetes-0.11 atomic image https://review.openstack.org/166662 | 05:50 |
sdake | time to PTFO | 05:51 |
sdake | night all | 05:51 |
sdake | thanks yuanying for testing that patch :) | 05:51 |
*** sdake has quit IRC | 05:51 | |
openstackgerrit | Digambar proposed stackforge/magnum: Add cluster_type field in baymodel. https://review.openstack.org/165346 | 06:15 |
*** kebray has quit IRC | 06:15 | |
*** Marga_ has quit IRC | 06:21 | |
*** dims__ has joined #openstack-containers | 06:22 | |
*** diga has joined #openstack-containers | 06:24 | |
openstackgerrit | Digambar proposed stackforge/magnum: Add cluster_type field in baymodel. https://review.openstack.org/165346 | 06:32 |
*** oro has joined #openstack-containers | 06:36 | |
*** suro-patz has quit IRC | 06:48 | |
*** dims__ has quit IRC | 06:54 | |
*** Marga_ has joined #openstack-containers | 06:57 | |
*** nshaikh has joined #openstack-containers | 06:59 | |
diga | yuanying: Hi | 07:02 |
diga | again tests are failing - https://review.openstack.org/#/c/165346/ | 07:02 |
yuanying | hi | 07:02 |
yuanying | you should add a test for the `token` field if cluster_type is coreos. | 07:02 |
diga | okay | 07:03 |
yuanying | https://review.openstack.org/#/c/165346/4/magnum/tests/conductor/handlers/test_bay_k8s_heat.py | 07:06 |
yuanying | please remove cluster_type from baymodel_dict in setup method. | 07:06 |
yuanying | and add a test method which checks the `token` field and changes cluster_type to `coreos` in baymodel_dict. | 07:07 |
yuanying | maybe the test method will be `test_extract_bay_definition_with_cluster_type_coreos` | 07:08 |
yuanying | and asserts that the `token` field is contained in bay_definition. | 07:09 |
openstackgerrit | Digambar proposed stackforge/magnum: Add cluster_type field in baymodel. https://review.openstack.org/165346 | 07:13 |
diga | okay | 07:14 |
diga | let me correct it | 07:14 |
*** Marga_ has quit IRC | 07:29 | |
*** achanda has quit IRC | 07:29 | |
openstackgerrit | Motohiro/Yuanying Otsuka proposed stackforge/magnum: Add cluster_type field in baymodel. https://review.openstack.org/165346 | 07:38 |
yuanying | Hi diga | 07:39 |
yuanying | I have fixed it. | 07:39 |
yuanying | Please check | 07:40 |
diga | okay | 07:43 |
diga | yuanying: Thanks :) | 07:44 |
openstackgerrit | Madhuri Kumari proposed stackforge/magnum: Adding support of python-k8client. https://review.openstack.org/166720 | 07:50 |
*** junhongl has quit IRC | 08:25 | |
*** mahito_ has joined #openstack-containers | 08:36 | |
*** dims__ has joined #openstack-containers | 08:39 | |
*** mahito has quit IRC | 08:39 | |
diga | tcammann: Hi | 08:45 |
diga | I have filed TechDebt - https://bugs.launchpad.net/magnum/+bug/1435200 | 08:45 |
openstack | Launchpad bug 1435200 in Magnum "Tech Debt: Add enum type for cluster_type field in baymodel api." [Wishlist,Confirmed] - Assigned to Digambar (digambarpatil15) | 08:45 |
diga | will do a separate patch for that because it's an improvement in the api. | 08:46 |
*** yuanying has quit IRC | 08:54 | |
*** Tango has joined #openstack-containers | 08:57 | |
*** dims__ has quit IRC | 09:12 | |
*** erkules_ is now known as erkules | 09:14 | |
*** erkules has quit IRC | 09:14 | |
*** erkules has joined #openstack-containers | 09:14 | |
*** oro has quit IRC | 09:25 | |
*** Tango has quit IRC | 09:27 | |
*** dims__ has joined #openstack-containers | 09:51 | |
*** junhongl has joined #openstack-containers | 10:31 | |
*** junhongl has quit IRC | 10:35 | |
*** mahito_ has quit IRC | 10:38 | |
*** oro has joined #openstack-containers | 10:42 | |
*** jay-lau-513 has joined #openstack-containers | 10:48 | |
*** junhongl has joined #openstack-containers | 10:51 | |
*** junhongl has quit IRC | 10:56 | |
*** coolsvap is now known as coolsvap|afk | 11:11 | |
*** zul has joined #openstack-containers | 11:32 | |
*** nshaikh has quit IRC | 11:39 | |
*** EricGonczer_ has joined #openstack-containers | 11:54 | |
*** EricGonczer_ has quit IRC | 12:02 | |
*** EricGonczer_ has joined #openstack-containers | 12:22 | |
*** EricGonczer_ has quit IRC | 12:24 | |
*** dims__ has quit IRC | 12:34 | |
*** dims__ has joined #openstack-containers | 12:34 | |
*** dims__ is now known as dims | 12:52 | |
*** dboik has quit IRC | 12:58 | |
*** Marga_ has joined #openstack-containers | 12:59 | |
*** thomasem has joined #openstack-containers | 13:04 | |
*** thomasem has quit IRC | 13:12 | |
*** thomasem has joined #openstack-containers | 13:16 | |
*** dboik has joined #openstack-containers | 13:18 | |
*** coolsvap|afk is now known as coolsvap | 13:25 | |
*** kebray has joined #openstack-containers | 13:28 | |
*** sdake has joined #openstack-containers | 13:57 | |
*** sdake__ has joined #openstack-containers | 13:59 | |
*** sdake has quit IRC | 14:01 | |
*** EricGonczer_ has joined #openstack-containers | 14:04 | |
*** prad has joined #openstack-containers | 14:04 | |
*** coolsvap is now known as coolsvap|afk | 14:05 | |
*** Marga_ has quit IRC | 14:05 | |
*** julim has joined #openstack-containers | 14:07 | |
*** adrian_otto has joined #openstack-containers | 14:09 | |
sdake__ | yuanying you in? | 14:12 |
sdake__ | madhuri nice job on k8sclient!! | 14:12 |
sdake__ | that blueprint has been sitting around unsolved for months | 14:13 |
sdake__ | finally some action :) | 14:13 |
*** sdake has joined #openstack-containers | 14:16 | |
openstackgerrit | Steven Dake proposed stackforge/magnum: Modify documentation to point to kubernetes-0.11 atomic image https://review.openstack.org/166662 | 14:18 |
*** coolsvap has joined #openstack-containers | 14:19 | |
*** sdake__ has quit IRC | 14:20 | |
*** sdake__ has joined #openstack-containers | 14:34 | |
*** sdake__ has quit IRC | 14:34 | |
*** achanda has joined #openstack-containers | 14:34 | |
*** sdake__ has joined #openstack-containers | 14:34 | |
*** sdake has quit IRC | 14:37 | |
*** hongbin has joined #openstack-containers | 14:38 | |
*** Marga_ has joined #openstack-containers | 14:38 | |
*** achanda has quit IRC | 14:41 | |
*** junhongl has joined #openstack-containers | 14:46 | |
*** Marga_ has quit IRC | 14:46 | |
*** Marga_ has joined #openstack-containers | 14:46 | |
*** junhongl has quit IRC | 14:50 | |
*** adrian_otto has quit IRC | 14:52 | |
*** junhongl has joined #openstack-containers | 14:59 | |
*** EricGonczer_ has quit IRC | 15:03 | |
*** EricGonczer_ has joined #openstack-containers | 15:03 | |
*** adrian_otto has joined #openstack-containers | 15:04 | |
*** junhongl has quit IRC | 15:04 | |
*** daneyon_ has joined #openstack-containers | 15:19 | |
*** daneyon has quit IRC | 15:22 | |
*** hongbin has quit IRC | 15:38 | |
*** Tango has joined #openstack-containers | 15:42 | |
*** oro has quit IRC | 16:10 | |
*** oro has joined #openstack-containers | 16:11 | |
*** coolsvap has quit IRC | 16:19 | |
*** Marga_ has quit IRC | 16:25 | |
*** dboik_ has joined #openstack-containers | 16:31 | |
*** suro-patz has joined #openstack-containers | 16:32 | |
*** junhongl has joined #openstack-containers | 16:34 | |
*** dboik has quit IRC | 16:35 | |
*** junhongl has quit IRC | 16:38 | |
*** unicell has joined #openstack-containers | 16:40 | |
*** junhongl has joined #openstack-containers | 16:46 | |
*** Marga_ has joined #openstack-containers | 16:47 | |
*** oro has quit IRC | 16:48 | |
*** oro has joined #openstack-containers | 16:51 | |
*** junhongl has quit IRC | 16:53 | |
*** diga_ has joined #openstack-containers | 16:53 | |
*** vilobhmm has joined #openstack-containers | 16:58 | |
*** vilobhmm has quit IRC | 16:58 | |
*** vilobhmm has joined #openstack-containers | 16:58 | |
*** vilobhmm1 has joined #openstack-containers | 16:59 | |
diga_ | sdake__: Hi | 17:00 |
sdake__ | how can i help diga | 17:00 |
diga_ | check this - https://review.openstack.org/#/c/165346/ | 17:01 |
diga_ | https://review.openstack.org/#/c/164971/ | 17:01 |
sdake__ | I agree with Tom the name is bad | 17:02 |
sdake__ | I know its bikeshedding | 17:02 |
sdake__ | but this stuff is permanent | 17:02 |
sdake__ | how about cluster_model? | 17:03 |
*** vilobhmm has quit IRC | 17:03 | |
sdake__ | this is a little more generic | 17:03 |
diga_ | yep | 17:03 |
diga_ | can I do that change in the https://bugs.launchpad.net/magnum/+bug/1435200 ? | 17:04 |
openstack | Launchpad bug 1435200 in Magnum "Tech Debt: Add enum type for cluster_type field in baymodel api." [Wishlist,Confirmed] - Assigned to Digambar (digambarpatil15) | 17:04 |
diga_ | because I have filed a separate TechDebt for that | 17:05 |
sdake__ | I think tech debt is not appropriate in this case | 17:05 |
sdake__ | I think the variable name needs to be changed | 17:05 |
diga_ | okay | 17:05 |
diga_ | okay | 17:05 |
sdake__ | let me review the patch in detail sec | 17:05 |
diga_ | ok | 17:05 |
diga_ | .3 | 17:15 |
sdake__ | diga done | 17:15 |
sdake__ | needs more love | 17:15 |
sdake__ | have any questions? | 17:17 |
sdake__ | I think cluster_os is best probably | 17:17 |
sdake__ | we can figure out ironic vs virt from the flavor | 17:17 |
sdake__ | and need a cluster_coe | 17:17 |
sdake__ | some followup work ;) | 17:18 |
diga_ | sorry away for sometime | 17:18 |
diga_ | yes, I saw your comments | 17:18 |
diga_ | sdake__: can I replace cluster_type with cluster_os ? | 17:21 |
sdake__ | wfm as long as you get rid of ironic from the defines | 17:22 |
sdake__ | because ironic is not an os | 17:22 |
*** Marga_ has quit IRC | 17:22 | |
diga_ | yes | 17:23 |
diga_ | Hey, will ping you once I am done with the implementation | 17:24 |
diga_ | it's around 11PM on my side | 17:24 |
diga_ | Have a nice day :) | 17:25 |
*** harlowja has joined #openstack-containers | 17:27 | |
diga_ | sdake__: I will make the appropriate change as discussed | 17:27 |
*** achanda has joined #openstack-containers | 17:29 | |
sdake__ | get some sleep :) | 17:29 |
*** junhongl has joined #openstack-containers | 17:33 | |
*** junhongl has quit IRC | 17:37 | |
*** oro has quit IRC | 17:38 | |
*** achanda has quit IRC | 17:57 | |
*** daneyon has joined #openstack-containers | 18:01 | |
*** daneyon_ has quit IRC | 18:01 | |
adrian_otto | hi team. I was discussing Diga's https://review.openstack.org/165346 patch. The idea of cluster_type could be conflated to also include attributes for platform, discro, coe, etc… rather than just a name like 'coreos'. | 18:05 |
adrian_otto | one approach is to have additional attributes on the baymodel resource for platform, discro, coe, etc. | 18:05 |
adrian_otto | another approach (possibly more DRY style) would be to have a cluster_model resource that would hold those attributes that could be used by multiple baymodels | 18:06 |
adrian_otto | the advantage of packing it all into baymodel is that it keeps the object model and API simpler | 18:06 |
adrian_otto | the advantage of having a new resource for this is that cloud operators are likely to have an opinionated view on what combinations they support, and can have a small number of cluster_type resources that a larger number of baymodel resources would reference. | 18:07 |
adrian_otto | thoughts on this? | 18:07 |
adrian_otto | s/discro/distro/g | 18:08 |
diga_ | having a separate cluster_model resource will be good for the long term | 18:09 |
adrian_otto | another issue to consider is that if we start using enums, we are applying a level of rigidity. To define new bominations as a cloud operator, you would need to make code changes (bad!) | 18:10 |
adrian_otto | *combinations | 18:10 |
diga_ | hmm | 18:11 |
adrian_otto | so maybe having a cluster_model resource with text fields for each of the attributes would allow a higher degree of customization without making code changes. | 18:11 |
diga_ | yes | 18:11 |
diga_ | sounds good | 18:12 |
adrian_otto | that's both a good thing, and a bad thing at the same time. Clear as mud! | 18:12 |
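The trade-off being weighed here, enum rigidity versus operator-defined text fields, can be sketched roughly as follows. This is a hypothetical illustration: the class and field names are assumptions, not Magnum's actual object model.

```python
# Hypothetical sketch of the two approaches discussed above; none of these
# names come from the actual Magnum code.

# Approach 1: an enum-style field. Adding a new combination means a code
# change, which is the rigidity adrian_otto is worried about.
ALLOWED_COE = ('kubernetes', 'swarm')

def validate_coe_enum(value):
    if value not in ALLOWED_COE:
        raise ValueError("unsupported coe: %s" % value)
    return value

# Approach 2: a free-text ClusterModel resource. Operators define whatever
# combinations they support, with no code change required.
class ClusterModel(object):
    def __init__(self, platform, distro, coe):
        self.platform = platform  # e.g. 'vm' or 'ironic'
        self.distro = distro      # e.g. 'coreos' or 'atomic'
        self.coe = coe            # e.g. 'kubernetes' or 'swarm'

# A value the enum would reject works fine as free text.
mesos_model = ClusterModel(platform='ironic', distro='fedora', coe='mesos')
```

The free-text variant trades validation for flexibility, which is the "good thing and bad thing at the same time" noted above.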
diga_ | sdake__: you there ? | 18:12 |
diga_ | :) | 18:12 |
adrian_otto | He might be sdake_ or sdake__ | 18:13 |
*** Marga_ has joined #openstack-containers | 18:14 | |
diga_ | he is active on sdake__ I guess | 18:14 |
*** achanda has joined #openstack-containers | 18:15 | |
diga_ | by the way, having a separate cluster_model resource fits our use case | 18:15 |
apmelton | adrian_otto: diga_: aren't those fields all represented on the image? | 18:15 |
diga_ | no | 18:15 |
apmelton | what do we mean by platform, distro, and coe? | 18:16 |
diga_ | coreos, fedora, swarm, ironic | 18:16 |
diga_ | we are talking about these | 18:17 |
apmelton | platform = vm, ironic; | 18:17 |
apmelton | distro = coreos, atomic; | 18:17 |
apmelton | coe = k8s,swarm; | 18:17 |
apmelton | like that? | 18:17 |
diga_ | yes | 18:17 |
*** harlowja has quit IRC | 18:18 | |
adrian_otto | those are image independent | 18:18 |
apmelton | platform is on the flavor, distro is on the image | 18:18 |
apmelton | coe is the only thing we're adding | 18:18 |
adrian_otto | oh, I see what you mean now | 18:19 |
apmelton | what does coe stand for? | 18:19 |
adrian_otto | coe = container orchestration environment | 18:19 |
apmelton | gotcha | 18:19 |
apmelton | was this discussed at the meeting last week? | 18:19 |
adrian_otto | it's an acronym we made up about a week ago | 18:19 |
*** achanda has quit IRC | 18:19 | |
apmelton | apologies for missing that, was basically in a week-long set of meetings | 18:19 |
*** achanda has joined #openstack-containers | 18:20 | |
adrian_otto | we have not really talked about this design decision outside of the review I linked above. | 18:20 |
adrian_otto | I suppose the trouble happens when Magnum is looking at a baymodel resource | 18:21 |
*** harlowja has joined #openstack-containers | 18:21 | |
adrian_otto | to create the bay, it needs to know if you want coreos on ironic, or coreos on vms, or…. | 18:21 |
apmelton | I can understand not wanting to query nova/glance every time | 18:21 |
adrian_otto | so referring to the glance image and flavor from the baymodel could surface the answer to that potential ambiguity | 18:22 |
apmelton | I just don't think we want to rely on the user knowing with image X I need to use distro X, and ditto with platform/flavor | 18:22 |
*** vilobhmm11 has joined #openstack-containers | 18:22 | |
adrian_otto | chances are that baymodels will be prepared by cloud operators, not users. | 18:22 |
apmelton | adrian_otto: true | 18:23 |
apmelton | they are basically flavors, which are prepared by operators | 18:23 |
adrian_otto | #link https://github.com/stackforge/magnum/blob/master/magnum/objects/baymodel.py Baymodel | 18:23 |
*** vilobhmm1 has quit IRC | 18:23 | |
adrian_otto | we already have image_id and flavor_id on there | 18:24 |
*** diga_ has quit IRC | 18:24 | |
*** diga_ has joined #openstack-containers | 18:25 | |
adrian_otto | the image_id will not yield the distro information… unless that metadata has been added by Magnum or Magnum's administrator somehow | 18:26 |
diga_ | sorry got disconnected | 18:26 |
adrian_otto | diga: https://github.com/stackforge/magnum/blob/master/magnum/objects/baymodel.py baymodel already has references to the flavor_id and image_id | 18:26 |
*** nachiket has quit IRC | 18:27 | |
diga_ | okay | 18:27 |
adrian_otto | the flavor tells us the platform already | 18:27 |
*** nachiket has joined #openstack-containers | 18:27 | |
adrian_otto | so the question comes about the distro, which could be added to image metadata | 18:27 |
*** achanda has quit IRC | 18:27 | |
*** Marga_ has quit IRC | 18:27 | |
*** achanda has joined #openstack-containers | 18:28 | |
apmelton | https://wiki.openstack.org/wiki/Glance-common-image-properties-os_distro | 18:28 |
adrian_otto | ok, so that value could be set to 'coreos' | 18:29 |
apmelton | adrian_otto: looks like we're a little non-standard, we're setting it to com.coreos on our public images | 18:29 |
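Reading the os_distro property apmelton links could look roughly like this. It is a sketch against a plain dict standing in for an image record, not the real glance client, and the normalization table is an assumption prompted by the non-standard 'com.coreos' value mentioned above.

```python
# Sketch: derive the distro from glance-style image properties. Uses a plain
# dict as a stand-in for an image record rather than python-glanceclient.

# Map non-standard os_distro values to common ones; 'com.coreos' is the
# non-standard value apmelton mentions seeing on public images.
NORMALIZE = {
    'com.coreos': 'coreos',
}

def distro_from_image(image_properties):
    raw = image_properties.get('os_distro')
    if raw is None:
        return None  # the operator never tagged the image
    return NORMALIZE.get(raw, raw)

print(distro_from_image({'os_distro': 'com.coreos'}))     # coreos
print(distro_from_image({'os_distro': 'fedora-atomic'}))  # fedora-atomic
```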
diga_ | adrian_otto: flavor = m1.small etc, right ? | 18:31 |
adrian_otto | RAX is a perfect example of what not to do ;-) | 18:31 |
adrian_otto | diga_: yes, the flavor is the arrangement of the "machine" (RAM, DISK, Network, etc) | 18:32 |
diga_ | yep | 18:32 |
adrian_otto | m1.small is the common name of a popular flavor | 18:32 |
diga_ | yes | 18:33 |
apmelton | flavor is going to be a bit harder to map to platform | 18:33 |
apmelton | I'm not sure there's a standard at all for representing that | 18:33 |
adrian_otto | that could be in the baymodel | 18:34 |
diga_ | check this - https://github.com/openstack/glance/blob/master/glance/db/sqlalchemy/models.py | 18:34 |
diga_ | ImageProperty model | 18:34 |
apmelton | diga_: flavor's have something similar called extra_specs: https://github.com/openstack/nova/blob/master/nova/db/sqlalchemy/models.py#L1063 | 18:36 |
apmelton | but I don't think there's a standard extra spec to differentiate a VM flavor from an Ironic flavor | 18:37 |
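Since no standard extra spec distinguishes a VM flavor from an Ironic flavor, one workaround is an operator-chosen key, along these lines. The 'magnum:platform' key is invented purely for illustration, not a real convention.

```python
# Sketch: deduce the platform from flavor extra_specs. 'magnum:platform' is
# a made-up key for illustration; as noted above, no standard extra spec
# distinguishes a VM flavor from an Ironic flavor.

def platform_from_flavor(extra_specs, default='vm'):
    return extra_specs.get('magnum:platform', default)

assert platform_from_flavor({'magnum:platform': 'ironic'}) == 'ironic'
assert platform_from_flavor({}) == 'vm'  # fall back when the key is absent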
adrian_otto | diga, while I'm reflecting on this, can you remind me what software is going to read the type value, and what will it mean? | 18:38 |
adrian_otto | it it just the bay creation code? | 18:39 |
diga_ | yes | 18:39 |
*** nachiket has quit IRC | 18:39 | |
*** nachiket has joined #openstack-containers | 18:40 | |
*** dboik_ has quit IRC | 18:40 | |
adrian_otto | ok, so in the interest of keeping things simple, I am going to reply on the comment stream of the review indicating that we should just implement this as attributes on the baymodel | 18:40 |
adrian_otto | that the architecture is already on the flavor, and it does not need an attribute on baymodel | 18:40 |
adrian_otto | we can do a regex string match for 'coreos' on the image resource linked to the baymodel.image_id | 18:42 |
adrian_otto | so we just need the attribute that means cluster_coe_type | 18:42 |
diga_ | okay | 18:43 |
adrian_otto | and until there is some plugin mechanism for defining new ones, that can just be a text enum type | 18:43 |
diga_ | okay | 18:43 |
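The simplified approach proposed above, a regex match for 'coreos' on the linked image plus a small text enum for the coe attribute, might look roughly like this. Function names and the enum values are illustrative assumptions.

```python
import re

# Sketch of the simplified approach: detect coreos via a regex match on the
# image name, and keep coe as a small text enum until some plugin mechanism
# for defining new ones exists. All names here are illustrative assumptions.

COE_TYPES = ('kubernetes', 'swarm')

def looks_like_coreos(image_name):
    # Case-insensitive substring match, since image names vary in casing.
    return re.search(r'coreos', image_name, re.IGNORECASE) is not None

def validate_coe(coe):
    if coe not in COE_TYPES:
        raise ValueError('unknown coe: %r' % (coe,))
    return coe

assert looks_like_coreos('CoreOS-stable-633.1.0')
assert not looks_like_coreos('fedora-21-atomic-2')
assert validate_coe('swarm') == 'swarm'
```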
adrian_otto | apmelton: what do you think? | 18:43 |
apmelton | I don't think 'that the architecture is already on the flavor, and it does not need an attribute on baymodel' is actually correct | 18:45 |
apmelton | I was wrong when I said platform was represented by/on the flavor | 18:45 |
apmelton | this may just be a rackspace issues where we're not following standard flavor naming mechanisms | 18:45 |
*** s_albtraum has joined #openstack-containers | 18:46 | |
adrian_otto | ok | 18:47 |
adrian_otto | so the question boils down to this... | 18:47 |
adrian_otto | do we express each of these things in the baymodel, or do we have an additional clustermodel? | 18:48 |
diga_ | currently we are defining each of these things in baymodel | 18:48 |
adrian_otto | if we have a baymodel, and it has the platform on it, then in order to support that bay type on multiple platforms, then we'd need to have a multiplicity of baymodel resources that are all very similar | 18:48 |
apmelton | if there are going to be things that an operator will want to set in stone, a separate clustermodel might be advantageous | 18:49 |
adrian_otto | rather than a single entry that could have a list of available clustermodels it can be used with. | 18:49 |
diga_ | I don't think we have platform on the baymodel | 18:49 |
adrian_otto | we don't yet, but sdake__ suggested we have some way to express it. | 18:49 |
diga_ | yes, we can define it in baymodel | 18:50 |
*** Marga_ has joined #openstack-containers | 18:50 | |
apmelton | so I guess the way I could see it, ClusterModel are the models the operator wants to support, and BayModel holds the user tweakable arguments | 18:51 |
diga_ | but adrian_otto having clustermodel will be good | 18:51 |
sdake__ | hey sorry was on a call | 18:52 |
*** dboik has joined #openstack-containers | 18:52 | |
sdake__ | letme catch up | 18:52 |
adrian_otto | apmelton: we don't currently have the concept of a readonly resource in magnum (or writable only by admin) | 18:52 |
adrian_otto | that would be a new concept | 18:52 |
adrian_otto | to some extent this is limited by which flavors are defined | 18:54 |
diga_ | apmelton: aggeed! | 18:54 |
diga_ | agrred! | 18:54 |
diga_ | s/agrred/agreed | 18:54 |
diga_ | if someone wants to specify extra properties, then we should have ClusterModel in place | 18:54 |
*** oro has joined #openstack-containers | 18:55 | |
diga_ | adrian_otto, apmelton - it's 12:30AM on my side, I will try to attend tomorrow's meeting. | 18:55 |
diga_ | Have a nice day :) | 18:56 |
adrian_otto | diga_: thanks. I'll put it on the agenda. | 18:56 |
apmelton | have a good night diga_! | 18:56 |
adrian_otto | g'nite! | 18:56 |
diga_ | adrian_otto: sure | 18:56 |
apmelton | so, with all this talk of renaming columns on baymodels, I was thinking about some columns on the bay | 18:57 |
apmelton | and what they're actually used for | 18:57 |
adrian_otto | we might also want some way to pass parameters into a bay create call rather than relying only on the baymodel | 18:57 |
sdake__ | all caught up | 18:58 |
adrian_otto | ok | 18:58 |
apmelton | adrian_otto: docker_volume_size would be a good fit for that model | 18:59 |
sdake__ | think we need to have this discussion in the team meeting | 18:59 |
sdake__ | and block the review until then | 18:59 |
sdake__ | or alternatively on the ml | 18:59 |
sdake__ | i guess I should be more on the ball with my reviews, I should have noticed this earlier | 19:00 |
adrian_otto | so sdake__, We will. My question for you is why did you raise the concern of representing os, platform, and coe | 19:00 |
*** diga_ has quit IRC | 19:00 | |
sdake__ | the template is going to need the inputs to do the right thing | 19:00 |
sdake__ | for example, how do we launch a swarm vs k8s now? | 19:00 |
adrian_otto | so one approach is to use a ClusterModel | 19:00 |
sdake__ | you mean a new api type? | 19:01 |
sdake__ | i am not in favor of new apis for this :) | 19:01 |
adrian_otto | another approach would be to have optional parameters you pass to the bay create call to get behaviour that is either default (operator settable) or alternate (if available). | 19:01 |
sdake__ | i think we don't want to allow users to choose that unless the cloud operator has set the baymodels up | 19:02 |
sdake__ | could be a security concern | 19:02 |
sdake__ | for example, you could easily root a box with swarm using a bindmount | 19:02 |
sdake__ | -v /etc:/etc | 19:02 |
sdake__ | vi /etc/shadow | 19:02 |
sdake__ | profit | 19:02 |
adrian_otto | but if that is running on a host that's owned by that same tenant, maybe getting root is fine. | 19:02 |
sdake__ | except for the ironic case :) | 19:03 |
adrian_otto | why? | 19:03 |
sdake__ | the more I think about ironic, the more concerned I am about security | 19:03 |
apmelton | sdake__: isn't that an ironic concern though? | 19:03 |
sdake__ | because if you root a baremetal machine, you have full access to the network infrastructure | 19:03 |
sdake__ | and could spread your rooting around | 19:04 |
adrian_otto | that depends if you are running stock firmware or not | 19:04 |
sdake__ | outside of your tenant | 19:04 |
*** raginbajin has quit IRC | 19:04 | |
*** rcleere_away has quit IRC | 19:04 | |
*** rcleere has joined #openstack-containers | 19:04 | |
*** raginbajin has joined #openstack-containers | 19:04 | |
sdake__ | let's just all admit ironic is a challenge for security in our model :) | 19:04 |
sdake__ | and we need to make sure its secure | 19:04 |
sdake__ | i agree tho, vms, who cares | 19:04 |
sdake__ | the tenant made em, if they want to root em, that is their choice | 19:04 |
adrian_otto | apmelton made a good point… that should be an ironic concern | 19:05 |
adrian_otto | as I see it, we can create containers on a nova/ironic instance | 19:05 |
sdake__ | we will need to install an l3 agent on the ironic host as a container | 19:05 |
adrian_otto | and the cloud operator can decide which instance types make sense for them | 19:05 |
adrian_otto | why should we give special consideration to the security characteristics of one instance or another? | 19:06 |
sdake__ | because vms offer special protection that baremetal does not | 19:07 |
adrian_otto | I don't mean to be flippant here, I want to be sure that our interest is well considered before we head a direction | 19:07 |
sdake__ | you can't break out of a vm into the bare metal host | 19:07 |
sdake__ | atleast I don't think its possible | 19:07 |
adrian_otto | it's possible, just really hard | 19:07 |
sdake__ | you could break out of a container onto a bare metal host | 19:07 |
apmelton | sdake__: if a user wanted root on an ironic instance, couldn't they just go straight to nova? | 19:08 |
adrian_otto | yes, and the container breakout is easier. | 19:08 |
sdake__ | I dont know enough about ironic to answer apmelton | 19:08 |
*** Marga_ has quit IRC | 19:08 | |
apmelton | at least in the rackspace model, users who request ironic instances from nova get root on them | 19:08 |
sdake__ | i think realistically if people want to deploy ironic instances, they should be in a separate network vlan | 19:09 |
*** Marga_ has joined #openstack-containers | 19:09 | |
apmelton | after they're done the entire host is wiped, and anything re-flashable is re-flashed | 19:09 |
adrian_otto | the whole point of ironic is to give cloud tenants the ability to provision bare metal instances, and presumably so those tenants can use those instances at any access level. | 19:09 |
sdake__ | from the rest of the cloud | 19:09 |
sdake__ | i see, that makes sense | 19:09 |
sdake__ | maybe i'm just being paranoid | 19:10 |
sdake__ | I hear people bitch about security + containers 24/7 | 19:10 |
adrian_otto | so my attitude about instance types is that Magnum could be totally agnostic to them | 19:10 |
adrian_otto | and if you want maximum security isolation between containers, you can place them into nova instances in a 1:1 ratio. | 19:11 |
sdake__ | ya that works but not good density | 19:11 |
sdake__ | containers do not offer a security solution | 19:11 |
adrian_otto | as long as the tenant_id of the nova instances matches the tenant_id of the magnum resources, then pack them in. | 19:11 |
sdake__ | apmelton if you have some cycles there is a review outstanding that needs testing | 19:12 |
apmelton | sdake__: link? | 19:12 |
sdake__ | https://review.openstack.org/#/c/166662/ and its 3 dependent patches | 19:13 |
sdake__ | only +2 if it works :) | 19:13 |
adrian_otto | apmelton can +1 if it works. | 19:13 |
sdake__ | oh right | 19:13 |
sdake__ | my dev environment where i run magnum is busted - nothing python works :( | 19:14 |
apmelton | not core, yet | 19:14 |
adrian_otto | ok, I'm putting the ClusterModel discussion on tomorrow's team meeting agenda | 19:14 |
sdake__ | its a bad sign when your github streak is 7 days :( | 19:15 |
apmelton | I saw the thread about moving heat to stackforge, what's the timeline on that? | 19:16 |
apmelton | heat-containers* | 19:16 |
adrian_otto | that's in review | 19:16 |
sdake__ | on patch revision 8 last I checked :) | 19:17 |
sdake__ | heat-coe-templates is the repo | 19:17 |
apmelton | which project is that review on? | 19:17 |
sdake__ | project-config | 19:18 |
sdake__ | https://review.openstack.org/#/c/164806/ | 19:19 |
apmelton | thanks! | 19:19 |
sdake__ | adrian_otto we got another +1 on the openstack namespace change | 19:19 |
adrian_otto | oh, good! | 19:21 |
adrian_otto | mordred. That's promising. | 19:23 |
adrian_otto | https://review.openstack.org/161080 | 19:23 |
adrian_otto | for those wondering what we are talking about | 19:24 |
*** Marga_ has quit IRC | 19:26 | |
adrian_otto | the OpenStack TC meets tomorrow at 20:00 UTC, and magnum is on the Agenda: https://wiki.openstack.org/wiki/Meetings/TechnicalCommittee | 19:28 |
*** bond__ has joined #openstack-containers | 19:30 | |
*** dboik has quit IRC | 19:31 | |
*** dboik has joined #openstack-containers | 19:32 | |
*** prad has quit IRC | 19:32 | |
sdake__ | noon and children still sleeping | 19:33 |
sdake__ | spring break ftw | 19:33 |
*** oro_ has joined #openstack-containers | 19:39 | |
*** sdake has joined #openstack-containers | 19:46 | |
*** sdake__ has quit IRC | 19:49 | |
adrian_otto | wow, those kids must have been up late? | 19:57 |
*** Marga_ has joined #openstack-containers | 20:07 | |
*** daneyon has left #openstack-containers | 20:41 | |
adrian_otto | sdake/sdake_: yt | 21:00 |
adrian_otto | ? | 21:00 |
sdake | shoot | 21:01 |
*** vilobhmm11 has quit IRC | 21:01 | |
*** vilobhmm1 has joined #openstack-containers | 21:02 | |
sdake | no they are just lazy | 21:02 |
*** vilobhmm1 has quit IRC | 21:02 | |
sdake | adrian_otto ^^ | 21:02 |
*** vilobhmm1 has joined #openstack-containers | 21:02 | |
*** vilobhmm11 has joined #openstack-containers | 21:09 | |
*** vilobhmm1 has quit IRC | 21:13 | |
*** Marga_ has quit IRC | 21:18 | |
*** prad has joined #openstack-containers | 21:20 | |
*** vilobhmm11 has quit IRC | 21:20 | |
*** vilobhmm1 has joined #openstack-containers | 21:20 | |
*** sdake__ has joined #openstack-containers | 21:22 | |
*** sdake has quit IRC | 21:26 | |
*** dboik_ has joined #openstack-containers | 21:27 | |
*** dboik has quit IRC | 21:30 | |
*** EricGonczer_ has quit IRC | 21:38 | |
*** sdake__ has quit IRC | 21:54 | |
*** bond__ has quit IRC | 21:56 | |
*** sdake has joined #openstack-containers | 21:57 | |
*** adrian_otto has quit IRC | 22:01 | |
*** Marga_ has joined #openstack-containers | 22:06 | |
*** julim has quit IRC | 22:09 | |
openstackgerrit | Andrew Melton proposed stackforge/magnum: Update pod_delete call for new log message https://review.openstack.org/167022 | 22:10 |
*** unicell1 has joined #openstack-containers | 22:21 | |
*** unicell has quit IRC | 22:21 | |
sdake | interesting just got a recruitment offer for virtual synchrony job | 22:21 |
sdake | not often that happens :) | 22:22 |
sdake | too bad I can't do a brain transfer of all I know there. | 22:22 |
sdake | apmelton did you test that patch with the latest fedora-atomic-2 image? | 22:23 |
sdake | the one I posted earlier | 22:24 |
*** oro_ has quit IRC | 22:29 | |
*** oro has quit IRC | 22:30 | |
*** dboik_ has quit IRC | 22:30 | |
*** dboik has joined #openstack-containers | 22:30 | |
*** prad has quit IRC | 22:31 | |
*** dboik has quit IRC | 22:32 | |
*** Marga_ has quit IRC | 23:02 | |
*** Marga_ has joined #openstack-containers | 23:03 | |
*** yuanying has joined #openstack-containers | 23:22 | |
*** EricGonczer_ has joined #openstack-containers | 23:22 | |
*** thomasem has quit IRC | 23:26 | |
*** thomasem has joined #openstack-containers | 23:33 | |
*** hblixt has joined #openstack-containers | 23:34 | |
sdake | need a core to +a https://review.openstack.org/#/c/166661/ plz | 23:34 |
sdake | looks like it has received sufficient testing | 23:35 |
yuanying | ok, yesterday my environment was busted... | 23:35 |
sdake | np | 23:35 |
sdake | and https://review.openstack.org/#/c/166662/ | 23:36 |
openstackgerrit | Merged stackforge/magnum: Merge heat-kubernetes pull request 14 https://review.openstack.org/166661 | 23:39 |
openstackgerrit | Merged stackforge/magnum: Merge heat-kubernetes pull request 15 https://review.openstack.org/166687 | 23:39 |
openstackgerrit | Merged stackforge/magnum: Merge heat-kubernetes pull request 16 https://review.openstack.org/166688 | 23:39 |
sdake | new streak record on github - 8 days | 23:41 |
* sdake dies of exhaustion | 23:41 | |
*** EricGonc_ has joined #openstack-containers | 23:45 | |
*** adrian_otto has joined #openstack-containers | 23:47 | |
*** EricGonczer_ has quit IRC | 23:48 | |
*** adrian_otto has quit IRC | 23:52 | |
*** mfalatic has joined #openstack-containers | 23:53 | |
*** dims_ has joined #openstack-containers | 23:58 |
Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!