*** hyunsikyang has quit IRC | 04:13 | |
*** hyunsikyang has joined #starlingx | 04:14 | |
*** sgw has quit IRC | 04:59 | |
*** sgw has joined #starlingx | 05:16 | |
*** hyunsikyang__ has joined #starlingx | 05:55 | |
*** hyunsikyang has quit IRC | 05:59 | |
*** hyunsikyang has joined #starlingx | 06:02 | |
*** hyunsikyang__ has quit IRC | 06:06 | |
*** hyunsikyang__ has joined #starlingx | 06:07 | |
*** hyunsikyang has quit IRC | 06:09 | |
*** bengates has joined #starlingx | 07:29 | |
*** bengates has quit IRC | 07:30 | |
*** bengates has joined #starlingx | 07:30 | |
*** riuzen has joined #starlingx | 08:55 | |
*** riuzen has quit IRC | 09:12 | |
*** hyunsikyang__ has quit IRC | 11:08 | |
*** hyunsikyang has joined #starlingx | 11:08 | |
*** mthebeau has joined #starlingx | 11:53 | |
*** ijolliffe has joined #starlingx | 11:54 | |
*** slittle1 has quit IRC | 12:38 | |
*** slittle1 has joined #starlingx | 13:07 | |
sgw | Morning all | 13:18 |
bwensley | morning | 13:24 |
*** swebster has quit IRC | 13:32 | |
*** stampeder has left #starlingx | 13:48 | |
*** openstackstatus has quit IRC | 13:53 | |
*** stampeder has joined #starlingx | 13:54 | |
stampeder | Good morning all. | 13:55 |
*** openstackstatus has joined #starlingx | 13:55 | |
*** ChanServ sets mode: +v openstackstatus | 13:55 | |
sgw | stampeder: how goes your duplex install? | 13:57 |
stampeder | It failed again last night. Same error that it could not save the system configuration. I am currently redoing the install in order to capture the log file. | 13:59 |
stampeder | sgw: Here is the error I am getting at the end of the ansible bootstrap: fatal: [localhost]: FAILED! => {"changed": false, "msg": "Failed to provision initial system configuration."}. I'm sending a pm with the ansible.log file. | 15:22 |
stampeder | sgw: Sent the ansible.log file to the email address. | 15:29 |
sgw | stampeder: is this a 3.0 failure or the 4/15 ISO? As I mentioned before, master is unstable (especially right now) and the 4/15 failed sanity. | 15:31 |
stampeder | This is definitely the 3.0 iso dated 12/19/2019: | 15:33 |
stampeder | ### StarlingX | 15:33 |
stampeder | ### Release 19.12 | 15:33 |
stampeder | ### | 15:33 |
stampeder | OS="centos" | 15:33 |
stampeder | SW_VERSION="19.12" | 15:33 |
stampeder | BUILD_TARGET="Host Installer" | 15:33 |
stampeder | BUILD_TYPE="Formal" | 15:33 |
stampeder | BUILD_ID="r/stx.3.0" | 15:33 |
stampeder | JOB="STX_BUILD_3.0" | 15:33 |
stampeder | BUILD_BY="starlingx.build@cengn.ca" | 15:33 |
stampeder | BUILD_NUMBER="21" | 15:33 |
stampeder | BUILD_HOST="starlingx_mirror" | 15:33 |
stampeder | BUILD_DATE="2019-12-13 02:30:00 +0000" | 15:33 |
stampeder | controller-0:~$ Cheers. | 15:33 |
*** slittle1 has quit IRC | 15:35 | |
sgw | It looks like the actual issue is this error | 15:35 |
sgw | "cgtsclient.exc.HTTPInternalServerError: Remote error: AddressNotFoundByName Address could not be found for controller-oam", | 15:35 |
sgw | Are you naming something controller-oem? | 15:38 |
bwensley | Does your localhost.yml have these: external_oam_node_0_address: <OAM-CONTROLLER-0-IP-ADDRESS> | 15:44 |
bwensley | external_oam_node_1_address: <OAM-CONTROLLER-1-IP-ADDRESS> | 15:44 |
sgw | Dyslex'ed the oam vs oem! | 15:46 |
stampeder | sgw: no. | 15:48 |
stampeder | My duplex localhost.yml | 15:50 |
stampeder | system_mode: duplex | 15:50 |
stampeder | dns_servers: | 15:50 |
stampeder |   - 8.8.8.8 | 15:50 |
stampeder |   - 8.8.4.4 | 15:50 |
stampeder | external_oam_subnet: 10.10.20.0/24 | 15:50 |
stampeder | external_oam_gateway_address: 10.10.20.1 | 15:50 |
stampeder | external_oam_floating_address: 10.10.20.181 | 15:50 |
stampeder | external_oam_node_0_address: 10.10.20.181 | 15:50 |
stampeder | external_oam_node_1_address: 10.10.20.182 | 15:50 |
stampeder | admin_username: admin | 15:50 |
stampeder | admin_password: G833ranch% | 15:50 |
stampeder | ansible_become_pass: G833ranch% | 15:50 |
stampeder | # Add these lines to configure Docker to use a proxy server | 15:50 |
stampeder | # docker_http_proxy: http://my.proxy.com:1080 | 15:50 |
stampeder | # docker_https_proxy: https://my.proxy.com:1443 | 15:50 |
stampeder | # docker_no_proxy: | 15:50 |
stampeder | #   - 1.2.3.4 | 15:50 |
*** bengates has quit IRC | 16:14 | |
stampeder | sgw: Any thoughts on why this is failing? | 17:51 |
sgw | stampeder: sorry, no; I was hoping bwensley might chime in, your contents seem correct | 17:53 |
stampeder | sgw: That is why I don't understand what is happening. It's the 3.0 build iso that is posted for download. Hopefully bwensley will have some valuable information. | 17:58 |
sgw | I know that I have deployed 3.0 as duplex multiple times and not seen any issue like this. | 18:03 |
bwensley | I think I see it - you used the same address for external_oam_floating_address and external_oam_node_0_address. | 18:03 |
bwensley | Those need to be different. We should probably detect that and give a proper error. | 18:03 |
sgw | See that's why we need bart! | 18:03 |
sgw | I completely missed that. | 18:03 |
bwensley | You can raise an LP for that if you like. | 18:03 |
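The missing check bwensley describes (rejecting a localhost.yml whose external_oam_floating_address collides with a per-node address) could be sketched roughly as below. This is a hypothetical illustration: the function name `validate_oam_addresses` and its shape are invented here; StarlingX's real validation lives in its Ansible bootstrap playbooks and may differ.

```python
import ipaddress

def validate_oam_addresses(cfg):
    """Sketch of an OAM address sanity check for a localhost.yml dict.

    Raises ValueError if the floating and per-node OAM addresses are not
    all distinct, or if any of them falls outside external_oam_subnet.
    """
    keys = ("external_oam_floating_address",
            "external_oam_node_0_address",
            "external_oam_node_1_address")
    addrs = [ipaddress.ip_address(cfg[k]) for k in keys if k in cfg]
    if len(set(addrs)) != len(addrs):
        raise ValueError("OAM floating and node addresses must be distinct")
    subnet = ipaddress.ip_network(cfg["external_oam_subnet"])
    for addr in addrs:
        if addr not in subnet:
            raise ValueError(f"{addr} is not in {subnet}")
```

With stampeder's original values (floating and node_0 both 10.10.20.181) this raises immediately instead of failing deep inside the bootstrap.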
stampeder | See, leave it up to a good Western boy to catch that. So Bart, when I did the simplex I made my eno1 both my data port and my oam port. I am assuming that in this case I could make the external_oam_floating_address 10.10.20.171 and leave the external_oam_node_0_address as 10.10.20.181? | 18:18 |
bwensley | That should be OK. | 18:19 |
sgw | Yes, I think that should work, I use 10.10.10.3 / 10.10.10.4 and 10.10.10.5 | 18:19 |
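Putting the fix together, the OAM addressing block of the localhost.yml would become something like the following sketch, using the values agreed above; the key point is that the floating and per-node addresses must all differ:

```yaml
external_oam_subnet: 10.10.20.0/24
external_oam_gateway_address: 10.10.20.1
external_oam_floating_address: 10.10.20.171   # moved off .181 so it no longer collides
external_oam_node_0_address: 10.10.20.181
external_oam_node_1_address: 10.10.20.182
```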
stampeder | OK, great. I'm off to give this a try. Us network guys tend to overthink things sometimes. | 18:21 |
stampeder | sgw: Once I have the duplex up, how will I deploy the Cloud? | 18:50 |
sgw | stampeder: meaning controller-1 , workers and/or storage nodes? | 18:51 |
stampeder | Yes. | 18:51 |
sgw | once you have the c0 provisioned and unlocked, then you put c1 on both the OAM and MGMT networks and it should PXEboot from c0 via the mgmt lan | 18:52 |
sgw | then you would set the personality, this should be in the docs for bare metal "standard" configuration | 18:52 |
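The "set the personality" step sgw mentions is done from controller-0 once controller-1 has PXE-booted over the mgmt network and appears in the inventory. A rough sketch, following the StarlingX bare-metal docs (the host id 2 is typical for controller-1 but should be confirmed with `system host-list`):

```shell
# On controller-0, after controller-1 PXE-boots from the mgmt lan:
system host-list
system host-update 2 personality=controller
# ...wait for the install to complete and the host to go online,
# then configure its interfaces per the docs and unlock it:
system host-unlock controller-1
```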
stampeder | Where does the deployment manager come in? | 18:53 |
sgw | stampeder: dedicated vs controller storage? Will you have storage nodes or just use the contollers for storage? | 18:53 |
stampeder | I'll just use the controllers for now. I want to keep it as simple as possible at first. | 18:54 |
sgw | I am not following your question, are you planning to do distributed cloud? Remote clouds? | 18:54 |
sgw | so you're following this doc: https://docs.starlingx.io/deploy_install_guides/r3_release/bare_metal/controller_storage.html | 18:54 |
stampeder | I want to do what I refer to as a single area (area 0). This will contain controller-0, 1 and a couple of workers. Like a bank with Headquarter hub and a couple of branch locations (spokes). | 18:59 |
*** sgw has quit IRC | 19:05 | |
*** sgw has joined #starlingx | 19:07 | |
sgw | Ok, let's try that again; pidgin hung on me | 19:07 |
sgw | stampeder: so your spokes would be an AIO or small cloud (duplex with a couple of workers), this is what we call Distributed Cloud: https://docs.starlingx.io/deploy_install_guides/r3_release/distributed_cloud/index.html | 19:08 |
bwensley | Saul - no distributed cloud needed. | 19:09 |
bwensley | We support an AIO-DX config with worker nodes added. | 19:09 |
sgw | No, I think he is talking about a HQ / Branch setup, won't that be distcloud? | 19:11 |
stampeder | bwensley: How much work would it take to create small "demo" package with AIO simplex that could support 3 or 4 worker nodes? | 19:11 |
stampeder | sgw: Not sure how you classify distcloud. We call it a Hub and Spoke architecture. A Distributed cloud would be more like Verizon's VCP with many geographic areas and High Availability to my mind. | 19:13 |
sgw | stampeder: I have to defer to someone more knowledgeable about distcloud and deployments. I know my limits and primarily an OS guy! | 19:24 |
stampeder | TASK [bootstrap/bringup-essential-services : Mark the bootstrap as completed] *** | 19:30 |
stampeder | changed: [localhost] | 19:30 |
stampeder | PLAY RECAP ********************************************************************* | 19:30 |
stampeder | localhost : ok=253 changed=149 unreachable=0 failed=0 | 19:30 |
stampeder | Colour me a happy camper!! | 19:30 |
sgw | \o/ | 19:30 |
*** thorre has quit IRC | 20:17 | |
*** thorre has joined #starlingx | 20:25 | |
*** stampeder has quit IRC | 20:26 | |
*** stampeder has joined #starlingx | 20:36 | |
*** mthebeau has quit IRC | 20:51 | |
*** sgw has quit IRC | 20:59 | |
*** stampeder has quit IRC | 21:13 | |
*** sgw has joined #starlingx | 21:18 |