*** esberglu has quit IRC | 00:04 | |
*** edmondsw has joined #openstack-powervm | 00:20 | |
*** edmondsw has quit IRC | 00:25 | |
*** thorst has joined #openstack-powervm | 00:29 | |
*** jwcroppe has joined #openstack-powervm | 00:29 | |
*** thorst has quit IRC | 00:33 | |
*** jwcroppe has quit IRC | 00:33 | |
*** jwcroppe has joined #openstack-powervm | 00:33 | |
*** esberglu has joined #openstack-powervm | 00:40 | |
*** esberglu has quit IRC | 00:45 | |
*** thorst has joined #openstack-powervm | 00:59 | |
*** thorst has quit IRC | 01:07 | |
*** svenkat has joined #openstack-powervm | 01:09 | |
*** edmondsw has joined #openstack-powervm | 02:08 | |
*** svenkat has quit IRC | 02:10 | |
*** edmondsw has quit IRC | 02:13 | |
*** edmondsw has joined #openstack-powervm | 03:57 | |
*** apearson has joined #openstack-powervm | 03:59 | |
*** edmondsw has quit IRC | 04:01 | |
*** apearson has quit IRC | 04:04 | |
*** apearson has joined #openstack-powervm | 04:04 | |
*** tjakobs has joined #openstack-powervm | 04:43 | |
*** chhavi has joined #openstack-powervm | 04:49 | |
*** esberglu has joined #openstack-powervm | 04:53 | |
*** esberglu has quit IRC | 04:58 | |
*** apearson has quit IRC | 05:00 | |
*** tjakobs_ has joined #openstack-powervm | 05:02 | |
*** tjakobs has quit IRC | 05:04 | |
*** thorst has joined #openstack-powervm | 05:04 | |
*** thorst has quit IRC | 05:10 | |
*** tjakobs_ has quit IRC | 05:18 | |
*** edmondsw has joined #openstack-powervm | 05:45 | |
*** edmondsw has quit IRC | 05:49 | |
*** thorst has joined #openstack-powervm | 06:29 | |
*** thorst has quit IRC | 06:33 | |
*** esberglu has joined #openstack-powervm | 06:41 | |
*** tjakobs_ has joined #openstack-powervm | 06:45 | |
*** esberglu has quit IRC | 06:46 | |
*** tjakobs_ has quit IRC | 07:29 | |
*** edmondsw has joined #openstack-powervm | 07:33 | |
*** edmondsw has quit IRC | 07:37 | |
*** esberglu has joined #openstack-powervm | 08:29 | |
*** thorst has joined #openstack-powervm | 08:30 | |
*** esberglu has quit IRC | 08:34 | |
*** thorst has quit IRC | 08:34 | |
*** k0da has joined #openstack-powervm | 09:07 | |
*** dwayne has quit IRC | 09:16 | |
*** dwayne_ has joined #openstack-powervm | 09:21 | |
*** edmondsw has joined #openstack-powervm | 09:22 | |
*** mdrabe has quit IRC | 09:23 | |
*** edmondsw has quit IRC | 09:25 | |
*** mdrabe has joined #openstack-powervm | 09:26 | |
*** thorst has joined #openstack-powervm | 09:58 | |
*** thorst has quit IRC | 10:06 | |
*** thorst has joined #openstack-powervm | 10:07 | |
*** thorst has quit IRC | 10:11 | |
*** esberglu has joined #openstack-powervm | 10:17 | |
*** esberglu has quit IRC | 10:22 | |
*** edmondsw has joined #openstack-powervm | 11:09 | |
*** edmondsw has quit IRC | 11:13 | |
*** svenkat has joined #openstack-powervm | 11:42 | |
*** smatzek has joined #openstack-powervm | 11:50 | |
*** esberglu has joined #openstack-powervm | 12:05 | |
*** esberglu_ has joined #openstack-powervm | 12:08 | |
*** esberglu has quit IRC | 12:10 | |
*** edmondsw has joined #openstack-powervm | 12:13 | |
*** thorst has joined #openstack-powervm | 12:16 | |
*** esberglu_ has quit IRC | 12:32 | |
*** esberglu_ has joined #openstack-powervm | 12:56 | |
*** esberglu_ has quit IRC | 12:58 | |
*** esberglu_ has joined #openstack-powervm | 12:59 | |
*** apearson has joined #openstack-powervm | 13:05 | |
*** jwcroppe has quit IRC | 13:08 | |
*** kylek3h has joined #openstack-powervm | 13:28 | |
esberglu_ | edmondsw: efried: Increased the discover hosts timeout for CI in 5609 | 13:35 |
esberglu_ | That should hopefully alleviate some of the CI failures | 13:35 |
esberglu_ | Also made a new etherpad | 13:35 |
esberglu_ | https://etherpad.openstack.org/p/powervm_tempest_failures | 13:35 |
esberglu_ | For tracking failing tempest tests | 13:35 |
*** jwcroppe has joined #openstack-powervm | 13:52 | |
*** apearson has quit IRC | 13:59 | |
*** apearson has joined #openstack-powervm | 14:27 | |
*** esberglu_ has quit IRC | 14:44 | |
*** esberglu has joined #openstack-powervm | 14:44 | |
esberglu | efried: Looks like the marker LU uploads are freezing in CI | 14:55 |
esberglu | http://184.172.12.213/00/486700/2/check/nova-out-of-tree-pvm/fd429a4/ | 14:55 |
efried | Am I a little frightened that we still pass 745 tests when that happens? | 14:59 |
*** tjakobs_ has joined #openstack-powervm | 15:00 | |
efried | esberglu which neo? | 15:00 |
efried | or, really, any neo in the cluster will do | 15:00 |
esberglu | efried: neo6 | 15:01 |
esberglu | This is what is causing the really long CI runs that fail | 15:01 |
esberglu | Seems to be localized to the ssp cluster with neo 6, 7, 8, and 11 | 15:01 |
esberglu | efried: Nvm, seeing it on other ssp groups as well | 15:03 |
efried | esberglu If you clear out the marker, does the world go back to sanity? | 15:04 |
esberglu | efried: I wonder if the vm cleaner scripts are broken | 15:06 |
efried | Were the vm cleaner scripts supposed to scrub marker LUs too? | 15:07 |
esberglu | I think post_stack_vm_cleaner.py does (or did?). I'm pulling up the script now | 15:08 |
efried | esberglu btw, I'm still not stacking. | 15:10 |
esberglu | efried: neo-os-ci/ci-ansible/roles/ci-management/templates/scripts/post_stack_vm_cleaner.py | 15:11 |
esberglu | I think the remove_backing_storage function there would also clear out marker lu's or no? | 15:11 |
esberglu | efried: With uwsgi or mod_wsgi | 15:13 |
efried | mod_wsgi. With uwsgi, glance-api wouldn't start. With mod_wsgi, it started, but failed on image creation. | 15:13 |
efried | Seems like a timing thing tbh, cause I can run the failing command successfully right after stack fails. | 15:14 |
esberglu | efried: You could just add a wait before that command to confirm | 15:15 |
esberglu | I think edmondsw's system was hitting that sometimes | 15:15 |
efried | esberglu remove_backing_storage looks like it'll only remove LUs associated with SCSI mappings on LPARs. Which will never include marker LUs. | 15:15 |
efried | Oh, I wonder if I need to bust down my SMT level. I don't think I did that. | 15:15 |
esberglu | efried: Oh yeah didn't think of that | 15:16 |
esberglu | Worth a shot | 15:16 |
efried | sho | 15:16 |
efried | remind me how? | 15:17 |
edmondsw | what is a marker lu? | 15:17 |
efried | edmondsw Long story, hold on a tick | 15:17 |
edmondsw | sure | 15:17 |
esberglu | sudo ppc64_cpu --smt=off | 15:17 |
efried | found it - sudo ppc64_cpu --smt=off | 15:17 |
efried | did it halfway through stacking, we'll see if it takes ;-) | 15:17 |
esberglu | edmondsw: Did you still want me to walk you through SSP setup at some point? | 15:18 |
efried | edmondsw We've got some clever code in our SSP disk driver that coordinates image uploads across threads, including on hosts that otherwise don't know about each other. | 15:18 |
edmondsw | esberglu yes... how long do you think that'll take? | 15:18 |
efried | edmondsw Do you have an IP.com account? | 15:19 |
edmondsw | efried doesn't ring a bell | 15:19 |
esberglu | edmondsw: I'd say 5-20 minutes. 30 max | 15:20 |
esberglu | *15-20 | 15:20 |
efried | edmondsw emailed you | 15:20 |
edmondsw | efried tx | 15:21 |
edmondsw | esberglu doesn't sound too bad... you free now? | 15:21 |
esberglu | edmondsw: Sure | 15:21 |
* efried listens in | 15:21 | |
esberglu | 1) Log into the backing SAN gui | 15:24 |
edmondsw | check | 15:24 |
esberglu | 2) Go to the volumes section in the menu on the left of the screen (3rd one down) and select volumes by host | 15:25 |
edmondsw | check | 15:25 |
esberglu | 3) Find your system, should be neodev<neo#> | 15:26 |
esberglu | neodev5 IIRC for you | 15:26 |
edmondsw | yep, I'm there | 15:26 |
esberglu | 4) Create volume | 15:27 |
esberglu | thin provision, mdiskgrp0 | 15:27 |
edmondsw | well, I'm at neodev5-1... there's also neodev5-2... does it matter? | 15:27 |
esberglu | edmondsw: Is this a single vios setup or dual? | 15:27 |
edmondsw | whatever the novalink installer gave me | 15:28 |
*** apearson has quit IRC | 15:28 | |
esberglu | IIRC the -2 is for when you have dual vioses. I've only done this for single, not sure if it makes a difference | 15:28 |
esberglu | thorst probably knows | 15:28 |
esberglu | Anyway now you need 2 volumes, 1 meta and 1 data | 15:29 |
edmondsw | size? | 15:29 |
esberglu | Typically I do 1G for meta and 250G for data per system | 15:29 |
efried | You need to assign the *same* volume to *all* VIOSes that need to participate in the cluster. | 15:29 |
esberglu | So the CI SSPs have 4 systems (1G meta, 4x250G data) | 15:30 |
efried | If you have dual VIOS, it'd be a really good idea to have both participating, because I believe we try to map from both, in case one breaks. | 15:30 |
edmondsw | esberglu efried I think I have dual | 15:33 |
esberglu | edmondsw: I'm only able to log into one of your vioses but can ping both vios ips (according to the neo hardware page) | 15:33 |
edmondsw | hmm | 15:33 |
esberglu | I haven't done much with dual vios, maybe that's normal | 15:33 |
esberglu | Well anyway we can just start with 5-1 and then add 5-2 if we confirm dual | 15:34 |
*** k0da has quit IRC | 15:35 | |
edmondsw | ok, I have meta and data volumes created and mapped to the host | 15:36 |
edmondsw | 5-1 | 15:36 |
esberglu | edmondsw: Okay log into vios 1 | 15:37 |
esberglu | And run lspv | 15:37 |
esberglu | You should see hdisk0 and hdisk1 | 15:37 |
*** apearson has joined #openstack-powervm | 15:37 | |
edmondsw | pvmctl shows 2 vios but vios2 is "busy" | 15:39 |
thorst | hdisk0 and 1 may be the SAS drives. | 15:39 |
thorst | you have to do some inspection to figure out if SAS or FC | 15:39 |
thorst | mkvterm into vios2 to figure out why busy (I bet it didn't boot properly) | 15:39 |
esberglu | thorst: Yeah I'm just showing him that so when he runs lspv after he sees the difference | 15:40 |
thorst | esberglu: ahh, gotcha | 15:40 |
edmondsw | I see hdisk0 and 1 | 15:41 |
esberglu | okay now run | 15:41 |
esberglu | oem_setup_env | 15:41 |
esberglu | then | 15:41 |
esberglu | cfgmgr | 15:41 |
edmondsw | done | 15:43 |
edmondsw | couple errors about can't find child device | 15:43 |
esberglu | edmondsw: Still running or complete? | 15:43 |
edmondsw | complete | 15:43 |
thorst | that's normal for FC ports that aren't plugged in | 15:43 |
edmondsw | coo | 15:44 |
thorst | most of the cards have multiple ports, only one is plugged in because I didn't have enough switches for everything | 15:44 |
thorst | and FC is...you know...expensive :-) | 15:44 |
edmondsw | really ;) | 15:44 |
esberglu | thorst: The new disks should be showing up at this point though and they aren't | 15:44 |
thorst | so I came in half way through. Is the zoning done? The disks are out on the v7k? | 15:45 |
esberglu | thorst: Yep | 15:45 |
edmondsw | I created the disks on the SVC and said "create and map to host" | 15:45 |
edmondsw | meta and data | 15:45 |
thorst | k...sounds right. Were there other disks on the host? | 15:45 |
edmondsw | no | 15:46 |
esberglu | thorst: Nope. There were a bunch of gpfs ones previously but I deleted all of those last week | 15:46 |
thorst | PM me the v7k IP you're using | 15:46 |
thorst | o, ok | 15:46 |
thorst | this is neo5. | 15:46 |
esberglu | yep | 15:46 |
thorst | hmm...well I know zoning works there. | 15:46 |
thorst | so this is not installed in SDN or SDE mode...straight traditional mode? | 15:46 |
edmondsw | right | 15:46 |
esberglu | thorst: Yeah I've done this for CI quite a bit now, but that was always single vios. | 15:47 |
thorst | try mapping the disks to the neo-dev-5-2 | 15:47 |
esberglu | Would you just add the second vios to the cluster the same as you would add a second neo? | 15:47 |
thorst | sometimes vios1 maps to neo-dev-5-2 in the v7k...because it's really mapped to a card | 15:47 |
thorst | and how the cards get assigned to the VIOSes gets...odd | 15:48 |
thorst | esberglu: yep! | 15:48 |
*** efried has quit IRC | 15:49 | |
esberglu | edmondsw: To map to another host, just highlight the two disks and click the actions button | 15:49 |
esberglu | And there is a map to host option | 15:49 |
edmondsw | k | 15:49 |
edmondsw | done... so now run cfgmgr again? | 15:50 |
esberglu | Yep | 15:50 |
esberglu | If this worked and you run lspv again after you should see the new ones | 15:50 |
esberglu | lspv -size to see which is meta and which is data | 15:52 |
edmondsw | cfgmgr output is the same, but I do see hdisk2-3 now with lspv | 15:52 |
edmondsw | should I exit out of oem_setup_env? | 15:52 |
esberglu | yeah you can | 15:53 |
esberglu | Now all that's left is that cluster creation | 15:53 |
esberglu | cluster -create -clustername <clustername> -repopvs <meta_disk> -sp <sp_name> -sppvs <data_disk(s)> | 15:54 |
esberglu | clustername and sp_name are whatever you want to name them | 15:54 |
esberglu | ci_cluster and ci_ssp for CI | 15:54 |
edmondsw | running | 15:55 |
edmondsw | reports success | 15:56 |
esberglu | cluster -list | 15:56 |
esberglu | and | 15:56 |
esberglu | cluster -status | 15:57 |
esberglu | to make sure all is as expected | 15:57 |
edmondsw | lists and says ok | 15:57 |
esberglu | Perfect | 15:57 |
edmondsw | thanks! | 15:57 |
esberglu | Now once you get the 2nd vios figured out | 15:57 |
esberglu | - map to host in san (done already) | 15:57 |
esberglu | - run cfgmgr | 15:57 |
esberglu | - run "cluster -addnode -clustername <clustername> -hostname <vios2hostname>" | 15:58 |
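[Editor's note: the whole VIOS-side sequence esberglu walks through above can be condensed into a small helper. This is a sketch only: it just assembles the command strings quoted in the chat; actually running them requires a VIOS shell (the cfgmgr step runs inside oem_setup_env), and the function name and structure are hypothetical.]

```python
def ssp_setup_commands(clustername, sp_name, meta_disk, data_disks,
                       vios2_hostname=None):
    """Build the VIOS command sequence for SSP cluster creation.

    Returns the commands as plain strings, in the order given in the
    walkthrough above.  Nothing is executed here.
    """
    cmds = [
        'oem_setup_env',   # drop to the root AIX shell on the VIOS
        'cfgmgr',          # discover the newly mapped SAN disks
        'exit',            # back to the padmin restricted shell
        'lspv -size',      # tell the meta disk from the data disk(s) by size
        'cluster -create -clustername %s -repopvs %s -sp %s -sppvs %s'
        % (clustername, meta_disk, sp_name, ' '.join(data_disks)),
        'cluster -list',   # sanity checks
        'cluster -status',
    ]
    if vios2_hostname:
        # Second VIOS: map the same LUs to it in the SAN GUI first, then:
        cmds += ['cfgmgr',
                 'cluster -addnode -clustername %s -hostname %s'
                 % (clustername, vios2_hostname)]
    return cmds
```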
edmondsw | on that note... thorst I did mkvterm and it was sitting at a login prompt. I put "padmin" and it is now periodically spewing INIT: failed write of utmp entry: " cons" | 15:58 |
edmondsw | never asked for password | 15:59 |
esberglu | I think that the addnode can be run from either vios, but you might have to be on the one you created the cluster with | 15:59 |
thorst | edmondsw: that looks like it installed wrong...or the disk it installed into went haywire | 15:59 |
edmondsw | thorst do I need to do a whole new novalink install, or how do I fix that? | 16:00 |
*** efried has joined #openstack-powervm | 16:02 | |
thorst | edmondsw: well, first, do you care? If you do, then I'd just do a net install of the VIOS manually | 16:03 |
thorst | if it were me... | 16:03 |
thorst | but if you've never done that, it can be daunting | 16:03 |
thorst | and also, your novalink won't have redundancy (the NL installer makes things redundant by hosting itself out of a dual VIOS) | 16:04 |
edmondsw | maybe I don't care at the moment :) | 16:04 |
thorst | a reinstall is typically the best solution | 16:04 |
thorst | just gets you away from all the gork | 16:04 |
edmondsw | yeah, that's what I suspected | 16:04 |
thorst | I usually map FC devices to the hosts pre install | 16:05 |
thorst | have the installer install to those | 16:05 |
thorst | (FC is more reliable than those old SAS disks) | 16:05 |
thorst | then once install is done, add in the SSP disks | 16:05 |
thorst | and then we're good. | 16:05 |
*** jwcroppe has quit IRC | 16:20 | |
*** miltonm has quit IRC | 17:03 | |
*** miltonm has joined #openstack-powervm | 17:19 | |
esberglu | efried: I have some questions about the LU issues | 17:50 |
efried | Talk to me, Goose. | 17:50 |
esberglu | Okay you want to log into neo7 and run "pvmctl lu list" | 17:50 |
esberglu | You'll see there are tons of LUs | 17:50 |
esberglu | A 30G one for every instance in the ssp group | 17:51 |
esberglu | And then all of the marker LUs | 17:51 |
esberglu | I think that the marker LUs for each instance are getting cleaned up after the run, but does having that many around simultaneously have any implications? | 17:53 |
efried | Well, we could run out of space in the SSP. | 17:53 |
efried | though that doesn't seem to be an issue here. | 17:53 |
efried | esberglu I don't actually see any markers | 17:54 |
esberglu | Wait what are the 0.1G LUs | 17:55 |
esberglu | Are markers only the ones that start with part | 17:55 |
efried | yuh | 17:55 |
esberglu | efried: There was one of those around when we were talking earlier and I deleted it | 17:56 |
efried | Those 0.1 ones might be reaaaalllly old, from when we were using (or trying to use) a 100MB image. | 17:56 |
efried | Want me to clean 'em up? | 17:56 |
esberglu | efried: I think they are for the instances that get spawned during the tempest tests | 17:56 |
esberglu | I'm pretty sure they are all from live instances | 17:56 |
efried | But why would we ever create boot disks of 100MB? | 17:56 |
esberglu | We use that 100MB all zeros image still I believe | 17:57 |
efried | for what? | 17:57 |
esberglu | For the default image that tempest uses for spawns | 17:59 |
efried | I didn't think we used that. | 18:00 |
efried | Anyway, want me to get rid of the ones that are listed as not in use? | 18:00 |
efried | (means they don't have a scsi mapping associated with them.) | 18:00 |
*** k0da has joined #openstack-powervm | 18:02 | |
esberglu | efried: Sure. We should add that to the periodic_vm_cleaner script long term | 18:02 |
efried | k, looks like this: | 18:03 |
efried | pvmctl lu list -d name type in_use --hide-label --field-delim ' ' | awk '$2 == "VirtualIO_Disk" && $3 == "False" {print $1}' | while read n; do pvmctl lu delete -i name=$n; done | 18:03 |
efried | Getting some failures. | 18:03 |
efried | Which probably means I'm trying to delete ones for runs that are actually happening. | 18:03 |
efried | I did not restrict deletion to the 0.1GB ones. Mebbe I should have :) | 18:04 |
efried | so yeah, this is probably going to cause some CI failures | 18:09 |
esberglu | Eh oh well | 18:10 |
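[Editor's note: efried's one-liner above deleted every not-in-use disk LU; restricting the scrub to the 0.1 GB stragglers, as he muses, only needs one extra filter. A hypothetical sketch of that filter, assuming whitespace-separated `name size type in_use` rows (however they are obtained from pvmctl) rather than the exact field layout of any real pvmctl invocation:]

```python
def stale_tiny_lus(rows, size_gb=0.1):
    """From 'name size type in_use' rows, pick disk LUs of the given size
    that are not in use -- i.e. the 0.1 GB leftovers discussed above,
    while leaving the full-size in-flight boot disks alone."""
    victims = []
    for row in rows:
        name, size, lu_type, in_use = row.split()
        if (lu_type == 'VirtualIO_Disk' and in_use == 'False'
                and float(size) == size_gb):
            victims.append(name)
    return victims
```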
esberglu | This doesn't explain the marker lu issue though. Remind me why having a bad marker LU around breaks things? | 18:12 |
esberglu | Is it because the upload hangs and then everything else just sits waiting for it to finish? | 18:14 |
esberglu | efried: Is there not a good way to check for an upload hanging? | 18:15 |
efried | Yeah, the marker LU is how the guy doing the upload tells everyone else it's doing the upload, and they a) shouldn't do an upload, b) need to wait for the upload to finish. | 18:16 |
efried | If that process (the one doing the upload) gets killed prematurely, the marker LU doesn't get deleted. | 18:17 |
efried | So everyone thinks there's still an upload going, and they wait forever. | 18:17 |
efried | esberglu In order to tell that happened, you would have to find the image LU corresponding to the marker and somehow figure out if bytes are still going to it. | 18:18 |
efried | Cause I don't know of a way to backtrace from an LU to figure out which process/host created it. | 18:18 |
esberglu | efried: So in this case it probably happened when the network was going haywire | 18:18 |
esberglu | Which explains why we've been seeing issues since then | 18:18 |
efried | The only other way would be to scour the compute logs on all the hosts looking for the creation of that marker LU. | 18:19 |
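[Editor's note: the marker-LU protocol efried describes can be sketched roughly as below. This is a toy model with invented names (the real implementation lives in pypowervm's cluster_ssp module, linked later in the log); it shows why a marker orphaned by a killed uploader makes every other thread wait forever, absent a timeout.]

```python
import time

class FakeSSP:
    """In-memory stand-in for the SSP's LU list; the real code queries the
    shared storage pool, so every host sees the same set of LUs."""
    def __init__(self):
        self.lus = set()

def upload_image(ssp, image_name, my_uuid, do_upload, timeout=5):
    """A thread wanting image_name creates a marker LU 'part<uuid[:8]><name>'.
    The lowest-sorting marker wins the tiebreak and uploads; everyone else
    waits for all markers for that image to disappear."""
    marker = 'part%s%s' % (my_uuid[:8], image_name)
    if image_name in ssp.lus:
        return 'already-present'
    ssp.lus.add(marker)
    markers = sorted(lu for lu in ssp.lus
                     if lu.startswith('part') and lu.endswith(image_name))
    if markers[0] == marker:
        # We won: do the upload, then remove our marker to release waiters.
        do_upload(ssp, image_name)
        ssp.lus.discard(marker)
        return 'uploaded'
    # Lost: drop our marker and wait for the winner's marker to vanish.
    # If the winner died without cleaning up, this spins until timeout --
    # exactly the CI hang described above.
    ssp.lus.discard(marker)
    deadline = time.time() + timeout
    while any(lu.startswith('part') and lu.endswith(image_name)
              for lu in ssp.lus):
        if time.time() > deadline:
            raise RuntimeError('marker LU never cleared - stale marker?')
        time.sleep(0.01)
    return 'waited'
```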
efried | esberglu That LU scrub loop finished. There's still a bunch of 0.1GB ones that are in use. I think some of them may actually have duplicate names, which is funky. | 18:20 |
efried | yup | 18:21 |
efried | name=boot_pvm11_tempest_Del_96584eda,udid=278b4f23be6ca211e78003000040f2e95d28b49d00052918a1578db40261a7cd99 | 18:21 |
efried | name=boot_pvm11_tempest_Del_96584eda,udid=278b4f23be6ca211e78003000040f2e95d7d974ba05a0214f53da7e1e553c8ffb3 | 18:21 |
efried | Rare, but (clearly) not impossible. | 18:21 |
esberglu | efried: What if we had a cron job that recorded the marker lus. And if the same one marker LU is around for too long assume it's no good and delete it | 18:21 |
efried | that would be okay. | 18:22 |
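[Editor's note: esberglu's cron idea could look something like this. A sketch only: the function, file path, and threshold are hypothetical, and `current_markers` would come from whatever pvmctl/pypowervm call lists the marker LUs.]

```python
import json
import os
import time

STATE_FILE = '/tmp/marker_lu_state.json'  # where the cron job remembers what it saw
MAX_AGE = 2 * 3600                        # seconds a marker may linger before we call it stale

def reap_stale_markers(current_markers, now=None,
                       state_file=STATE_FILE, max_age=MAX_AGE):
    """Return the marker LU names that have lingered longer than max_age.

    First-seen timestamps persist between cron runs in state_file; markers
    that disappeared on their own are forgotten."""
    now = time.time() if now is None else now
    seen = {}
    if os.path.exists(state_file):
        with open(state_file) as f:
            seen = json.load(f)
    # Record first-seen time for new markers; drop vanished ones.
    seen = {m: seen.get(m, now) for m in current_markers}
    with open(state_file, 'w') as f:
        json.dump(seen, f)
    return [m for m, first in seen.items() if now - first > max_age]
```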
esberglu | efried: Anything potentially bad that could come of the duplication? | 18:22 |
efried | nah, not really. Pretty sure we use udids anywhere that matters. | 18:23 |
efried | I manually deleted both of those guys. | 18:23 |
efried | Those 100MB images are backed to one called base_os | 18:27 |
efried | Do we still have a base_os image that's 100MB?? | 18:27 |
edmondsw | those dup names aren't for marker LUs, though, are they? Cause that would be bad | 18:27 |
edmondsw | (see, I read what you sent efried) | 18:27 |
efried | edmondsw No, they're for reglr images. | 18:27 |
edmondsw | coo | 18:27 |
esberglu | efried: We still use the 100mb zeros one | 18:28 |
edmondsw | didn't look like the example was a marker | 18:28 |
efried | A scintillating read, isn't it edmondsw ? | 18:28 |
edmondsw | I was so amped! | 18:28 |
edmondsw | what does a marker LU's name look like? | 18:29 |
efried | IIRC, it's the same as the image being uploaded, prefixed with 'part{udid[:8]}' or similar. | 18:30 |
efried | sorry, uuid, not udid | 18:30 |
efried | The uuid bit is used to break ties if multiple threads try to do the same upload at once. | 18:30 |
edmondsw | yep, that was in the doc | 18:31 |
edmondsw | the tie-breaking bit | 18:31 |
efried | Where, theoretically, you could get collisions. And that could indeed be bad-ish. But actually I think the result would just be more than one of the same image LU. | 18:31 |
efried | In any case, I'm willing to take that bug report. Should be an almost-never kind of thing. | 18:31 |
edmondsw | yeah, I guess it would just mean 2 threads thinking they won the tiebreaker... | 18:33 |
efried | right. But only if there's not a *third* thread with a lower uuid at the same time :) | 18:33 |
edmondsw | I was thinking it might mean no thread thought they won... so everyone would wait forever... but that seems doubtful | 18:33 |
edmondsw | I trust it was written better than that | 18:34 |
efried | edmondsw You can look at the code if you want to trace that case. I'd be interested to know what happens there. | 18:34 |
*** k0da has quit IRC | 18:34 | |
edmondsw | nah, higher priorities atm | 18:34 |
efried | And "written better" is relative. I doubt we spent much time on conditions like "what happens if the first 32 bits of two UUIDs collide". | 18:35 |
edmondsw | lol | 18:35 |
efried | And I know for sure that we explicitly discounted full UUID collisions. | 18:35 |
efried | ...as "sure, let it fail". | 18:35 |
efried | edmondsw https://github.com/powervm/pypowervm/blob/master/pypowervm/tasks/cluster_ssp.py#L155 in case priorities change, or you can't help yourself :) | 18:37 |
edmondsw | tempter... | 18:37 |
efried | edmondsw Yes, it looks like we'll actually upload the same image twice. | 18:39 |
efried | ...if the markers conflict, and they're the lowest-sorting. | 18:39 |
edmondsw | yep, just reached the same conclusion | 18:40 |
efried | Keep in mind that this image LU upload only happens once per SSP per image (which is kinda the whole point). So that's rare to begin with. And at that time, multiple threads - which are almost certainly on separate servers - would have to be trying to upload (presumably separate copies of) the same image at close enough to the same time that they both manage to create marker LUs. And then, as if that wasn't rare enough, | 18:41 |
efried | they would both have to generate UUIDs whose first 32 bits are the same. | 18:41 |
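[Editor's note: for a sense of how rare the prefix collision efried describes is, a quick birthday-problem estimate, under the assumption that the 8-hex-character marker prefix behaves like a uniformly random 32-bit value:]

```python
import math

def prefix_collision_prob(n_threads, bits=32):
    """Birthday-problem estimate: probability that any two of n_threads
    randomly chosen `bits`-bit marker prefixes collide:
    P ~= 1 - exp(-n(n-1) / (2 * 2**bits))."""
    space = 2.0 ** bits
    return 1.0 - math.exp(-n_threads * (n_threads - 1) / (2.0 * space))

# Even with 100 simultaneous uploaders of the same image, the chance of
# any two prefixes colliding is roughly one in a million.
print('%.2e' % prefix_collision_prob(100))
```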
*** chhavi has quit IRC | 19:11 | |
*** k0da has joined #openstack-powervm | 19:35 | |
*** miltonm has quit IRC | 19:45 | |
*** miltonm has joined #openstack-powervm | 19:59 | |
edmondsw | efried can you put something about PCI Passthru on the PTG etherpad? https://etherpad.openstack.org/p/nova-ptg-queens | 20:03 |
efried | edmondsw What would I put there? | 20:03 |
edmondsw | think it's too early? | 20:03 |
edmondsw | I thought you might have a better sense for what to put there than I do yet | 20:04 |
efried | edmondsw Well, looking at the way this is just a dump right now, I wouldn't feel terrible about it. | 20:04 |
edmondsw | we can obviously edit as the PTG nears | 20:04 |
efried | Wonder if I should put it under Scheduler/Placement. | 20:05 |
efried | edmondsw Okay, done. See L38-40. | 20:06 |
efried | yah | 20:06 |
edmondsw | tx | 20:06 |
efried | not yet sure if thorst is going | 20:07 |
*** apearson has quit IRC | 20:36 | |
*** apearson has joined #openstack-powervm | 20:37 | |
*** apearson has quit IRC | 20:37 | |
*** smatzek has quit IRC | 20:40 | |
*** thorst has quit IRC | 20:50 | |
*** svenkat has quit IRC | 21:00 | |
*** esberglu has quit IRC | 21:05 | |
*** esberglu has joined #openstack-powervm | 21:29 | |
*** thorst has joined #openstack-powervm | 21:32 | |
*** thorst has quit IRC | 21:37 | |
*** kylek3h has quit IRC | 21:54 | |
*** kylek3h has joined #openstack-powervm | 21:55 | |
*** edmondsw has quit IRC | 21:58 | |
*** kylek3h has quit IRC | 21:59 | |
*** thorst has joined #openstack-powervm | 22:25 | |
*** thorst has quit IRC | 22:26 | |
*** esberglu has quit IRC | 22:34 | |
*** k0da has quit IRC | 22:43 | |
*** edmondsw has joined #openstack-powervm | 23:06 | |
*** edmondsw has quit IRC | 23:11 | |
*** tjakobs_ has quit IRC | 23:22 | |
*** esberglu has joined #openstack-powervm | 23:23 | |
*** thorst has joined #openstack-powervm | 23:27 | |
*** esberglu has quit IRC | 23:28 | |
*** thorst has quit IRC | 23:32 | |
*** efried is now known as efried_zzz | 23:35 | |
*** thorst has joined #openstack-powervm | 23:50 | |
*** thorst has quit IRC | 23:50 |
Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!