20:01:38 #startmeeting diskimage-builder
20:01:39 Meeting started Thu Oct 13 20:01:38 2016 UTC and is due to finish in 60 minutes. The chair is greghaynes. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:01:40 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:01:42 The meeting name has been set to 'diskimage_builder'
20:01:54 cool, so theres an agenda \O/
20:02:05 #topic images for CI
20:02:16 ianw: You probably know what this is about
20:02:42 so i started looking into this, as image download is one of the big failure points
20:02:50 but guess what, it's harder than i thought :)
20:03:02 ah, so its about the images we use in our CI
20:03:06 fun
20:03:15 of course we could just keep a static list of images to pre-download ... but that's going to go stale quickly
20:03:32 mm
20:03:32 We cant have a set of env vars that override where we grab images from?
20:03:34 but then i didn't really come up with a clean way to extract that info automatically, since everyone does it a bit different
20:04:03 Is there an example image that is failing?
20:04:16 fedora one is very unreliable
20:04:18 Im thinking that most of the image-based distros should support something like a DIB_{DISTRO}_BASEIMAGE_URL
20:04:24 and then we set them in CI
20:04:41 so that bit's fine, but how do we figure out what images to cache?
20:04:54 So, I have been meaning to get up a fedora mirror, like we have for centos-7. I wonder if that would get images that are needed?
20:05:13 pabelanger: it's the cloud images, so separate
20:05:19 URL?
20:05:31 ah. I think that will have to be a separate thing.
I think we want to cache all the ones we use in CI (sorry super vague) but thatll end up being ubuntu/fedora/debian cloud images
20:05:34 in devstack, we have the ability to get a canonical list of image urls to download
20:05:42 so a cron in infra that updates a url
20:06:04 tools/image_list.sh
20:06:18 oh, neat
20:06:44 my first thought is that we make each element add another file of "images required for this element"
20:07:02 the only problem with going that route is our images are about 21GB uncompressed last I looked. A lot of this is git repos (I actually need to see about not caching the debian repos that are a copy of everything) but those cloud images are fairly big too
20:07:03 but ... most of the elements downloading images actually do it with a bunch of variable expansion in the name
20:07:06 right, we can make source-repositories work for that
20:07:17 Ya, was going to say, the devstack image cache doesn't scale well
20:07:38 clarkb: yeah ... i'm thinking putting them on mirrors
20:07:43 clarkb: oh were talking about the upstream images - so the ubuntu cloudimage for example
20:07:44 but i still need a mirror-update type script
20:08:08 clarkb: or are you saying our images are already too large to add stuff in to?
20:08:09 greghaynes: ya but you are talking about caching them somewhere right? just pointing out the devstack method isn't so great
20:08:12 it wouldn't be too hard to update the current rsync scripts we use for centos I think
20:08:17 putting them on the mirrors would be great
20:08:20 ah, IMO we'd want to put them on somewhere like tarballs.o.o
20:08:22 yea
20:08:37 clarkb: yeah, for this, having them on local mirrors would be fine
20:08:49 that's what i meant by "caching" i guess :)
20:08:52 gotcha ++
20:09:01 clarkb: so is it preferred that theres a static list of images to keep on the mirrors or for a script to call out to dib that provides infra with the list of images?
20:09:19 one potential issue I could see with the latter is I imagine infra might want to have some control over what it mirrors
20:09:27 greghaynes: it would have to be a mirror-update type script ... how we do it doesn't matter so much
20:09:41 I think its fine to let dib manage it until that stops working
20:09:41 python script/shell script, etc
20:09:46 ok
20:10:25 ianw: so how about short term we can just statically print out a list from a script in dib (tests/get-test-images.sh or something, similar to devstack)
20:10:33 and then that gives us the interface to make a proper solution
20:10:42 while unwedging our CI in the meantime
20:10:51 greghaynes: yes, that's my thought ... but how do we get the list?
20:10:57 just hard code it
20:11:08 we do have a static list of images we use for a full test run
20:11:24 we do? i mean, that's the important bit
20:12:16 so im thinking that well need to make e.g. fedora use the mirrored image. We'll have to set an env var at a minimum for the mirrored url and at the same time add an image url to the image list script
20:12:33 ok, you're right anyway, i've spent too much time trying to make a fancy generic solution ... something is going to be better than nothing
20:13:05 i can take on making a script to get the image list, and then getting something to put it on an infra mirror
20:13:11 Yea, I think extending source-repositories might end up being the correct long term way but its going to be a lot of work I think
20:13:15 sweet
20:13:43 ok, on the infra side, if we make it a sort of sensible interface, other projects that require images they download could hook into it
20:13:54 ++
20:14:08 is https://muug.ca/mirror/fedora/linux/releases/24/CloudImages/x86_64/images/ the images in question?
20:14:22 for example, if we knew to run "tools/get-images.sh" in a repo, then we could do that on all interested repos, "uniq" the list and then download that
20:14:29 pabelanger: that looks right.
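The "statically print out a list from a script in dib" idea above, combined with the earlier DIB_{DISTRO}_BASEIMAGE_URL proposal, might look something like this minimal sketch. The script name follows the suggestion in the discussion, and the default URLs are illustrative examples (only the fedora one appears in the log), not the real CI list:

```shell
#!/bin/bash
# Hypothetical tests/get-test-images.sh: print the canonical list of upstream
# cloud images the dib CI jobs download, one URL per line, so infra can mirror
# them with a cron-style mirror-update script. Each default can be overridden
# with a DIB_{DISTRO}_BASEIMAGE_URL env var, as proposed in the meeting.

get_test_images() {
    # fedora cloud image (mirror seen in the discussion; exact file is a guess)
    echo "${DIB_FEDORA_BASEIMAGE_URL:-https://muug.ca/mirror/fedora/linux/releases/24/CloudImages/x86_64/images/Fedora-Cloud-Base-24-1.2.x86_64.qcow2}"
    # ubuntu and debian entries are purely illustrative placeholders
    echo "${DIB_UBUNTU_BASEIMAGE_URL:-https://cloud-images.ubuntu.com/xenial/current/xenial-server-cloudimg-amd64-root.tar.gz}"
    echo "${DIB_DEBIAN_BASEIMAGE_URL:-https://cdimage.debian.org/cdimage/openstack/current/debian-8-openstack-amd64.qcow2}"
}

get_test_images
```

Infra could then consume this by running the script in each interested repo, concatenating and uniq-ing the output, and fetching everything onto the local mirrors, which matches the interface ianw describes a few lines later.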
we have a url like that for fedora, ubuntu, and debian
20:14:52 ianw: SGTM
20:14:54 okay, so if we mirror fedora, they would get pulled in.
20:15:01 ianw: I can help with the mirror script
20:15:07 pabelanger: yep, fedora mirror bounces you around ... there's one in .nl that is particularly bad
20:15:36 http://git.openstack.org/cgit/openstack-infra/system-config/tree/modules/openstack_project/files/mirror/centos-mirror-update.sh
20:15:42 could be updated for fedora too
20:15:44 or copied
20:16:06 ok, cool. yeah, pkg mirrors are the next thing to figure out after these images
20:16:12 they also fail a lot
20:16:23 right, that would be huge
20:16:33 Ya, been on the list for a while
20:16:39 i think we decided "one var to control mirror location" was a bit of an anti-pattern, because each build type has different mirrors at different points
20:16:52 I started work on a patch for how we could set those mirrors but never finished it
20:16:58 right
20:17:04 so i feel like we can tackle that on an element-by-element (or build-by-build) type approach
20:17:10 ++
20:17:44 ok, anything else on that topic?
20:17:54 action items?
20:18:10 #action ianw to make static list of images to download
20:18:20 can you make actions as a non-chair?
20:18:40 #action ianw to make static list of images to download
20:18:43 in case you cant
20:19:06 #action pabelanger to help with infra script to mirror images list
20:19:23 #topic ansible patch
20:19:28 pabelanger: ^
20:19:36 or is this for ianw
20:19:41 either is fine
20:19:51 yeah, i see we've discussed it in #dib
20:19:56 #link https://review.openstack.org/#/c/385608/ is the patch in question
20:20:09 my thoughts were pretty much in my comment ... around how we make it more integrated
20:20:38 i have yet to review but i like the idea of eventually generalizing things
20:20:46 ianw: yea, so I think this should work fine in an element
20:20:48 So, I could go both ways, how it exists today.
Or maybe implement some sort of generic hook to run something outside the chroot
20:21:06 ianw: basically we'd have to make a way to write a 'runner plugin' that lets you run plays in the chroot
20:21:13 but that seems doable
20:22:02 it's also, i guess, fair enough that you want to use an existing playbook and just apply it
20:22:41 that's kind of analogous to the infra puppet element
20:22:42 for that case I think you could do this as an element, where I think itll get tricky is when you only want to run the playbook at a certain point in the build
20:23:15 e.g. you could use extra-data.d or cleanup.d to do this pretty easily, but if you want to run during install.d theres no way to do that
20:23:39 Right, the idea would be to take existing playbooks and just apply them. Outside of building up a dib element to run them
20:23:48 hrm, and actually it looks like this just runs at finalise.d time basically
20:23:51 but ya, I haven't tested extra-data.d yet
20:23:54 something I want to try
20:24:41 so IMO I think we should try and nail down the interface we want for this more than anything else
20:25:00 and as for how it gets implemented its not the biggest deal
20:25:06 +1
20:25:08 yep, if we're happy having ansible run in just one phase, then i think it can be done in an element in finalise.d?
20:25:31 if we want anyone to be able to drop an ansible playbook in as basically a peer to a script in an element ... that involves a new runner
20:25:32 for the interface I think we want what ianw was mentioning - something in elements. So im thinking element/ansible/finalise.d...
20:25:56 yeah, that would be basically an ansible peer to dib-run-parts then?
20:26:01 yep
20:26:09 quick question
20:26:26 this means we can still run ansible-playbook outside chroot right?
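The pattern the answers below converge on — ansible-playbook runs on the build host and reaches into the image via ansible's chroot connection plugin, so ansible itself never has to be installed into the image — can be sketched roughly like this. The paths are placeholders, and the command is only echoed here as a dry run rather than executed:

```shell
#!/bin/bash
# Sketch: invoke ansible against the mounted image tree from outside it.
# TMP_MOUNT_PATH stands in for wherever disk-image-create has the image
# mounted during the build; the value here is a placeholder. The trailing
# comma after the path turns it into a one-entry inline ansible inventory,
# and "-c chroot" tells ansible to treat that "host" as a chroot directory
# instead of connecting over ssh.

TMP_MOUNT_PATH=${TMP_MOUNT_PATH:-/tmp/image.XXXX/mnt}

run_playbook_in_chroot() {
    local playbook=$1
    # Echoed rather than executed so the sketch is safe to run anywhere.
    echo ansible-playbook -c chroot -i "${TMP_MOUNT_PATH}," "$playbook"
}

run_playbook_in_chroot site.yml
```

This is the same `ansible_connection=chroot` idea named explicitly a few messages later, which is what lets finalise.d-style plays affect the chroot while ansible stays on the host.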
20:26:41 when I hear elements, I think inside chroot
20:27:03 well, it's more what phases of the elements run inside or outside
20:27:18 pabelanger: yes, youll probably have to implement it similar to how you did (by hacking on the actual dib script) but from there you can just run ansible outside the chroot
20:27:41 pabelanger: http://docs.openstack.org/developer/diskimage-builder/developer/developing_elements.html#phase-subdirectories <- goes through which ones are inside & out
20:27:51 okay
20:28:17 i'm not really sure it is that helpful to run ansible outside the chroot?
20:28:20 Yea, and I think finalise.d is in chroot, so youll want to have diskimage-builder run ansible from outside the chroot for plays that run inside the chroot
20:28:30 ianw: we don't want ansible installed onto the image
20:28:35 exactly
20:28:40 that is the reason for using ansible_connection=chroot
20:29:09 So, I am not sure finalise.d would work, reading the docs
20:29:11 oh sure, i mean that i'm not sure that you want to use ansible to accomplish things outside the chroot, like cleaning up local filesystem and stuff?
20:29:30 maybe you would, instead of scripts i guess
20:29:32 1 sec
20:29:32 ianw: Yea, I think for now we just leave the outside-chroot stuff unimplemented
20:29:36 ianw: and if someone wants it they can implement it
20:29:52 http://paste.openstack.org/raw/585654/
20:29:57 is an example of what I'd like to do
20:30:24 pabelanger: what im thinking is basically rather than use a single var for the playbook in your patch, loop over all playbooks in elements/*/ansible/finalise.d/*
20:30:25 which runs ansible on the host, against the chroot, before DIB compresses and saves it
20:30:31 but otherwise do it exactly like your patch is
20:30:58 greghaynes: sure, if we can do that using ansible_connection=chroot, I'm happy with that
20:31:02 yep
20:31:08 kk
20:31:47 that sounds quite reasonable
20:31:47 there will be an asterisk of 'ansible finalise.d plays run after all normal finalise.d plays' but I think that should be fine for a first iteration
20:32:08 on the 2.0 branch, we have nice vars for walking the element list and collecting those playbooks
20:32:53 also, i think ansible becomes a requirements.txt
20:32:54 Yea, probably a good idea to use that
20:33:00 hrm
20:33:04 and documentation :)
20:33:11 thats a good question
20:33:30 the problem I have with ansible in requirements is they dont support py3k
20:33:46 that's being worked on
20:33:58 Yea, but we work on py3k now
20:33:58 but it's certainly not something that will be available immediately
20:34:16 hmm
20:34:22 there is the pbr optional extras thing
20:34:32 Yea, I think thats how we'll need to do it
20:34:44 bifrost doesn't do ansible in requirements, we bootstrap it separately
20:34:46 that sounds good
20:34:53 okay, so I'm happy to write code for this, if people want to mentor me
20:35:14 pabelanger: our operators are standing by at 1-800-#openstack-dib
20:35:17 hehe
20:35:24 neat
20:35:40 Yea I can offer some tips although im pretty oversubscribed ATM
20:35:47 yeah, pabelanger i can point you in the right direction i think
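The "pbr optional extras thing" mentioned above would keep ansible out of requirements.txt (and so out of the py3k story) by declaring it as a setuptools extra in setup.cfg, which pbr knows how to read. A sketch, with the version pin purely illustrative:

```ini
# setup.cfg fragment (sketch): pbr turns entries in an [extras] section into
# setuptools extras, so ansible is only installed when explicitly requested.
[extras]
ansible =
  ansible>=2.0
```

Users wanting the ansible element would then install with something like `pip install diskimage-builder[ansible]`, while the default install stays ansible-free.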
awesome
20:35:53 neato
20:35:54 dib is also being looked at for verifying roles in galaxy fwiw
20:35:59 \O/
20:36:20 Ya, if we can solve this issue, I think more people are lined up to start using DIB
20:36:49 Yep, its actually pretty straightforward I think
20:36:52 ok
20:37:00 #topic Bugs process
20:37:24 I dont have a ton here other than - I went and closed some more bugs and if anyone could help out closing some of the obviously super stale bugs we have itd be helpful
20:37:39 theres some that are pretty useful very old ideas
20:37:41 thanks for that
20:37:43 so ive been leaving them
20:38:03 yeah with those if we want to keep them i guess mark as wishlist
20:38:10 ++
20:38:32 Theres a question about adding a bugs section to the meeting
20:38:34 thoughts?
20:39:01 probably more useful for talking about newly filed things
20:39:09 if we have nothing else on i guess ...
20:39:13 and/or ongoing important issues
20:39:31 Yea, triaging incoming bugs is something thatd be useful for
20:39:47 just so people dont feel like they are throwing bugs into the abyss
20:39:55 that's a really good point
20:40:07 ok
20:40:22 #action greghaynes to add section for incoming bugs to meeting agendas
20:40:39 a bugs section would also be useful in case someone wanted to include one in the agenda to bring up here
20:40:45 ah yea
20:41:14 maybe we should just stop writing bugs
20:41:27 #topic Dib 2.0
20:41:33 ok, the exciting bit
20:41:41 \o/
20:41:45 ianw: you probably know the state of this best
20:42:05 #link https://review.openstack.org/#/q/project:openstack/diskimage-builder+branch:feature/v2
20:42:13 so i think tripleo is legitimately unhappy, i haven't got to the bottom of that
20:42:22 uhoh
20:42:41 you up for taking on poking them?
20:42:48 about v2?
20:42:54 Yea, their CI is failing
20:42:57 yeah, it's on my todo list :)
20:42:58 ah right
20:43:16 #action ianw to work on figuring out why v2 is failing tripleo
20:43:20 it's not the most reliable CI, so i tend to turn a blind eye, but i think it reliably fails
20:43:32 gotcha
20:43:34 i've held off merging the huge move-everything patch
20:43:47 i need to try the symlink stuff we talked about
20:43:56 +1
20:44:15 ok. I think the end of that convo was that it actually isnt a big deal to break compat there - folks shouldnt be setting our elements dir in their ELEMENTS_PATH anyhow
20:44:15 i have been out on pto since last time we talked about it so i'll claim that as some excuse :)
20:44:47 yep, the only thing is minimising my pain when i merge master into the branch
20:44:56 greghaynes: we should probably add that to the docs explicitly because i've definitely seen people doing that anyway
20:45:05 i might do another master merge to v2 at some point soon, just to pull in latest stuff
20:45:09 cinerama: yesplz
20:45:20 ianw: ah ok
20:45:33 ianw: master merge sounds good
20:45:35 the one other thing, related to what cinerama just said - I want to go through and do a docs refresh before v2
20:45:47 since it seems like docs are one of the places people will likely look to figure out what is new
20:46:04 so, *maybe* at the next meeting we'd be able to do a -rc release
20:46:29 a lot of greghaynes deprecation stuff is in but just has to pass CI
20:46:36 ++, if we can get the moving elements and deprecation removal stuff merged I'd be up for a -rc
20:46:38 it'd be good if we sit down and read the docs through with noob hats on (or even ask others to) to see what's missing
20:46:56 cinerama: Yea, thats basically what I started trying to do.
There were a ton of low-hanging bugs
20:47:07 * bkero still pretty noob, feel free to point me at docs
20:47:09 i should have more review bandwidth soonish
20:47:23 something i've wanted to figure out was how to have the list of elements put a summary line next to each one, maybe put them in a table
20:47:24 bkero: yesplz, http://docs.openstack.org/developer/diskimage-builder/
20:47:32 oh the live docs, I thought you meant things in-review
20:47:35 i'm sure it's possible with enough sphinx magic
20:47:35 I can go through anyway
20:47:46 ianw: yea, I really need to find someone who is a sphinx guru
20:47:59 ianw: one other thing thatd be huge is if we could read element-deps into the docs and make links for them
20:48:05 bkero: i think it would be good to go over it holistically, i know with doc updates it can be easy to go about things piecemeal and miss important stuff
20:48:20 greghaynes: +1 element doc links
20:48:25 greghaynes: i'm sure i have some hacked up scripts to make .dot files of elements
20:48:30 i did it for all the infra ones
20:48:34 oh nice
20:48:36 nice
20:48:36 ianw: cool!
20:48:45 cinerama: I haven't read them before, so I won't have any potentially deprecated knowledge of them :)
20:48:54 so yea, lets shoot for an -rc at around next meeting time?
20:49:08 yep, i think that's realistic
20:49:13 +1
20:49:17 #action review v2 changes and try to get last bits in for a first rc at next meeting
20:49:47 #topic open discussion
20:50:03 Anything folks want to bring up?
20:50:41 welp, I think we get about 10mins back
20:50:43 wooo
20:50:50 i'm really excited about the ansiblefest stuff that happened
20:50:53 oh?
20:50:56 i think it will be really good for the project
20:51:09 just sounds like we'll potentially have more users & involvement
20:51:10 I dont know what happened
20:51:13 oh awesome
20:51:19 yea if they are interested thatd be huge
20:51:32 I'm good
20:51:37 we specifically got brought up with regard to role verification for galaxy
20:51:50 cinerama: was there something written down somewhere I could check out?
20:51:56 I wouldnt mind chatting with folks about that
20:52:14 greghaynes: i don't think we had an etherpad on that one :( i do have some super sketchy stuff i wrote down that i can pass along
20:52:20 ah, ok
20:52:21 ty
20:52:30 alright, o/ everyone
20:52:32 ty for meeting
20:52:34 #endmeeting