Thursday, 2011-10-13

*** littleidea has quit IRC00:09
*** littleidea has joined #openstack-meeting00:10
*** jaypipes has quit IRC00:14
*** reed has quit IRC00:18
*** dwalleck has quit IRC00:27
*** dwalleck has joined #openstack-meeting00:32
*** donaldngo_hp has quit IRC00:33
*** dwalleck has quit IRC00:36
*** vladimir3p_ has quit IRC00:49
*** novas0x2a|laptop has quit IRC00:49
*** nati2 has quit IRC00:53
*** littleidea has quit IRC00:58
*** dwalleck has joined #openstack-meeting01:01
*** adjohn has quit IRC01:05
*** adjohn has joined #openstack-meeting01:07
*** nati2 has joined #openstack-meeting01:18
*** dragondm has quit IRC01:27
*** nati2 has quit IRC01:30
*** dwalleck has quit IRC01:31
*** dwalleck has joined #openstack-meeting01:38
*** hggdh has quit IRC01:49
*** hggdh has joined #openstack-meeting01:50
*** adjohn has quit IRC01:53
*** mattray has joined #openstack-meeting01:54
*** mattray has quit IRC01:55
*** adjohn has joined #openstack-meeting01:57
*** adjohn has quit IRC02:03
*** adjohn has joined #openstack-meeting02:05
*** bengrue has quit IRC02:06
*** HowardRoark has quit IRC02:08
*** shang has quit IRC02:10
*** adjohn has quit IRC02:17
*** med_out is now known as medberry02:18
*** dwalleck_ has joined #openstack-meeting02:37
*** dwalleck has quit IRC02:41
*** HowardRoark has joined #openstack-meeting02:43
*** martine has joined #openstack-meeting03:02
*** shang has joined #openstack-meeting03:13
*** tsuzuki_ has joined #openstack-meeting03:13
*** medberry is now known as med_out03:19
*** zns has joined #openstack-meeting03:22
*** dwalleck_ has quit IRC03:57
*** dwalleck has joined #openstack-meeting03:57
*** HowardRoark has quit IRC03:59
*** dwalleck_ has joined #openstack-meeting04:02
*** martine has quit IRC04:03
*** dwalleck_ has quit IRC04:04
*** troytoman-away is now known as troytoman04:04
*** dwalleck_ has joined #openstack-meeting04:04
*** dwalleck has quit IRC04:05
*** troytoman is now known as troytoman-away04:05
*** zns has quit IRC04:07
*** dwalleck has joined #openstack-meeting04:19
*** dwalleck_ has quit IRC04:23
*** dendrobates is now known as dendro-afk04:29
*** dendro-afk is now known as dendrobates04:32
*** dwalleck has quit IRC04:41
*** dwalleck has joined #openstack-meeting04:41
*** dwalleck_ has joined #openstack-meeting04:47
*** dwalleck has quit IRC04:49
*** adjohn has joined #openstack-meeting04:51
*** adjohn has quit IRC05:00
*** deshantm has quit IRC05:35
*** deshantm has joined #openstack-meeting05:39
*** adjohn has joined #openstack-meeting05:40
*** dwalleck_ has quit IRC05:43
*** dwalleck has joined #openstack-meeting05:44
*** adjohn has quit IRC06:07
*** dwalleck_ has joined #openstack-meeting06:38
*** dwalleck has quit IRC06:41
*** dwalleck has joined #openstack-meeting06:43
*** ranjang has joined #openstack-meeting09:05
*** darraghb has joined #openstack-meeting09:12
*** ranjang has quit IRC09:25
*** tsuzuki_ has quit IRC11:38
*** dwalleck has quit IRC11:47
*** dendrobates is now known as dendro-afk12:01
*** dendro-afk is now known as dendrobates12:04
*** tsuzuki_ has joined #openstack-meeting12:38
*** med_out is now known as medberry12:42
*** jaypipes has joined #openstack-meeting12:44
*** martine has joined #openstack-meeting12:49
*** blamar has joined #openstack-meeting12:54
*** jaypipes has quit IRC12:55
*** tsuzuki_ has quit IRC13:01
*** dendrobates is now known as dendro-afk13:04
*** mattray has joined #openstack-meeting13:10
*** joesavak has joined #openstack-meeting13:13
*** jsavak has joined #openstack-meeting13:20
*** jsavak has quit IRC13:22
*** joesavak has quit IRC13:23
*** dendro-afk is now known as dendrobates13:34
*** hggdh has quit IRC13:54
*** hggdh has joined #openstack-meeting14:02
*** notmyname has joined #openstack-meeting14:32
*** deshantm has quit IRC14:34
*** reed has joined #openstack-meeting14:39
*** deshantm has joined #openstack-meeting14:47
*** adjohn has joined #openstack-meeting14:53
*** rnirmal has joined #openstack-meeting14:57
*** HowardRoark has joined #openstack-meeting15:00
*** dragondm has joined #openstack-meeting15:04
*** dendrobates is now known as dendro-afk15:36
*** dendro-afk is now known as dendrobates15:39
*** dendrobates is now known as dendro-afk15:57
*** HowardRo_ has joined #openstack-meeting16:06
*** HowardRoark has quit IRC16:06
*** martine_ has joined #openstack-meeting16:09
*** martine has quit IRC16:09
*** jakedahn has joined #openstack-meeting16:24
*** creiht has joined #openstack-meeting16:24
*** anotherjesse has joined #openstack-meeting16:27
*** jdg has joined #openstack-meeting16:35
*** jaypipes has joined #openstack-meeting16:35
*** HowardRo_ has quit IRC16:46
*** jakedahn has quit IRC16:51
*** HowardRoark has joined #openstack-meeting16:54
*** dendro-afk is now known as dendrobates17:04
*** dendrobates is now known as dendro-afk17:07
*** novas0x2a|laptop has joined #openstack-meeting17:14
*** vladimir3p has joined #openstack-meeting17:22
*** ogelbukh has joined #openstack-meeting17:23
*** jakedahn has joined #openstack-meeting17:39
*** vladimir3p has quit IRC18:02
*** vladimir3p_ has joined #openstack-meeting18:04
*** jaypipes has quit IRC18:09
*** jaypipes has joined #openstack-meeting18:24
*** jk0 has joined #openstack-meeting18:26
*** jakedahn has quit IRC18:37
*** jakedahn has joined #openstack-meeting18:38
*** jakedahn has quit IRC18:42
*** darraghb has quit IRC18:46
*** jdg has quit IRC18:46
*** renuka has joined #openstack-meeting19:04
*** df_ has joined #openstack-meeting19:05
*** df_ is now known as df11119:05
*** df111 is now known as df119:05
*** jdg has joined #openstack-meeting19:27
*** jdg has left #openstack-meeting19:28
*** dwalleck has joined #openstack-meeting19:34
*** df1 has quit IRC19:35
*** jaypipes has quit IRC19:38
*** jaypipes has joined #openstack-meeting19:39
*** clayg has joined #openstack-meeting19:49
*** mirrorbox has joined #openstack-meeting20:01
<renuka> hi Vish, could we start the nova volume meeting?  20:01
<vishy> renuka, can you try startmeeting?  20:02
<vishy> i don't know if you have to be an admin  20:02
<vishy> if it doesn't work, I will start it  20:02
<renuka> I think I do... you'd probably have to do that  20:02
<vishy> try typing #startmeeting  20:02
<vishy> lets see what happens :)  20:02
<openstack> Meeting started Thu Oct 13 20:02:56 2011 UTC.  The chair is renuka. Information about MeetBot at
<openstack> Useful Commands: #action #agreed #help #info #idea #link #topic.  20:02
<vishy> yay, I thought that might work :)  20:03
<vladimir3p_> cool :-)  20:03
<renuka> Right, I tried to change the topic before, but that didn't work  20:03
<vishy> you should be able to now with #topic  20:03
<vishy> anyone can do #info #idea and #link  20:03
<vishy> not sure about action  20:04
*** jdg has joined #openstack-meeting20:04
<renuka> So going by the agenda, we could let Vladimir start, since he has a concrete list of what he would like to bring up  20:04
<vishy> renuka is the agenda in the wiki?  20:05
<vladimir3p_> ok, not really concrete  20:05
<vishy> if not i will add it  20:05
*** pandemicsyn has joined #openstack-meeting20:05
<renuka> no I haven't added it... just in the email  20:05
<vladimir3p_> so, starting with our points. some history:  20:05
<vladimir3p_> we've implemented our own driver & scheduler that are using report_capabilities and reporting our special info  20:06
<vladimir3p_> in general I suppose it will be great if we could either standardize this info or make it completely up to vendor  20:06
<clayg> vladimir3p_: sidebar regarding the VSA impl in trunk - can I test it w/o ZadaraDriver?  20:06
<vladimir3p_> but on scheduler level it should recognize what type of info it is getting for volume  20:06
<vladimir3p_> nope :-)  20:07
<vladimir3p_> we are working on "generalizing" it  20:07
<clayg> other people could help if they could run it :-)  20:07
<clayg> maybe you can give me a trial license ;)  20:07
<vladimir3p_> yep, agree  20:07
<renuka> vladimir3p_: are the capabilities for volumes or storage backends?  20:07
*** hggdh has quit IRC20:08
*** dendro-afk is now known as dendrobates20:08
<vladimir3p_> for us - they are storage backend capab  20:08
<renuka> and are they dynamic? could you give an example  20:08
<vladimir3p_> there is a special package that we install on every node (where volume is running) and recognize what types of drives are there  20:08
<clayg> vladimir3p_: and also to confirm on caps - they are only aggregated in the scheduler, not dropped into db?  20:08
<vladimir3p_> yes, it is dynamically updated  20:08
<vladimir3p_> (only dynamic in sched)  20:09
<vishy> # link
<vladimir3p_> moreover, there is a small code for verification if they were changed or not (to avoid sending the same data over and over)  20:09
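The change-detection vladimir3p_ describes (skip the periodic publish when nothing changed) can be sketched roughly as below. `CapabilityReporter` and the `publish` callable are illustrative names invented for this sketch, not the actual Zadara or nova code:

```python
# Sketch of vladimir3p_'s change-detection idea: remember a digest of the
# last capabilities sent and skip the periodic publish when identical.
# All names here are hypothetical, not the real driver implementation.
import json

class CapabilityReporter:
    def __init__(self, publish):
        self._publish = publish      # callable that sends caps to the scheduler
        self._last_digest = None

    def report(self, capabilities):
        # Serialize with sorted keys so dict ordering can't cause spurious sends.
        digest = hash(json.dumps(capabilities, sort_keys=True))
        if digest == self._last_digest:
            return False             # unchanged: avoid resending the same data
        self._last_digest = digest
        self._publish(capabilities)
        return True
```

Calling `report()` from a periodic task then only hits the message bus when the backend's state actually changes.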
<clayg> vladimir3p_: I've read the blueprint a couple of times, and the impl deviated slightly because of some other depends - but in important ways... did docs ever get released?  20:09
*** hggdh has joined #openstack-meeting20:10
<vladimir3p_> docs - still in progress. we have some outdated ones  20:10
<vladimir3p_> anyway, I suppose VSA is not the main topic of this meeting :-)  20:10
<clayg> ok, sorry to grill you :D - zadara has made the most dramatic changes to volumes lately (no disrespect to vishy's latest fixes!)  20:10
<vladimir3p_> we would be glad to generalize this stuff or at least to use some of our concepts  20:10
<renuka> So (forgive my ignorance) what part of the capabilities is dynamic?  20:11
*** df1 has joined #openstack-meeting20:11
<vladimir3p_> hmm... everything ...  20:11
<clayg> vladimir3p_: honestly I think generalization of what we already have is the most important charter of this group starting out  20:11
<vladimir3p_> report capabilities goes to driver (periodic task) and checks what is there, what used/free/etc  20:11
<clayg> renuka: just the reporting of capabilities... the scheduler just reacts to the constant stream of data being fed up from the drivers  20:12
<clayg> change the storage backend and let it propagate up  20:12
<jdg> I would push towards the generalization/abstraction as vladimir mentions rather than worry much about details of Zadara (although I'm very interested)  20:12
<renuka> makes sense  20:12
<jdg> The question I have is what are you proposing in terms of the generalization?  20:13
<clayg> jdg: zadara is already all up in trunk, understanding what's there and how it falls short of greater needs seems relevant  20:13
<jdg> what does solidfire need?  20:13
<vladimir3p_> from the scheduler part - it is also very specific to Zadara today. The scheduler knows to correspond volume types to whatever reported by driver  20:13
* clayg goes to look at scheduler  20:14
<renuka> vladimir3p_: could you please link the blueprint here  20:14
<jdg> clayg: Very long list... :)  My main interest is getting subclasses written (my issue of course), and future possibilities of boot from vol  20:14
<clayg> jdg: the transport is ultimately iscsi, yes?  20:15
<vishy> i would suggest a simple general scheduler  20:15
<clayg> doesn't kvm already do boot from volume?  20:15
<vishy> i would guess many storage vendors need to get specific and change it.  20:15
* clayg has never tried it  20:15
<vishy> clayg: yes it does  20:15
<clayg> is hp here?  20:16
<vishy> a general volume type scheduler that does something dumb like match volume type to a single reported capability would be great  20:16
<vishy> and a more advanced one that does json matching and filtering as well, like the ones on the compute side  20:17
<vladimir3p_> vishy: yep, something like that. the question if we would like to have schedulers per volume type  20:17
<df1> vishy: agreed on the basic, and advanced sounds nice too  20:17
<renuka> how would this scheme react if we have restrictions about which hosts can access which storage?  20:17
<vishy> #idea A very basic volume type scheduler that can redirect to different backends based on a single reported capability called "type"  20:17
<df1> vladimir3p_: as in a meta-scheduler which chooses a scheduler based on the requested type and passes on to that?  20:18
<vishy> #idea A more advanced scheduler that does json filtering similar to the advanced schedulers in compute  20:18
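vishy's "very basic volume type scheduler" idea reduces to matching a requested type against a single reported capability. A minimal sketch, with invented names (nova's real scheduler interface differs):

```python
# Hypothetical sketch of the basic type scheduler #idea above: each volume
# host reports a single "type" capability; scheduling just means finding a
# host whose reported type matches the request.

def schedule_volume(requested_type, host_capabilities):
    """host_capabilities: mapping of host name -> {'type': ...}."""
    for host, caps in host_capabilities.items():
        if caps.get('type') == requested_type:
            return host
    raise ValueError('no volume host reports type %r' % requested_type)
```

The "more advanced" variant would replace the single equality test with a JSON-expression filter over the full capability dict, as the compute schedulers do.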
<vladimir3p_> df1: yep  20:18
<vishy> renuka: i would think that we should have a similar concept to host-aggregates for volumes  20:18
<df1> renuka: That would fall under the advanced scheduler - pass in arguments to say something about which sorts of volumes are ok  20:19
<vishy> renuka: i think it could use the same set of apis as proposed by armando  20:19
<renuka> vishy: right... that's what I was getting to  20:19
<renuka> so the scheduler would need the knowledge of host aggregates as well, then?  20:19
<vishy> renuka: you could implement it through capabilities, but it wouldn't be dynamically modifiable in that case  20:19
<df1> vishy, renuka: missed armando's proposal - what was it?  20:20
<clayg> link pls  20:20
<vladimir3p_> renuka: it also means that during volume creation you need to know associated instance  20:20
<vishy> renuka: yes the scheduler will need access.  I would think that the easy version would be to add the host-aggregate metadata in the capability reporting  20:20
<vishy> you beat me  20:20
<clayg> vladimir3p_: it may be sufficient to know project_id depending on how you're scheduling instances  20:20
<clayg> vishy, renuka: thnx for link  20:21
<renuka> vladimir3p_: good point... I am expecting this to be used most for things like boot from volume  20:22
<renuka> so I assumed there would be more control over where the volume got created when it did  20:22
<vladimir3p_> I suppose that aggregates is a very important part, but more like an "advanced" add-on  20:23
<vishy> #idea report host aggregate metadata through capabilities to the scheduler so that we don't have to do separate db access in the scheduler.  20:23
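The #idea above folds each host's aggregate membership into its reported capabilities so the scheduler can filter on reachability without a separate db lookup. A hypothetical sketch (the `aggregates` field name is invented for illustration):

```python
# Sketch of filtering on host-aggregate metadata carried in the capability
# report, per vishy's #idea. The 'aggregates' key is an assumed field name,
# not an actual nova capability.

def filter_reachable(host_capabilities, required_aggregate):
    """Keep hosts whose reported capabilities claim membership in the
    aggregate the volume must be reachable from."""
    return [host for host, caps in host_capabilities.items()
            if required_aggregate in caps.get('aggregates', [])]
```

As noted in the discussion, this is static per report cycle: changing aggregate membership only takes effect once the host re-reports.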
<renuka> vishy: that means information like which backend can be accessed by which host-aggregate?  20:24
<vladimir3p_> I'm trying to understand the entire flow ... when volume will be created, should aggregates/host relations be stored in volume types extra specs?  20:24
<df1> vishy: how does this work out with a very large number of host aggregates?  20:24
<vishy> #topic capability reporting  20:24
<vishy> renuka can you change the topic?  Only the chair can do it  20:25
<renuka> now at this point, if we do not have a hierarchy of some sort, and there happens to be storage reachable from a large number of hosts but not all within that zone, it gets a little tricky via capabilities  20:25
<vishy> df1: each host only reports the host aggregate metadata that it is a part of  20:25
<renuka> #topic capability reporting  20:25
*** openstack changes topic to "capability reporting"20:25
<renuka> when did I become chair :P  20:25
<vishy> when you typed #startmeeting  20:25
<vishy> #info the previous conversation was also about capability reporting, and the need for host aggregates to play a part  20:26
<clayg> so is capability reporting _mainly_ reporting the volume_types that are available?  20:26
<vladimir3p_> I suppose it should be also total quantity / occupied / etc  20:27
<vladimir3p_> (per each volume type)  20:27
<renuka> and which hosts it can access, per vish's suggestion  20:27
<vladimir3p_> the reporting itself we could leave as is  20:27
<vladimir3p_> renuka: which host - it will be up to driver  20:27
<clayg> so you know all the storage node endpoints, what volume types they can create and how much space/how many volumes they have?  20:27
<vladimir3p_> I mean every driver could add whatever additional info  20:27
<clayg> renuka: will the storage node _know_ which hosts it can "reach"  20:28
<vladimir3p_> clayg: yes  20:28
<vishy> clayg: the capability reporting is meant to be general, so it depends on the scheduler.  Generally the idea is for complicated scheduling you might report all sorts of capabilities  20:28
<renuka> ok so there is some way for the driver to specify implementation specific/connection details  20:28
<vishy> clayg: but i think most use cases can be solved by just reporting the types that the driver supports  20:28
<clayg> vladimir3p_: but unless every driver is going to run their own scheduler (totally an option) they'll want to agree on what goes where and what it means.  20:28
<vishy> lets get the simple version working first  20:29
<clayg> renuka: the driver can have its own flags, or get config from wherever I suppose, but it wouldn't need/want to report that to the scheduler?  20:29
<renuka> vishy, vladimir3p_: is there a one-many relation between types and storage backends?  20:29
<vishy> there isn't an explicit relationship  20:30
<vladimir3p_> clayg: that's where we will do "generalization": we will report {volume_type, quantity_total, quantity_used, other_params={}}  20:30
<clayg> is quantity megs, gigs, # of volumes?  20:30
<vishy> volume_type doesn't have any default definitions currently  20:30
<vladimir3p_> some abstract numbers  20:30
<vladimir3p_> depends on volume type  20:31
<clayg> vladimir3p_: yeah gotcha, agreed between type - makes sense  20:31
<vishy> renuka: volume_driver_type (which is how compute knows how to connect) currently has: 'iscsi', 'local', 'rbd', 'sheepdog'  20:31
<vladimir3p_> the scheduler will be only able to perform some comparisons  20:31
<vishy> and currently multiple backends export 'iscsi'  20:31
<clayg> vishy: but in the context of scheduling type is more granular - the host may connect via iscsi to "gold" and "silver" volume_types  20:32
<renuka> yes, my point was, more than 1 backend can be associated with a type as abstract as gold  20:32
<renuka> because if gold means netapp backends... we could still have more than one of those, correct?  20:33
<clayg> renuka: well... the backend may just be the driver/message_bus - and then that driver can speak to multiple "nodes"?  (one idea)  20:33
*** dwalleck has quit IRC20:33
<vishy> clayg, renuka: +1  20:33
<clayg> renuka: in the sm impl, where does nova-volume run?  20:34
<renuka> it is meant to be a control plane. so on a node of its own  20:34
<vishy> #idea lets define 3 simple types: bronze, silver, gold and make the existing drivers just export all three types.  This will allow us to write a scheduler that can differentiate the three  20:34
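Putting vladimir3p_'s report shape and vishy's bronze/silver/gold tiers together, a scheduler could pick the host with the most free capacity for the requested type. A minimal sketch; quantities are the "abstract numbers" discussed above, and `pick_host` is an invented name:

```python
# Hypothetical combination of the two proposals: each host reports a list of
# per-type entries in the shape {volume_type, quantity_total, quantity_used,
# other_params}, and the scheduler greedily picks the host with the most
# free capacity for the requested type.

def pick_host(requested_type, reports):
    """reports: host name -> list of per-type capability dicts."""
    best, best_free = None, 0
    for host, entries in reports.items():
        for entry in entries:
            free = entry['quantity_total'] - entry['quantity_used']
            if entry['volume_type'] == requested_type and free > best_free:
                best, best_free = host, free
    return best   # None if no host has free capacity for that type
```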
<clayg> yeah... I guess I really mean like how many?  20:34
<renuka> by node i mean xenserver host  20:34
<vishy> clayg: depends on the backend  20:35
<vishy> clayg: in the simple lvm/iscsi you have X volume hosts and one runs on every host  20:35
<jdg> vishy:  Can you expand on your bronze, silver, gold types?  20:35
<vishy> clayg: in HPSan you run one that just talks to the san  20:35
<vladimir3p_> renuka: how about to report list of volume types, where details like gold/silver connection, etc might be hidden in additional params. The simple scheduler should be able to "consolidate" all together, but more granular schedulers could really understand if gold or silver should be used  20:35
<clayg> vishy: but in renuka's sm branch the xenapi kinda abstracts the different backends  20:35
<renuka> so in SM, the volume driver instances have equal capabilities at this point... we expect the requests to be distributed across them... and they can all see all of the storage  20:35
<vishy> jdg: they are just names so that we can prove that we support different tiers of storage  20:36
<clayg> renuka: but they still have to figure out placement as far as reaching the node?  Or the scheduler already did that for them?  20:36
<vishy> jdg: and that the scheduling and driver backends use them.  20:36
<jdg> vishy: thanks  20:37
<renuka> clayg: at this point, the scheduler is a first fit...  20:37
<clayg> vladimir3p_, renuka: I really don't understand why a report_capabilities would ever send up "connection"  20:37
<renuka> so it treats all the storage equally for now  20:37
<renuka> clayg: connection here means more like access control info  20:38
<renuka> maybe that was a bad word again... more like which hosts can reach this storage  20:38
<clayg> that did not disambiguate the concept for me :P  20:38
<clayg> what is "access control info"  20:38
<vladimir3p_> clayg: I suppose I understand Renuka's use case - you may have different storage controllers connected to different nodes (Active Optimized/Non-optimized, etc.)  20:38
<vladimir3p_> the access from the "preferred" controller might be preferable  20:39
<renuka> Looking at the time, do we want to continue this discussion or touch more of the topics (i vote to continue)  20:40
<vishy> renuka: do you have a solution in mind when there is no knowledge about which vm the storage will be attached to?  20:40
<renuka> vishy: for a scheduler you mean?  20:41
<renuka> or which storage backend will be used when there is no knowledge?  20:41
<vishy> the second  20:42
*** masumotok has joined #openstack-meeting20:42
<vishy> for example, creating a volume through the ec2 api gives you no knowledge of where it will be attached  20:42
<renuka> in that case, the assumption is that reachability is not an issue, correct? So I agree with Vladimir's capability based decision  20:42
<vladimir3p_> vishy: for EC2 - you could have default type  20:43
<renuka> having said that, I would expect some knowledge of the project/user to be used  20:43
<vishy> type is not the concern  20:43
<vladimir3p_> so, do we have an agreement re reporting capabilities? It will report the list of ... {volume type id, quantities (all, used), other info}  20:43
<vishy> the concern is for top-of-rack storage  20:44
<vishy> you have to put it somewhere  20:44
<clayg> renuka: when you say "storage backend" is that a type of backend or a specific node/instances/pool that can create volumes?  20:44
<vladimir3p_> in other_info we could put some sort of UUID for storage array ...  20:44
<renuka> clayg: it could be anything ranging from local storage on the volume node to netapp backends  20:44
<vladimir3p_> we have only 15 min left ... prior to Vish's meeting  20:45
<vishy> do we have specific blueprints for these features?  20:45
<vishy> I would suggest blueprints for the first scheduler with simple capability reporting  20:46
<vishy> at the very least  20:46
<vishy> and someone taking lead on getting it implemented  20:46
<renuka> yes, it would be a really good idea to have as many details of the implementation written down, so we have something more concrete  20:46
<vladimir3p_> just to compare if storage is there?  20:46
*** comstud has joined #openstack-meeting20:47
<renuka> vladimir3p_: would it be possible for you to put the design details into a scheduler blueprint?  20:48
<vladimir3p_> yeah, I could create something small ... sorry, at this stage can't commit for full implementation  20:48
<renuka> sure... we should just have a point where we have a concrete design  20:49
<vladimir3p_> yep, I will write down what I think about simple scheduler & simple reporting  20:49
<clayg> blueprint could link to an etherpad, we could flesh it out more async  20:49
<renuka> makes sense  20:50
<clayg> update blueprint when we're happy (same time next week?)  20:50
<vladimir3p_> we could review it and decide if it is good for everyone as a first step  20:50
<df1> sounds good  20:50
<renuka> we should discuss the time ... turns out netapp and hp couldn't make it to this one  20:50
<jdg> I vote for same time next week  20:50
<vladimir3p_> fine with me  20:51
*** bcwaldon has joined #openstack-meeting20:51
<renuka> can anyone make it earlier on any day? It is 8 pm UTC which might be late for some  20:51
<vladimir3p_> I will create an etherpad for this design .. or should we put it on wiki?  20:51
<vladimir3p_> how about 10am PST?  20:51
<renuka> vladimir3p_: yes  20:51
<vladimir3p_> so, next Thu @ 10am. Renuka, will you send an update to everyone on ML?  20:53
<clayg> vladimir3p_: 10am PST is good  20:53
<vishy> #action vladimir3p_ to document plans for simple reporting and scheduler  20:53
<jdg> 10am works  20:53
<df1> 10am is good  20:53
<renuka> that works too  20:53
*** PhilDay has joined #openstack-meeting20:53
<vladimir3p_> re document location: wiki or etherpad?  20:53
<vladimir3p_> do we have a blueprint for it?  20:53
<vishy> #info next meeting moved to 10 am  20:53
<vishy> vladimir3p_: pls make a blueprint  20:54
<renuka> etherpad is good for hashing out design  20:54
<vishy> you can link to either imo  20:54
<vladimir3p_> ok, will do  20:54
<vishy> #info the rest of the agenda will be discussed next week  20:56
<renuka> vladimir3p_: what did you mean by "Volume-type aware drivers"  20:56
<vishy> renuka: if we're done could you issue an #endmeeting  20:56
*** jaypipes has quit IRC20:56
<vladimir3p_> renuka: to be able to have multiple drivers on the same node responsible for different volume types  20:56
*** openstack changes topic to "Openstack Meetings: | Minutes:"20:57
<openstack> Meeting ended Thu Oct 13 20:57:00 2011 UTC.  Information about MeetBot at . (v 0.1.4)  20:57
<openstack> Minutes (text):
<vishy> everyone look at the minutes:  20:57
<vishy> they are much more useful if everyone is contributing with #info #link #action #idea  20:58
*** mikeyp-3_ has joined #openstack-meeting20:59
<vishy> ok here we go  21:00
<openstack> Meeting started Thu Oct 13 21:00:36 2011 UTC.  The chair is vishy. Information about MeetBot at
<openstack> Useful Commands: #action #agreed #help #info #idea #link #topic.  21:00
<vishy> #topic Blueprint list  21:00
*** openstack changes topic to "Blueprint list"21:00
<vishy> i've gotten most of the blueprints added there  21:01
<vishy> please everyone look and see if there is anything I missed  21:01
*** rjh has joined #openstack-meeting21:02
<pvo> looks good Vish.  21:02
<pvo> I think there is a lot of work here for Essex  21:02
<comstud> i probably should create a XenServer ConfigDrive BP  21:03
<vishy> comstud: pls do  21:03
<comstud> no feature parity there yet  21:03
<bcwaldon> vishy: do you want to assign any of these today?  21:03
<vishy> note that the admin apis have nothing yet  21:03
<vishy> bcwaldon: first I want to make sure that we haven't missed any major ones  21:04
<PhilDay> #info you missed the two blueprints I created yesterday for VM State Machine and Image Cache Management  21:04
<vishy> PhilDay: can you copy them in pls  21:04
<comstud> #action comstud to create xenserver config drive BP  21:04
<pvo> comstud: I think there is an old one for that  21:05
<comstud> might have been marked done with the libvirt part  21:05
<comstud> I'll take a look  21:05
*** mikeyp-3_ has quit IRC21:05
<comstud> says implemented  21:06
*** dwalleck has joined #openstack-meeting21:06
<pvo> comstud: gotcha  21:06
<comstud> i'm supposed to create the story for our sprints anyway :)  21:06
*** dwalleck has quit IRC21:06
<comstud> i'm slacking  21:06
<vishy> can someone add a blueprint for vnc console cleanup? Basically just combining the two vnc paths somehow  21:07
<comstud> did the research, haven't written it yet  21:07
<_0x44> It was implemented with the understanding/expectation that someone with access to XenServer who cared about config-drive would ensure that it worked.  21:07
<_0x44> I think the only thing needed is to attach it  21:07
<pvo> _0x44: I think there was a bit more to do. Create the VDIs and such.  21:07
<comstud> there's more than that  21:07
<comstud> but yeah  21:08
<sleepsonthefloor> vishy - i can add one for vnc  21:09
<vishy> sleepsonthefloor: thx  21:09
*** jakedahn has joined #openstack-meeting21:09
<PhilDay> vishy - added the VM state machine under "Workflow State Machine" but not sure if that's quite the right place for it  21:09
<vishy> PhilDay: that is fine  21:10
<vishy> so we are still missing a few blueprints related to volumes  21:10
<vishy> other than that i think we have everything in  21:10
<vishy> well we are missing a few in the security session as well  21:12
*** mikeyp-3 has joined #openstack-meeting21:13
<_0x44> vishy: Are the BP's bengrue worked on in Diablo already listed?  21:13
<pvo> vishy: are you going to prioritize these from your perspective as PTL?  21:14
<vishy> which are those?  21:14
<vishy> pvo: that is what I want input on  21:14
*** salv-orlando has joined #openstack-meeting21:14
<vishy> pvo: I need to know which ones are required by teams  21:14
<_0x44> vishy: Bringing bengrue in here to answer that.  21:14
*** df1 has left #openstack-meeting21:14
<vishy> there are definitely some missing  21:15
*** bengrue has joined #openstack-meeting21:15
<vishy> is westmaas here?  21:15
<_0x44> bengrue: What are the blueprints you were working on in Diablo around volumes?  21:15
<pvo> I was just chatting with him.  21:16
<vishy> so we have a few very big ones  21:16
<vishy> and we need to decide if we want to tackle them in diablo  21:16
<vishy> * essex  21:16
<bengrue> Sec.  That was a bit ago.  21:16
<vishy> or push to f  21:16
<pvo> vishy: what do you think are the biggest?  21:16
<pvo> top 5  21:16
<vishy> so the big ones as I see it are:  21:17
<vishy> 1) DB Cleanup and testing  21:17
<comstud> Do we have an overall goal for Essex?  21:17
<vishy> 2) Zone Aggregation  21:17
<pvo> stability, performance and scalability  21:17
<vishy> 3) Workflow State Machine  21:17
<comstud> pvo: perfect  21:17
<vishy> 4) Secure Databases  21:17
<bhall_> pvo: is that all? :)  21:17
<pvo> bhall_: over features, I would think so  21:18
<pvo> I would prefer those 3, over any new features  21:18
<vladimir3p_> vishy: under Workflow state machine do you mean "orchestration, jobs, retries, etc..."  21:18
<vishy> the problem is i think all of those are required  21:18
<PhilDay> #agree with pvo  21:18
<vishy> vladimir3p_: I'm referring to sandy's proposal of actually having workflow management  21:19
<vladimir3p_> vishy: yep  21:19
<vishy> I don't think we can hit all 4 of those  21:19
<vishy> and they are listed in priority as i see them  21:19
<vishy> #topic large blueprints  21:19
*** openstack changes topic to "large blueprints"21:19
<vishy> #info Largest blueprints: db cleanup and testing, zone aggregation, workflow state machine, secure databases  21:20
<pvo> vishy: we have 4 milestones. One goal per milestone?  21:20
<vishy> some of them can be worked concurrently  21:20
<vishy> but I think all 4 of those require a team  21:20
<vishy> i don't think one person can handle any of them  21:21
<PhilDay> pvo:  I'd be worried that we'd still have big changes coming through late in Essex if we do one per milestone  21:21
<vishy> I'd like to push secure db / queue  21:21
<vishy> #info upgrades is also large  21:21
<bengrue> _0x44: the blueprint I was working on was  21:21
<vishy> bengure: that is implemented and merged  21:21
<vishy> * bengrue  21:21
<_0x44> I wasn't sure if it was merged  21:22
<bengrue> Yeah, Vlad I think drove it home?  21:22
<PhilDay> Upgrades is v important now there are systems deployed  21:22
<_0x44> Ok, sorry for the confusion.  21:22
<vishy> Yes we need a team for all 5 of those, but if we don't hit secure dbs in essex does that kill anyone  21:22
<vishy> PhilDay: can we push that one from your perspective?  21:22
<anotherjesse> vishy: secure db + zones might need to coordinate  21:23
<vishy> PhilDay: That one in particular requires massive changes  21:23
<vishy> anotherjesse: I'd like to tackle db cleanup + zones first, i think it makes secure db easier  21:23
<vishy> working on it concurrently will be a huge issue  21:24
<anotherjesse> vishy: also we might want to think of using a smaller component like the volume component for testing out the secure db  21:24
<pvo> vishy: ++  21:24
<PhilDay> Agreed, and the session in Boston showed there are more questions than answers - but if we don't hit it soon it'll just get worse  21:24
<vishy> #idea create subteams for the major work sections  21:24
<anotherjesse> vishy: teams mean launchpad teams?  21:24
<vishy> anotherjesse: I don't know if it needs to be that formal  21:25
<pvo> I think it's just working groups that Vish keeps tabs on  21:25
<vishy> zone aggregation, i'd like to see anotherjesse, comstud, sandywalsh collaborate  21:25
<vishy> don't know if there is anyone else who should be involved there  21:26
<comstud> yeah, myself and sandy are probably a given  21:26
<PhilDay> We have some views, although we're not heavy into zones - yet  21:26
<anotherjesse> vishy: wiki page of major themes for essex with people on it (so people who aren't on it originally can join in)  21:27
<vishy> does anyone want to tackle the db cleanup and testing stuff  21:27
<vishy> anotherjesse: definitely  21:27
<bcwaldon> vishy: blamar would be a good one there, I'm also interested  21:27
*** primeministerp1 has joined #openstack-meeting21:27
<comstud> bcwaldon: funny, I was just going to nominate both of you  21:27
<vishy> how about upgrades?  21:27
<anotherjesse> sleepsonthefloor: do you want to help on db, statemachine or ?  21:28
<blamar> bcwaldon: thanks there buddy  21:28
<vishy> that one is a lot of theory and prototyping  21:28
<bcwaldon> blamar: there you are!  21:28
<comstud> blamar: np :)  21:28
<_0x44> vishy: Am interested in db-cleanup at piston  21:28
<rjh> once again, we need to have a group to define what upgrades are all about  21:28
<vishy> I'm adding this to the etherpad  21:28
<rjh> We are definitely interested  21:28
<vishy> feel free to add yourself to a section  21:28
<vishy> rjh: who are you with again?  21:29
<pvo> I'm going to throw Dietz on the upgrades. He and I have been talking about it and whiteboarding  21:29
<anotherjesse> _0x44: are you guys interested in building a db backend that isn't sqlalchemy?  21:29
<_0x44> anotherjesse: Yes.  21:29
<anotherjesse> _0x44: zk?  21:29
<_0x44> anotherjesse: Likely  21:29
<PhilDay> rjh=Ray Hookway (HP)  21:30
*** NeilJohnston has joined #openstack-meeting21:30
anotherjesse_0x44: I think it is important that whoever leads the db cleanup does in order to be able to ship an alternative backend21:30
bcwaldonthanks PhilDay, was wondering ;)21:30
PhilDayAt least I think it's Ray :-)21:30
rjhIt is21:31
jk0if not, he was just volunteered ;)21:31
_0x44pvo/vishy: I'd like to add novas0x2a to upgrades also21:31
vishyoh hai21:31
vishy#topic subteams for major sections21:31
*** openstack changes topic to "subteams for major sections"21:31
_0x44Sorry, am on a terrible network right now21:31
pvo_0x44: something didn't convert there21:31
comstud0x44: you still in boston?21:32
salv-orlandoCitrix would be more than happy to contribute on upgrades as well. I think the name is johngarbutt but I'm not totally sure21:32
_0x44pvo: novas0x2a is mike lundy21:32
vishy#info breaking out subteams for particular sections of the code21:32
_0x44pvo: The Nordic bearded guy at Piston21:32
vishyok we have one more subteam21:32
vishyfeature parity21:32
anotherjessevishy - you mean hypervisors?21:32
_0x44I have to run now21:32
anotherjesseor apis?21:32
vishyanotherjesse: hypervisors21:32
pvoall hypervisors?21:33
pvowe focused just on KVM and XenServer at the summit21:33
pvois that still the majority focus?21:33
anotherjessepvo: probably some things will result in documenting "not supported" - for things that don't make sense21:33
vishyno just the main two21:33
vishyi don't want to get crazy21:33
pvojust wanted to make sure21:33
vishyso a quick once-over from the interested parties21:33
anotherjessevishy: does this include the fact that xenserver at rax vs. kvm openstack is very different from user perspective (image formats, userdata/metadata, how to auth to servers, ...)21:34
vishyanotherjesse: it doesn't21:34
rjhHow about a security team to deal with message signing and db access?21:34
vishyanotherjesse: we have no blueprints for those things, but they sound important21:34
vishyrjh: yes security team is good.  I'm concerned about them being able to make much progress21:35
anotherjesse#info jesse write up a blueprint on ux for openstack21:35
*** Gordonz has quit IRC21:35
vishybut they can at least start planning21:35
vishypvo, is there someone on your team who is more focused on feature parity?21:35
anotherjessevishy: I volunteer pvo and sleepsonthefloor21:36
pvomore focused?21:36
anotherjesseit would be nice if someone from ubuntu (kvm/lxc) & citrix (xs) could help21:36
vishy#info db and rabbit security should be planned but they aren't targeted to be complete by essex21:37
sleepsontheflooryeah, I'm definitely interested in parity21:37
salv-orlandomore or less the whole Citrix team is focused on parity21:37
vishyanotherjesse: agreed, I wonder if we could add salv-orlando21:37
*** NeilJohnston has quit IRC21:37
*** bcwaldon has quit IRC21:37
salv-orlandoI'd say ewanmellor would be happy to be on this team as well21:37
sleepsonthefloorit would be nice if someone else who has thought about nova-network and security groups with xen could be on it21:38
vishyso does anything else seem too dangerous to be in scope for essex?21:38
anotherjesseapi 2.0 - but before zones is done … that seems scary to talk about21:39
pvoanotherjesse: agree21:39
vishybcwaldon disappeared21:39
vishyI don't think we need 2.0 for essex21:39
pvo#info wait on zones to be completed before nova api 2.021:39
anotherjessevishy: rbac is medium sized - mostly busy work21:39
vishyanotherjesse: good point we need to give special attention to that one21:40
anotherjesse(arguing over what the proper verbs are)21:40
vishyanotherjesse: do you think we should have a team for that.  Should rcb take lead on it?21:40
anotherjessemaybe todd?21:40
anotherjessehe needs that for nasa21:40
vishyi don't mind heading that one up21:40
anotherjessevishy: having you on volumes would probably be better21:41
pvovishy: re: the top5, are there BPs for each of those?21:41
anotherjessesince you have done a lot of that work already21:41
vishyanotherjesse: I'm trying to offload volumes onto third parties21:41
vishyanotherjesse: so hopefully i can move into more of a supporting role there21:41
rjhThere is a blueprint for upgrades, but it needs to be made more specific21:42
vishyrjh: i think that one needs to be broken out into specific blueprints21:43
rjhSecure databases is mentioned in the nova-security-updates blueprint21:43
*** jakedahn has quit IRC21:43
vishy#action upgrades team to meet and come up with specific action items and blueprints21:43
rjhI agree - that goes for both of them21:43
vishy#info volumes team met today and started discussing the needed pieces21:44
vishy#info we don't have specific blueprints for the parts yet21:44
vishyblamar: do you want to lead the db cleanup team?21:45
*** mikeyp-3 has quit IRC21:45
anotherjessevishy: nothing against blamar but I really think having the lead caring about an alternative backend is important21:45
anotherjessecaring = require21:45
vishyblamar: there is one blueprint so far, which is to make the db layer return dictionaries21:45
vishyanotherjesse: so you think 0x44 should head it up?21:45
blamarvishy: apologies, was on the phone..however not currently interested in leading a team; busy life21:46
vishydoes anyone here care about zookeeper?21:46
anotherjessevishy: supposedly _0x44 does21:46
anotherjessecan we pencil it in - with a note that we need to verify21:46
vishy#action vishy to verify with _0x44 that he can lead the db cleanup efforts21:46
blamarvishy, anotherjesse: I'm an advocate for getting a couchdb backend (as per my email to the mailing list a while back on this subject) and I've used ZK in the past21:47
vishyanotherjesse: are you heading up the zone scaling coordination?21:47
blamarso I'd love to assist whomever leads21:47
vishyblamar, so you're volunteering again :)21:47
blamarnegatory, I'll be a fine assistant21:47
anotherjessevishy: i am from our team - if sandywalsh or comstud is a better lead overall, I have no objections21:47
pvovishy: comstud/sandy can as well.21:48
*** mikeyp-3 has joined #openstack-meeting21:48
vishycomstud: ?21:48
pvoBoth have a pretty firm grasp of the issue21:48
comstudnot sure i'm the best person21:48
vishywe also have that big nasty Recovery/Workflow section21:48
comstudesp. if i'll be tied up with zones21:48
vishywhich sandy seems to be very excited about21:48
anotherjessecomstud: how about you - since you guys have to have zones to ship?21:48
sandywalshI'm up for zones or orchestration ... two hot topics for me21:48
comstudare we talking about zones?21:48
pvocomstud: we're talking about zones. : )21:49
vishycomstud: zone scaling is what we're talking about21:49
comstudi was chatting with sandy and not paying attention21:49
vishy#info comstud to lead zones with help from sandy and anotherjesse21:49
comstudI can take that21:49
vishy#info sandy to lead orchestration with help from ...21:49
pvovishy: where is donabe in relation to orchestration?21:49
sandywalshI assumed donabe was more operations oriented, not nova internal? no?21:50
vishypvo, sandywalsh: It is more related to creating groups of resources i think21:50
pvoI would think there would be some of the same retrying/requeueing in there as well21:50
vishywhereas this is more concerned with the state of individual resources21:50
*** creiht has left #openstack-meeting21:51
vishydoes anyone here care particularly about Operational Support21:51
sandywalshresources and the workflows around them21:51
vishyincluding admin apis, cli tools, etc?21:51
pvovishy: we do... I'm repping for our ops guys today21:52
anotherjessevishy: we should volunteer chmouel or others from deployment team21:52
vishysleepsonthefloor: are you able to lead the feature parity section?21:52
*** NeilJohnston has joined #openstack-meeting21:52
sleepsonthefloorvishy - for sure21:52
sleepsonthefloorsalv-orlando - you interested in helping  then?21:53
vishyok, so i need someone to take the lead on Upgrades, Operations, Security21:53
salv-orlandosleepsonthefloor: on feature parity? Yes.21:53
pvovishy: let me get back to you on the ops part.21:53
vishypvo: ok21:54
pvoI'm recruiting some of our ops guys to take a more active role in the open.21:54
vishypvo: are you leading upgrades?21:54
pvome or Dietz.21:54
vishyrjh: shall I put you on the lead for security planning?21:54
anotherjessevishy: I'm not sure if pvo is a good fit - as they will run an internal version - upgrades is more about shipped software?21:54
vishyanotherjesse: good point21:55
anotherjessevishy: someone in canonical or citrix or others who ship software based on milestones / releases21:55
pvoit's more how to do upgrades is what we're thinking. procedures and best practices.21:55
vishyanotherjesse: perhaps someone from canonical will step in here21:55
pvoyou're meaning make sure upgrades work from Diablo to pre-Essex?21:55
anotherjessediablo to essex21:55
vishypvo: it is diablo to essex21:55
pvoessex-pre release... sure. Yes, probably not the right guy there21:56
salv-orlandovishy: we at Citrix have definitely interest in upgrades. Unfortunately our 'upgrade' men are not here tonight21:56
pvowe won't be doing massive upgrades every 6 months21:56
anotherjessepvo: right - not that you won't be able to contribute and have lots of good feedback - but when you run a service you can do 100 upgrades between diablo and essex in stages (gradual db migrations, ...)21:56
vishyguys, I need some suggestions for how these groups should keep in touch21:56
*** rnirmal has quit IRC21:56
pvoanotherjesse: yep. Totally valid point21:56
pvowalkie talkies21:57
anotherjessepvo: you should definitely have folks on the team21:57
vishyshould we start little mailing lists for all of them like we did for the volume group?21:57
pvoanotherjesse: absolutely will21:57
anotherjessevishy: does lp teams give you a mailing list?21:57
pvoanotherjesse: I believe it does21:57
vishyanotherjesse: yeah21:57
vishythat is how we did it for volumes21:57
anotherjesseteams with open enrollment (or something else?) seems valuable21:58
vishyanotherjesse: ok21:58
vishy#action vishy to create subteams for mailing list communication21:58
anotherjessevishy: melange integration into nova - ?  is that an essex thing?21:58
vishy#info subteams will be open enrollment21:59
vishy#info each team will have a poc so that vishy has someone to bug21:59
vishy#info subteams are responsible for creating blueprints for needed work and helping vishy target them to specific milestones22:00
vishy#action vishy to go through the registered blueprints and accept them and assign them to the subteam.22:01
*** medberry is now known as med_out22:01
vishy#info subteam leader can reassign the blueprints to specific individuals or real teams to do the actual work22:01
vishyOk guys, I don't want to make this too heavyweight22:01
vishyBut there are way too many people working on the code for me to keep tabs on all of this at once, so I appreciate the help22:02
vishyI will create open launchpad groups for all of these teams so that they can communicate22:03
vishyi think there is already an operations team, so I might just use that one.  Although I specifically want dev help for tool and admin api creation22:03
vishyAny other subteam that we're missing?22:03
salv-orlandoI think there is already a melange team22:04
salv-orlandoah ok22:04
vishy#topic What do we do with melange22:05
*** openstack changes topic to "What do we do with melange"22:05
vishylets take a minute and plan that22:05
vishyany input on melange?22:06
anotherjesseright now nova-network doesn't use it - if we integrate it then we should update nova-network?22:06
vishyanotherjesse: perhaps22:06
jk0I think it has a long ways to go before it can merge into trunk22:07
vishyanotherjesse: why not just keep it separate22:07
salv-orlandoin QuantumManager we allow users to either use traditional nova IPAM or Melange22:07
vishyanotherjesse: if the merge was an easy refactor inside was better22:07
salv-orlandowe could do the same in Flat & VlanManager22:07
jk0+1 on keeping it separate22:07
vishyanotherjesse: just let melange support quantum22:07
*** martine_ has quit IRC22:07
anotherjessehas anyone investigated the level of effort to make nova-net use melange?22:07
bhall_vishy: +122:07
vishyanotherjesse: no, but I'm worried about the two integration points22:08
vishyquantum is already integrating, so we save work by just using that integration22:08
salv-orlandovishy: +122:09
anotherjessesalv-orlando: my goal would be to kill the nova-ipam to make things simpler - instead of making both quantum and nova-net use both22:09
anotherjessebut I'm good with waiting - just want to make sure22:09
vishyare any of the key melange devs here?22:09
vishyanotherjesse: does quantum use both?22:10
jk0they're mostly/all in India22:10
salv-orlandoI know only Troy and Rajaram22:10
bhall_vishy: it can, currently22:10
bhall_(by quantum I'm asuming you mean quantummanager)22:10
salv-orlandojust to be precise, quantummanager is nova-net manager for quantum22:11
vishysalv-orlando: so there is no integration in quantum itself?22:11
*** HowardRoark has quit IRC22:11
bhall_vishy: that's correct22:11
salv-orlandovishy: no, we actually do it in nova-network!22:12
vishyanotherjesse: in that case it sounds like the integration is fairly easy22:12
salv-orlandoI don't see it as a major hurdle. bhall_ might have more details22:12
bhall_yeah, I don't think the integration would be that bad22:12
vishybhall_: but you think it should be separate?22:13
anotherjessevishy: it is a 10k line diff - just worry about it living in the edge too long and not getting attention22:13
vishyanotherjesse: does it need attention?22:13
salv-orlando(integrating, or replacing nova's IPAM as anotherjesse suggested)22:13
anotherjessevishy: given that we haven't looked at it outside of the quantum use case … maybe?22:13
vishyanotherjesse: someone is going to have to spike this it sounds like22:14
anotherjessei'm suggesting a spike - ya22:14
*** zykes- has quit IRC22:14
pvowe were just talking about this today22:14
pvoI think they're going to try to figure out how to break it apart22:14
pvounknown if it can22:14
vishyok who can spike it?22:15
pvoI think this is Trey or Koelker22:15
vishytr3buchet might be a good choice22:15
pvosomeone on my team22:15
vishypvo can you have the two of them look at it and come up with a plan22:15
vishyconsidering a) our existing ip management sucks, what is the best path forward22:15
pvo#action will coordinate a plan to get the network diff broken up22:16
vishypvo thx22:16
vishyok anything else22:16
anotherjessepvo: I feel that if we don't move to melange we will end up implementing most of the api in an os-extension22:16
vishy#topic open discussion22:16
*** openstack changes topic to "open discussion"22:16
*** mattray has quit IRC22:16
comstudI've been cracking down on HACKING violations when reviewing22:18
comstudI dunno if this is the case or not... but it seems maybe there hasn't been a lot of attention paid to them22:18
vishyhopefully these teams will help with the reviews22:18
comstudSome of the stuff that annoys me in the code is general inconsistency22:18
anotherjessecomstud: be strict but helpful?22:19
comstudso I'm feeling like we could crack down a little more on that22:19
anotherjesseI kinda want a termie bot again - for the simple stuff22:19
vishythere are a lot of cleanup related tasks needed that don't have blueprints22:19
jk0there's a lot of stuff that isn't covered in HACKING22:19
salv-orlandotermie bot?22:19
vishy#idea jesse suggested dedicated a couple of milestones to stabilization and cleanup22:20
salv-orlandoSo it wasn't the real termie putting needs fixing for a missing linefeed on a docstring in my branch?22:20
tr3buchetwe could probably dedicate a release to it22:20
comstudI hate making people fix little stuff like this.. for fear I'll come off like an asshole. :)  But..22:22
comstudThat's the main complaint I hear when someone new goes to look at the code22:22
comstudA number of style differences... and then people not being sure which way they should do something22:22
pvocomstud: +++22:22
comstudHACKING is pretty simple to follow.. there's clear instructions there.22:22
vishymaybe we need to add a link to hacking to how to contribute22:23
vishyand the gerrit instructions22:23
sandywalshHACKING is good to enforce, sadly "pythonic" is much harder22:23
jk0we can't really enforce something that isn't in HACKING. should we consider adding more to it?22:23
comstudsandywalsh: Yeah, that's the thing.22:23
vishybefore typing git review: check HACKING, run tests, etc.22:23
salv-orlandoor enforce something like the termiebot as we enforce pep822:24
anotherjesseanyone not think it is reasonable to mark a patch "plz don't merge - needs fixing" if it has consistency issues?22:24
vishythat is the point of code review, right?22:24
comstudone point of it22:24
pvoI think the question is which consistency22:24
pvowhose consistency22:24
comstudbut 'consistency' right now doesn't mean a lot22:24
tr3buchetwhat else could be the point22:24
jk0that's why I propose adding more things to HACKING22:25
comstudtr3buchet: obvious bugs that tests don't catch22:25
anotherjessesalv-orlando: termie bot was something justin-sb created to catch issues termie consistently flagged22:25
tr3buchetright right22:25
vishycan someone add a set of instructions on running unit tests and code guidelines to the wiki22:25
jk0indentations, single vs double quote usage, etc22:25
tr3buchetconsistency can always come in the form of refactoring22:25
bhall_vishy: in quantum we have a "TESTING" file that talks about that.. might be good to have one in nova, too22:25
vishyI'd like to see a nova contribution guidelines page22:26
sandywalshwell that brings up the issue of unit tests vs. integration tests in the core test suite (nearly everything is integration currently)22:26
vishythat we can link to from the gerrit instructions22:26
tr3buchetit would be lovely to pare that down to unittests only22:26
vishysandywalsh: we need major unit test cleanup.  That is one of the large sections that doesn't have blueprints22:26
vishydo we need  a team for unit test cleanup?22:27
jk0gotta admit, it was pretty nice back in the day when tests took less than a minute to run22:27
vishyjk0: ikr22:27
tr3buchetthe good old days22:27
vishydoes anyone care about testing cleanup enough to spearhead an effort on it?22:27
sandywalshvishy, even if we could break it into smoke tests (pure unit tests) and "other" ... would be a great start22:27
vishyI know ntt is adding more and more tests22:28
jk0I think that would eventually help coverage too22:28
jk0I'm afraid these complex "unit" tests scare people away from writing new ones22:28
comstudvishy: Sooner rather than later would be good.22:28
sandywalshdon't we need to unify our integration test efforts?22:28
comstudvishy: I feel like I lose a lot of dev time with how long the tests take22:28
vishysandywalsh: yes but there is a team focusing on that22:29
sandywalshcomstud, and people trying to understand the tests when something breaks22:29
vishycomstud: I've been trying to parallelize22:29
comstudvishy: yea22:29
tr3buchetcomstud, jk0: you can run individual modules22:29
comstudtr3buchet: And I do22:29
jk0well yeah22:29
tr3buchetoh even then22:29
vishythis seems like a big discussion22:29
sandywalshdevstack to parallelize running the tests :D22:29
jk0but you still have to run the entire suite before proposing merge22:29
comstudtr3buchet: I run individual test cases sometimes as well..22:29
vishydoes anyone have a way forward on this?22:29
tr3bucheti usually only end up running the "big test" after i'm finished and the module tests pass22:30
vishyI'm pretty full with the blueprint and subteam stuff right now22:30
tr3bucheti think sandywalsh had a good point22:30
comstudright now.. i have something i'm working on..22:30
comstudi'm not sure which tests will break exactly22:30
comstudi have a rough idea22:30
vishyso I don't have the brain power to come up with a plan here22:30
tr3buchetbreaking the tests into unittests and other22:30
comstudso I run them all22:30
comstudwait for a breakage.. fix it by re-running the broken module22:30
comstudthen re-run the whole test suite again22:30
vishytr3buchet: do you want to lead that effort?22:30
comstudwait for the next breakage22:30
vishyI'm happy for a test cleanup team to be formed22:31
tr3buchetvishy: if it comes down to it22:31
vishyok i will create the team and add tr3buchet , comstud , sandywalsh to it22:31
tr3buchetvishy: sounds like i've got a good bit of work to do in the melange area22:31
comstudi nominate dprince.. because he seems to not be here22:31
jk0I'd like to see bcwaldon on it too22:32
vishyperhaps titan can take the lead on that22:32
jk0he seems to know his way around the tests22:32
vishy#action vishy to try to find a lead for testing cleanup team22:32
comstudi'm mostly serious about dprince, because he seems to have huge passion for tests22:32
vishys1rp_: seems to care a lot too22:32
tr3buchetdon't forget jaypipes22:33
jk0agreed, s1rp_ and waldon come to mind when I think 'tests'22:33
* comstud queues 'we care a lot'22:33
sandywalshI'm busier than a two-headed woodpecker ... don't know how valuable my contributions will be (from here)22:33
vishythere is already a qa team22:33
tr3buchetyea that's my thought too sandywalsh22:33
vishysandywalsh: I'm going to be on all of these teams, so yay me22:33
sandywalshvishy, that's your lot in life though22:33
tr3buchethey you let them talk you into being the king of openstack22:33
vishytr3buchet: :(22:34
vishyok i will create these teams22:34
vishyand we will see how things shake out over the next week22:34
tr3buchetare we going to have blueprints?22:34
vishytime to finish this meeting22:34
*** openstack changes topic to "Openstack Meetings: | Minutes:"22:34
openstackMeeting ended Thu Oct 13 22:34:44 2011 UTC.  Information about MeetBot at . (v 0.1.4)22:34
openstackMinutes (text):
vishytr3buchet what do you mean?22:34
tr3buchetfor the test changes22:35
pvothanks Vish.22:35
comstudthanks Vish22:35
PhilDayThanks Vish22:35
tr3buchetwhat's happening right now22:35
vishyyw guys22:35
sandywalshthanks vishy !22:35
vishyi will summarize on mailing list22:36
vishyI will assign all the blueprints i can to teams22:36
vishyany that don't fit i will manage22:36
*** PhilDay has quit IRC22:36
vishysubteams add blueprints that don't exist yet22:36
vishy(for upgrades for example)22:36
comstuds/manage/make tr3buchet do/22:36
tr3buchetoh yes, by all means22:37
comstudi know your boss22:37
comstudi can make it happen.22:37
tr3bucheti'm bigger than him22:38
comstudtaller, anyway22:38
tr3buchetactually.. we're both bigger than you22:38
comstudbut that's not difficult22:38
*** jdg has quit IRC22:39
*** jdg has joined #openstack-meeting22:39
*** NeilJohnston has quit IRC22:39
*** masumotok has quit IRC22:39
*** rjh has quit IRC22:39
*** jk0 has left #openstack-meeting22:40
*** mikeyp-3 has quit IRC22:42
*** primeministerp1 has quit IRC23:12
*** dragondm has quit IRC23:16
*** joonwon has joined #openstack-meeting23:29
*** joonwon has quit IRC23:51

Generated by irclog2html 2.14.0 by Marius Gedminas