Thursday, 2012-05-10

16:00 <jaypipes> dachary: afternoon. :)
<openstack> Meeting started Thu May 10 16:00:22 2012 UTC. The chair is dachary. Information about MeetBot at
16:00 <dachary> #chair nijaba dachary
16:00 <dachary> #meetingname ceilometer
16:00 <dachary> #topic actions from previous meetings
<dachary> #info dachary removed obsolete comment about floating IP
16:00 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
<dachary> #info dachary o6 : note that the resource_id is the container id.
16:00 <openstack> Current chairs: dachary nijaba
16:00 <openstack> The meeting name has been set to 'ceilometer'
16:00 <dachary> jaypipes: hi
16:00 <dachary> jd___: actions?
16:01 <dachary> nijaba: actions?
<nijaba> #info The discussion about adding the source notion to the schema took place on the mailing list
16:01 <nijaba> #info The conclusion was to add a source field to the event record, but no additional record type to list existing sources.
16:02 <jaypipes> nijaba: could you explain that a bit more please?
16:02 <jaypipes> nijaba: what are existing sources?
16:03 <nijaba> jaypipes: sources could be different installations of openstack, or metering of other projects not sharing their creds with keystone
<jd___> #info jd___ add Swift counters, add resource ID info in counter definition, describe the table
16:04 <jaypipes> nijaba: k, got it. so basically, a source field that is NULLable.
16:04 <nijaba> jaypipes: or set to a default, as the implementor prefers
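The conclusion above (a per-event source field, nullable or defaulting as the implementor prefers, with no separate source table) can be sketched as a minimal record; every field name other than source is illustrative, loosely based on the counter fields discussed later in the meeting:

```python
from dataclasses import dataclass
from datetime import datetime
from typing import Optional

@dataclass
class MeteringEvent:
    # Illustrative fields; only "source" reflects the ML conclusion above.
    counter_type: str
    user_id: str
    project_id: str
    resource_id: str
    timestamp: datetime
    volume: float = 0.0
    # The agreed design: a source field on the event record itself,
    # NULLable or set to an implementor-chosen default; no additional
    # record type enumerating the known sources.
    source: Optional[str] = None

evt = MeteringEvent("instance", "u1", "p1", "r1", datetime(2012, 5, 10))
assert evt.source is None  # NULLable, as jaypipes summarizes
```

A deployment metering several OpenStack installations would simply stamp a different source string on each installation's events.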
16:04 <dachary> #topic meeting organisation
<dachary> #info This is meeting 2 of 5 to decide the details of the architecture of the Metering project
16:04 <dachary> #info Today's focus is on the definition of the external REST API
16:04 <dachary> #info There have not been enough discussions on the list to cover all aspects, so the focus of this meeting was adjusted to cope with that.
16:04 <dachary> #info The meeting is time-boxed and there will not be enough time to introduce innovative ideas or research solutions.
16:04 <dachary> #info The debate will be about the pros and cons of the options already discussed on the mailing list.
16:04 <dachary> comments, anyone?
16:05 <nijaba> dachary: on which topic? ;)
16:05 <dachary> organization ;-)
16:05 * nijaba +1 the org
16:05 <dachary> #topic API defaults and API extensions
16:05 <jaypipes> My only comment is that I believe Ceilometer shouldn't invent its own API extensions mechanism... it should use the system in Nova.
16:05 <dhellmann> +1 jaypipes
16:05 <dachary> jaypipes: +1
16:05 <ss7pro> +1 jaypipes
16:05 * nijaba had no idea this was going on, so +1
16:05 <jaypipes> it has its rough edges, but it gets you 90% of the way there.
16:06 <dhellmann> I propose we table "extensions" for now and concentrate on the core API pending further discussion of extensions on the list.
16:06 <jd___> +1 jaypipes
16:06 <jaypipes> dachary: also, it might just be my misunderstanding, but I want to make sure that API extensions and plugins are clearly delineated.
16:07 <jaypipes> dachary: the description in the mailing list thread of API extensions seems to bleed a bit into plugin land. :)
16:07 <dachary> well, I kind of assume we only need plugins for the purpose of implementing API extensions
16:07 <nijaba> that's my understanding as well
16:07 <jaypipes> Essentially, things like backend stores and such should not be API extensions, but rather plugins that use an adapter/driver model to have a pluggable implementation, using that same external API
16:07 <dachary> which may not be true, but I was only thinking about the API at the time
16:07 <nijaba> the other type of "plugins" being agents
16:08 <dhellmann> we can also use plugins to add event monitors and polling to the agents running on the compute nodes
16:08 <jaypipes> ok, just wanted to make sure things like /extensions/MongoDbBackend/ etc. weren't being considered...
16:08 <nijaba> dhellmann: polling? the whole model we are discussing is push...
16:08 <dachary> As far as the API is concerned, my suggestion was that each API extension is implemented as a plugin with a predefined interface.
16:08 <dhellmann> well, we're going to have to poll libvirt, right?
<woorea> some effort has been done in
16:09 <nijaba> dhellmann: an agent should poll libvirt and push
16:09 <dachary> #agreed Ceilometer shouldn't invent its own API extensions mechanism... it should use the system in Nova.
16:09 <jaypipes> dachary: k
16:09 <dhellmann> nijaba, exactly. There may be other things that we want to/need to poll, though.
16:09 <nijaba> dhellmann: right
16:09 <sprintnode> would nova-instancemonitor be useful?
<dachary> woorea: and recently improved by
16:10 <jaypipes> not sure who brought it up on the ML, but I also agreed with the statement that ceilometer should try as much as possible to disaggregate the concept of collection from the concept of aggregation or reporting.
16:10 <nijaba> sprintnode: yes, but let's save this for the agent discussion
16:10 <dhellmann> nijaba, so if we define a plugin API for all of the things that poll and another for things that care about notification events, then it is easy to add new counters
16:10 <dachary> jaypipes: there seems to be a consensus on that (aggregation != collection)
16:11 <ss7pro> but those munin plugins are using the SQL db directly
16:11 <ss7pro> nova db
16:11 <woorea> dachary +1
16:11 <nijaba> dhellmann: here we are talking about the external REST API, not the internal agent API, which will be discussed on the 24th
16:11 <dhellmann> nijaba, sure, I'm speaking more generally about plugins: use the Nova system and use it everywhere.
16:12 <nijaba> dhellmann: ah, makes sense then :)
16:12 <dachary> #action dachary add info to the wiki on the topic of poll versus push
16:12 <dhellmann> so, should we discuss the core API?
16:12 <dachary> let's move on to the next topic
16:13 <dachary> #topic API defaults
16:13 <dachary> #info GET list components
16:13 <dachary> #info GET list components meters (argument: name of the component)
16:13 <dachary> #info GET list [user_id|project_id|source]
16:13 <dachary> #info GET list of meter_type
16:13 <dachary> #info GET list of events per [user_id|project_id|source] (allows specifying user_id or project_id or both)
16:13 <dachary> #info GET sum of (meter_volume, meter_duration) for meter_type and [user_id|project_id|source]
16:13 <dachary> #info other?
16:13 <dachary> this is the current list in the wiki
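The wiki list above can be sketched as a table of hypothetical REST routes. None of these paths were agreed on in the meeting; they are one possible reading of the queries, written down only to make the list concrete:

```python
# Hypothetical paths for the wiki's query list; purely illustrative.
API_DEFAULTS = {
    "GET /components": "list components",
    "GET /components/<name>/meters": "list a component's meters",
    "GET /users , /projects , /sources": "list user_id / project_id / source values",
    "GET /meter_types": "list the available meter types",
    "GET /events?user_id=&project_id=&source=":
        "list events, filterable by any combination of the three",
    "GET /sums?meter_type=&user_id=&project_id=&source=":
        "sum of (meter_volume, meter_duration) for a meter type",
}

# Every query in the wiki list maps onto exactly one route here.
assert len(API_DEFAULTS) == 6
```

jaypipes' later point in the discussion is essentially that the agreed semantics should be expressed through resource paths like these rather than through an RPC-style verb list.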
16:13 <dhellmann> would "GET list of events" allow for filtering by event type?
16:13 <dachary> I'm under the impression that there is a thin line between the "core API" and the "extensions"
16:13 <nijaba> dachary: there was a proposal to allow queries for user_id && project_id
16:14 <nijaba> for any one of the counters
16:14 <dhellmann> for example, I may want to charge a user a flat rate to create an instance and then a separate rate for keeping it alive for a period of time. So I need to know about creation events and aggregated runtime
16:14 <dachary> #info GET list of events per user_id && project_id
16:14 <woorea> for me, summing meter_volume and meter_duration is aggregation, not collection
16:14 <jaypipes> Doesn't quite look like a RESTful API that is similar to the other OpenStack APIs...
16:15 <nijaba> jaypipes: what would you suggest?
16:15 <jaypipes> nijaba: perhaps it is just me not understanding :) I was thinking of an API like GET /components, GET /components/<COMPONENT_ID>, GET /components/<COMPONENT_ID>/events, etc.
16:15 <dhellmann> what are "components"?
16:16 <dachary> dhellmann: swift, nova, etc.
16:16 <jaypipes> dhellmann: I assume a component was "nova-compute" or "nova-network", etc.
16:16 <dhellmann> dachary, is that the "source" field?
16:16 <nijaba> dhellmann: no
16:16 <woorea> for me source is the host
16:16 <nijaba> dhellmann: source should be unique per AUTH system, not per component
16:17 <dachary> it's the Component column of the above link, dhellmann
16:17 <nijaba> woorea: nor the host
16:17 <dhellmann> aha, I didn't realize that was a key piece of information
16:17 <dhellmann> why would a client want that list?
16:17 <jaypipes> dachary: may I suggest renaming "meter" to "metric"?
16:18 <nijaba> dhellmann: it was a suggestion from Doug at HP yesterday on the ML
16:18 <dachary> jaypipes: the proposal is poorly formatted because we focused on the semantics. However, I fully agree that it should be a path (or arguments, I don't mind) from which the parameters of the query are parsed.
16:19 <jaypipes> dachary: gotcha. no probs.
16:19 <dachary> jaypipes: we renamed counter to meter during the last meeting ;-) I'm OK with metric too (no strong feelings on names), but I'm not sure it will be readable.
16:19 <Weighed> Would a network xmit counter be the network traffic sent over the most recent hour, sent since the VM was booted, or sent since the VM host was booted?
16:20 * dachary not being a native English speaker does not help ;-)
16:20 <jaypipes> dachary: :) no worries
16:20 <ss7pro> Weighed: xmit is a delta
16:20 * jaypipes would prefer counter or metric to meter, but not a big deal
16:20 <ss7pro> Weighed: generally we store only deltas at this moment
16:20 <nijaba> Weighed: whatever the duration specifies, but it should be a delta from the last measure
16:21 <Weighed> So the client cannot select a duration?
16:21 <dachary> nijaba: on a delta from the last measure ;-)
16:22 <ss7pro> Weighed: the client can
16:22 <nijaba> Weighed: yes it can, in the sum API
16:22 <Guest32307> the time interval should be part of the query and drive the results
16:22 <ss7pro> But it will return the delta sum for the given period
16:22 <jd___> having a delta assumes you have the old value if you're polling an absolute counter, which may not be the case after an agent restart
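jd___'s caveat (a delta computed from an absolute counter needs the previous reading, which is lost on agent restart) is the classic counter-reset problem. One common way to handle it, sketched here hypothetically rather than as anything the meeting decided:

```python
def delta_from_absolute(previous, current):
    """Compute a traffic delta from two absolute counter readings.

    If previous is None (agent restarted: old value lost) or the
    counter went backwards (counter reset on the host), there is no
    reliable delta; return None so the caller can skip this sample
    and re-baseline from the current reading.
    """
    if previous is None or current < previous:
        return None
    return current - previous

assert delta_from_absolute(100, 150) == 50
assert delta_from_absolute(None, 150) is None  # agent restart
assert delta_from_absolute(150, 40) is None    # counter reset
```

The alternative is to store the absolute readings and let the aggregation side compute deltas, which trades storage volume for robustness across restarts.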
16:22 <nijaba> GET sum of (counter_volume, counter_duration) for counter_type and account_id
16:22 <nijaba> optional start and end for counter_datetime
16:22 <dhellmann> should the query for a list of events allow filtering by type?
16:22 <dachary> Weighed: the client must be able to select a duration. Actually I think (start + end) should be a common parameter to all queries.
16:22 <nijaba> this is what is specified in the wiki
16:23 <dhellmann> or is that implied in that you ask each meter for the list?
16:23 <nijaba> same applies to list
16:23 <nijaba> so I agree with dachary
16:24 <dhellmann> how are "get list of meter_type" and "list components meters" different?
16:24 <nijaba> dhellmann: list component meters will restrict the query to a component
16:24 <ss7pro> I agree, but we also need to decide if the end pointer is the end of the current window or the beginning of the next window (less-than vs. less-than-or-equal)
16:24 <dachary> dhellmann: I think the query for a list of events should allow filtering by type
16:25 <dachary> ss7pro: I tend to like [start,end[
16:25 <dhellmann> nijaba, I'm still trying to understand how the component part of the API is useful. I'll have to find that email thread.
16:25 * nijaba too
16:25 <ss7pro> ok, so end is an exclusive bound
16:25 <dhellmann> dachary, is that start <= timestamp < end?
16:26 <dachary> #agreed all meters have a [start,end[ (start <= timestamp < end) that limits the returned result to the events that fall in this period
16:26 <dachary> #agreed all queries have a [start,end[ (start <= timestamp < end) that limits the returned result to the events that fall in this period
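The agreed half-open window fits in a few lines. One benefit of [start,end[ that the ss7pro exchange hints at: adjacent windows tile exactly, so a timestamp on a window boundary is counted once, never twice:

```python
from datetime import datetime

def in_window(timestamp, start, end):
    """The agreed [start, end[ semantics: start <= timestamp < end."""
    return start <= timestamp < end

start = datetime(2012, 5, 1)
end = datetime(2012, 5, 10)
assert in_window(datetime(2012, 5, 1), start, end)       # start included
assert not in_window(datetime(2012, 5, 10), start, end)  # end excluded

# An event exactly at the boundary lands in the next window only.
nxt = datetime(2012, 5, 19)
boundary = datetime(2012, 5, 10)
assert not in_window(boundary, start, end)
assert in_window(boundary, end, nxt)
```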
16:27 <woorea> we need event_type (e.g. start or end) + timestamp
16:27 <woorea> not start + end + event_type
16:27 <dachary> There is one query that everyone agrees on, I think: GET /events, which returns raw events.
16:28 <ss7pro> What are raw events?
16:28 <dhellmann> nijaba, thanks
16:28 <ss7pro> a list of deltas?
16:28 <nijaba> ss7pro: what is stored in the DB
16:28 <dhellmann> ss7pro, yes, the discrete values recorded in the database
16:28 <dachary> ss7pro: sorry for being imprecise
16:28 <nijaba> ss7pro: with no aggregation
16:28 <woorea> raw is unprocessed info, just collected
16:28 <woorea> no business rules applied
16:28 <dachary> There is one query that everyone agrees on, I think: GET /events, which returns all fields for each event (as described in )
16:29 <ss7pro> what about components?
16:29 <DanD_> raw events are determined by the service you query. nova = VM state changes, network usage, block storage create/delete...
16:29 <dhellmann> the list of components is just a list of strings for the names, right?
16:29 <ss7pro> how do we link events to components?
16:29 <dhellmann> ss7pro, the meter type defines the component
16:29 <DanD_> events need a serviceTypeId associated with them
16:30 <dachary> DanD_: we're talking about the events stored in the ceilometer storage, not the events sent by the nova component (for instance)
16:30 <woorea> components generate events that are collected by "counters" (raw data) and then processed by business processes
16:30 <DanD_> I know, but you still need to have the metadata to determine what to return on a query
16:30 <ss7pro> dhellmann: But what part of the code will decide which counter belongs to which component?
16:30 <ss7pro> e.g. external network traffic?
16:31 <dhellmann> ss7pro, the code that defines the counter
16:31 <woorea> dhellmann +1
<dachary> ss7pro: that's GET list of meter_type: return the list of all meters available. It describes the available meters as shown in
16:31 <dhellmann> the thing that actually collects the data
16:31 <ss7pro> dhellmann: so it will require the collector to be able to query the openstack api
16:32 <dhellmann> I would actually prefer to leave components out of the API entirely. Focusing just on the meters would let other systems inject data for aggregation without worrying about where it comes from.
16:32 <dhellmann> ss7pro, I don't understand that conclusion
16:32 <dachary> dhellmann: the "component" part of is merely a hint
16:32 <woorea> a collector can query the openstack api, libvirt, logs, whatever
16:32 <nijaba> dhellmann: I think covering component will have little effect as long as it is an option in the query, not a requirement
16:32 <ss7pro> So how to guess what the component is for traffic to/from a single IP address?
16:33 <dhellmann> dachary, it was until we added it to the API. If the API has to be able to provide a list of components and has to know which meters are part of which component, then we have to store that information somewhere the API can find it.
<dachary> woorea: yes. And then it passes along the information to the storage that stores it as described in
16:33 <dhellmann> ss7pro, a human would look at the documentation
16:33 <DanD_> metrics need to have a set of metadata associated with them so you can determine how to apply billing farther downstream. The component or service type, along with other things like location, type, ... all contribute to the charges you will apply to the metric
16:34 <ss7pro> but how can the collector do this without an API query?
16:34 <woorea> i sent an arch diagram to the list yesterday
16:34 <dachary> dhellmann: I agree. The information in must be stored somewhere. I'm not sure where. Database? Configuration file? Configuration file specific to an API extension?
16:34 <woorea> where you can see the scopes of every component
16:34 <dachary> woorea: I missed it; could you link the mail?
16:34 <dhellmann> DanD_ why does it matter that "quantum" collected billing data for me to calculate the bill? The meter type should be enough, right? "Network traffic in/out"
16:34 <Divakar> all the metrics need to be associated with a resource
16:35 <ss7pro> Divakar: But how to guess the resource without a nova API query?
16:35 <ss7pro> If we take the example of counting traffic for a single IP address?
16:35 <dachary> DanD_: in my mind there are metadata common to all rows found in and that can be looked up using the meter_type
16:35 <dhellmann> dachary, maybe the code that defines the meters should provide a plugin for the API service to add a component name? we can work that out on the list, though.
16:35 <Divakar> one needs to have a list of resources
16:36 <DanD_> dhellmann: we charge differently depending on some of the characteristics of the service that we are metering, i.e. what data center it is in, ...
16:36 <Divakar> it can be a separate API which provides the inventory data
16:36 <dachary> Divakar: yes, that's the resource_id field of each record from
16:36 <nijaba> dachary: I think it is a good idea to always store the dictionary with the data anyway
16:37 <dhellmann> DanD_, that's a reasonable point. Somewhere in the spec there is a notion of "extra" data associated with each metering event, but that is not exposed in the aggregation API
<woorea> here's the diagram:
16:37 <dachary> dhellmann: ok
16:37 <dhellmann> ah, right, dachary, the resource_id can lead to that other information
16:38 <dhellmann> so we should be able to aggregate by resource_id
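dhellmann's "aggregate by resource_id" can be sketched with the sum query already agreed above. The events here are simplified to (resource_id, volume, duration) tuples; this is an illustration of the grouping, not the real schema:

```python
from collections import defaultdict

def sums_by_resource(events):
    """Sum (volume, duration) per resource_id, the per-VM report
    dhellmann describes wanting for customers."""
    totals = defaultdict(lambda: [0.0, 0.0])
    for resource_id, volume, duration in events:
        totals[resource_id][0] += volume
        totals[resource_id][1] += duration
    return {rid: tuple(t) for rid, t in totals.items()}

events = [
    ("vm-1", 10.0, 3600.0),
    ("vm-1", 5.0, 1800.0),
    ("vm-2", 2.0, 600.0),
]
assert sums_by_resource(events) == {
    "vm-1": (15.0, 5400.0),
    "vm-2": (2.0, 600.0),
}
```

Note this only needs the resource_id stored on each event, which is why the destroyed-instance caveat raised next matters: the id survives in the events even after the instance itself is gone.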
16:38 <nijaba> dhellmann: good point
16:38 <dachary> woorea: thanks
16:38 <dhellmann> although if a resource does not exist at the point of billing (because the instance was destroyed, for example) that might not be enough
16:38 <DanD_> dhellmann, if you don't allow the API queries to filter based on the criteria you use for billing, how do you separate the data after the fact?
16:39 <dhellmann> DanD_, also a good point. I was expecting to pull the raw data out and "translate" it to the type of data we need in our existing billing system.
16:40 <dhellmann> DanD_, is a component for you just the name "compute", or a specific instance of a compute node?
16:40 <Weighed> If a VM is disabled, would its CPU use be 0 or NaN? 0 is OK for billing needs, but for diagnostics it is good to know the difference between down and 0
16:41 <dachary> dhellmann: true. However, the billing is expected to extract metadata information independently. Otherwise we will end up replicating the full logs / archiving all events from all components and providing a database of all historical events that ever happened in openstack. I believe that was agreed on during the last meeting.
16:41 <DanD_> Depends on what you define as aggregation, I guess. If you plan to just pull relatively raw data out of the API, then that works. But if you are looking to get something like "how much large-VM usage did account X consume in data center 1", then it's harder
16:41 <dhellmann> dachary, because we are dealing with ephemeral objects, we might have to collect that data
16:41 <dhellmann> DanD_, we want to be able to report for our customers how much they spent on each VM, not just how much on a type of VM
16:41 <dhellmann> so we need both
16:41 <nijaba> Weighed: metering != monitoring: I would not do diagnostics with it
16:42 <DanD_> yes, I agree
16:42 * nijaba agrees too
16:42 <nijaba> we then need to expose resource_id to the query...
16:42 <Divakar> since the metrics are going to be provided as samples, if the VM is down for a particular period of time there will be no sample, isn't it?
16:42 <dachary> and we do
16:42 <dhellmann> DanD_, location of the resource is not something we've discussed collecting, but I think we need to add that
16:43 <DanD_> we differentiate on region, data center and availability zone, as well as the characteristics of the VM for compute
16:43 <nijaba> dhellmann: yes we do, it is the resource_id, isn't it?
16:44 <dhellmann> we might want to do the same
16:44 <dhellmann> nijaba, I thought the resource ID was the UUID of the actual object (the instance, for example)
16:44 <dachary> dhellmann: we need a pointer to the resource that is unique. That's what resource_id provides. Matching this unique id to the actual resource is outside of the scope of the metering project. If we try to fit that in, we will never complete the project, I'm afraid ;-)
16:44 <dhellmann> the billing system can only query for the other information if that object still exists, which it may not
16:44 <nijaba> dhellmann: ok, so location as in zone... got it...
16:44 <dachary> dhellmann: yes, the resource ID is the UUID of the actual object (the instance, for example)
16:45 <dhellmann> dachary, well, I'm afraid I'm with DanD_ on this one
16:45 <woorea> pull (REST API) or push (driver) are the options for a billing system to integrate with ceilometer
16:45 <nijaba> dachary: but I would think that the zone (which is a subset of the datacenter) is indeed needed
16:46 <ss7pro> woorea: billing will pull the data
16:46 <woorea> users can choose the way they want to work with ceilometer
16:46 <woorea> ss7pro: we should offer both options
16:46 <Divakar> dachary: one should be able to correlate the metering data with the resource in use, for which a unique identifier of the resource is a must-have, isn't it?
16:46 <dachary> that calls for a different storage and schema.
16:46 <nijaba> woorea: not in what we are proposing atm, but you are welcome to propose
16:46 <DanD_> if you use an external billing provider, then a pull model is not viable, not both options
16:47 <dhellmann> in order to be able to audit the billing information, the user is going to want to know the names and unique ids of the things causing the charges. We need to record that at the time the charge is incurred. Each meter type will need to define what that data is
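dhellmann's audit point (record names and unique ids when the charge is incurred, because the resource may be destroyed before billing runs, and let each meter type define what that data is) amounts to snapshotting a per-meter metadata dict into each event. A hypothetical sketch; all field names are invented for illustration:

```python
from datetime import datetime

# Hypothetical: each meter type declares which identifying fields it
# snapshots into the event at collection time, so billing can be
# audited even after the resource (e.g. a destroyed instance) is gone.
METER_AUDIT_FIELDS = {
    "instance": ("display_name", "instance_type", "availability_zone"),
}

def make_event(meter_type, resource_id, volume, resource_info):
    fields = METER_AUDIT_FIELDS.get(meter_type, ())
    return {
        "meter_type": meter_type,
        "resource_id": resource_id,
        "volume": volume,
        "timestamp": datetime.utcnow().isoformat(),
        # A snapshot taken now, not a live lookup at billing time.
        "resource_metadata": {k: resource_info.get(k) for k in fields},
    }

evt = make_event("instance", "uuid-1", 1.0,
                 {"display_name": "web01", "instance_type": "m1.large",
                  "availability_zone": "az1"})
assert evt["resource_metadata"]["display_name"] == "web01"
```

This is the trade-off dachary warns about just below: the more of this snapshot you store, the closer the store drifts toward an archive of everything that ever happened.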
16:47 <nijaba> woorea: but that should be done via the ML and discussed in a separate meeting
16:47 <woorea> nijaba: suppose that ceilometer is not visible from outside
16:47 <ss7pro> +1 dhellmann
16:47 <woorea> nijaba: ok
dhellmannDanD_, we're building a bridge to pull data out of ceilometer and push it into our existing billing system.16:47
dhellmannprobably just a cron job16:47
dacharyI think we must acknowledge that one hour won't be enough to resolve this. We will need to keep discussing this on the list and resolve the points that were raised.16:48
dhellmannyou need a translation layer between those two pieces anyway because they are likely to have different views of the data16:48
nijabadachary: +116:48
dhellmanndachary, +116:48
DanD_that's basically what we do as well. The benefit of exposing a push model would be that it would provide some leverage to get billing providers to conform16:48
nijabadhellmann: would you take the action to reformulate the API proposal as a start point for the dicussion on the ML?16:48
dacharyI will post a summary of this discussion to the list so that we can start independant threads to address each issue. Do you agree on this ?16:49
nijabadhellmann: thanks :)16:49
dacharyok :-)16:49
dacharydhellmann: the action is on you, thanks ;-)16:49
dachary#action dhellmann reformulate the API proposal as a start point for the dicussion on the ML.16:49
*** lloydde has quit IRC16:50
dhellmann#action dhellmann: reformulate the API proposal as a start point for the dicussion on the ML16:50
nijabadachary: I think we need to push other topics by one week as a consequence....16:50
dacharyThat will give me time to think about the need to store meta data information and revisit the storage if it needs to be.16:50
*** Ravikumar_hp has joined #openstack-meeting16:50
dacharynijaba: +116:50
dhellmanndachary, +116:50
dachary#action dachary push next meetings one week16:50
ss7prodachary: metadata is needed16:50
*** Mandell has joined #openstack-meeting16:51
ss7procounting network traffic is a typical example16:51
dacharyss7pro: I see why it is. I can't figure out how it will actually work.16:51
dacharyss7pro: yes16:51
*** lloydde has joined #openstack-meeting16:51
nijabadachary: dictionary is the extended schema with data definition = metadata.16:51
dacharyit's easy to say: this is outside of the scope. But it makes it a lot more difficult for the billing.16:52
nijabadachary: what does?16:52
dacharynot storing metadata in the storage makes it more difficult for the billing to figure out what a resource_id relates to16:52
ss7prodachary: There's also one more thing that ip addresses assigned to instances may change. We need also to track which ip address belong to which instances as this data is not available now16:52
dacharyss7pro: that's what I'm afraid of. The extent of the "required metadata" is virtually boundless.16:53
nijabadachary: agreed on the metadata16:54
dachary(not sure it's proper english but you get my meaning ;-)16:54
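To illustrate dachary's point about what storing metadata buys the billing side: if each meter record snapshots resource metadata at the time the charge is incurred, billing can explain what a bare resource_id refers to. The record schema below is illustrative only, not ceilometer's actual format.

```python
def describe_charge(record):
    """Explain a charge using the metadata snapshot stored with the meter."""
    meta = record.get("resource_metadata", {})
    name = meta.get("display_name", record["resource_id"])
    return "%s (%s): %s %s" % (name, record["resource_id"],
                               record["counter_volume"],
                               record["counter_name"])


record = {
    "resource_id": "a1b2",
    "counter_name": "network.bytes.out",
    "counter_volume": 1024,
    "resource_metadata": {"display_name": "web-frontend",
                          "fixed_ip": "10.0.0.5"},
}
line = describe_charge(record)
```

Without the metadata snapshot, the same function could only echo the opaque resource_id back at the customer.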
dacharyunless someone wants to add something, are we done ?16:54
nijabass7pro: why would we care about this?  will you bill differently based on which instance an IP is attached to?16:54
*** dwalleck has joined #openstack-meeting16:54
dacharynijaba: maybe not, but it's information that is valuable to the customer. To track bandwidth usage, for instance.16:55
DanD_we only bill based on internal and external traffic, but I could see where you would charge for incremental addresses16:55
*** epim has joined #openstack-meeting16:55
dacharyDanD_: yes :-)16:55
ss7pronijaba: You need to know which customer generated traffic.16:55
dacharyss7pro: you will know that because tenant_id / project_id is part of the record.16:56
ss7proSo if instances are changing ip address (this is possible with quantum) you need to be sure that you charge the right customer16:56
nijabaDanD_: floating ip billing: yes, but billing per floating per instance_type seems far fetched16:56
dacharyeach meter is associated to a tenant.16:56
ss7prodachary: But collector needs to be aware of it, so it'll need to query nova API each time it's doing collection16:56
DanD_we have a component that filters traffic based on IP and tracks the total bytes16:57
dhellmannDanD_, we will probably be doing that, too16:57
dacharyand we will do that too.16:57
dacharyOur customers will want to know which IP is responsible for the most of the bandwidth used.16:58
*** nati has joined #openstack-meeting16:58
DanD_thats a lot harder16:58
dacharythank you for your participation. That was a very rich session :-)16:58
nijabatoo rich maybe ;)16:58
ss7proIt's also needed to differentiate between internal and external traffic16:58
nijabathanks all!16:58
dhellmannthis is a complicated problem. :-)16:58
dacharynijaba: it shows the problem is not resolved ;-) That was no troll session.16:59
ss7prodhellmann: which problem ?16:59
*** openstack changes topic to "Status and Progress (Meeting topic: keystone-meeting)"16:59
openstackMeeting ended Thu May 10 16:59:20 2012 UTC.  Information about MeetBot at . (v 0.1.4)16:59
openstackMinutes (text):
*** ss7pro has left #openstack-meeting17:00
*** rohitk has joined #openstack-meeting17:00
*** JoseSwiftQA has joined #openstack-meeting17:00
*** Weighed has quit IRC17:00
*** donaldngo_hp has joined #openstack-meeting17:01
dwalleckhey QA folks, ready to get started?17:01
fattarsilet's do it17:01
openstackMeeting started Thu May 10 17:01:38 2012 UTC.  The chair is dwalleck. Information about MeetBot at
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.17:01
*** joe-savak has quit IRC17:01
*** Divakar has quit IRC17:01
dwalleck#topic Action items: getting the smoke test branch in gerrit17:02
*** openstack changes topic to "Action items: getting the smoke test branch in gerrit"17:02
*** ryanpetr_ has quit IRC17:02
dwalleckWhich Jay has done :) Pretty slick stuff17:02
rohitkyes, im waiting for that to go through :)17:02
*** derekh has quit IRC17:02
davidkranzWhat needs to happen to get it in?17:02
*** woorea has left #openstack-meeting17:03
*** longshot has joined #openstack-meeting17:03
dwalleckOne more review technically. I just wanted to give it enough time for folks to see things and get comfortable17:03
fattarsias it is I think it will conflict with my recent review17:03
rohitkdwalleck ++17:03
dwalleckOnce everyone's good, we should be good to go17:04
*** mnewby has joined #openstack-meeting17:04
dwalleckAny more questions/thoughts on the smoke tests?17:04
*** mnewby has quit IRC17:04
*** mnewby has joined #openstack-meeting17:05
rohitkWould we be getting rid of the decorators17:05
rohitkafter having the base Smoke class?17:05
*** dhellmann has quit IRC17:06
jaypipesrohitk: my thought was yes, because the decorators can be reused for other things (like positive vs negative, etc)17:06
jaypipesrohitk: and the base Smoke test class can automatically decorate its test methods with a smoke attr17:06
dwalleckrohitk: That depends. Personally, I internally use decorators to break my test groups down into finer grained groups, but we can leave that up to individual groups to tag if we like17:06
*** hggdh has joined #openstack-meeting17:06
rohitkfrankly I am not too comfortable with the attr decorators17:06
jaypipesrohitk: they are not consistently applied right now17:06
rohitkunless, they are used consistently17:06
jaypipesrohitk: and having the base classes decorate automatically solves that problem..17:07
rohitkjaypipes +117:07
dwalleckWell, we're setting standards now (and we should probably document them), and we'll follow them from this point forward17:07
jaypipesrohitk: and allows the attrs to be more specifically used for targeting other things17:07
jaypipesdwalleck: +++17:07
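A sketch of the auto-decoration jaypipes describes: a base smoke test class that tags every test_* method with the attribute that @attr(type='smoke') would otherwise set per method, so a nose-style attribute filter can select smoke tests consistently. The class names are hypothetical, and the use of __init_subclass__ is an assumption, not whatever mechanism Tempest actually adopted.

```python
import unittest


class BaseSmokeTest(unittest.TestCase):
    """Base class that tags every test_* method as a smoke test."""

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        for name, value in list(vars(cls).items()):
            if name.startswith("test_") and callable(value):
                # same effect @attr(type='smoke') would have per method
                value.type = "smoke"


class ServerSmokeTest(BaseSmokeTest):
    def test_list_servers(self):
        pass
```

This frees the explicit attr decorators for other targeting, e.g. positive vs. negative or bug linkage, as discussed below in the meeting.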
dwalleckcan't fix the past, just the future :)17:08
jaypipesdwalleck: for instance, I'd like to have a @attr(bug=XXXX) standard where we can run tests based on a failing bug report, etc17:08
dwalleckWe can even have further discussions about what attrs make sense so that we're consistent17:08
dwalleckjaypipes: ++17:08
dwalleckdefinitely agree17:08
jaypipesdwalleck: and I'd rather not have to do: @attr(type='smoke', cls='positive', bug=XXXX) :)17:08
jaypipesgets very verbose ;)17:09
rohitkjaypipes: that would be helpful. tag tests based on test type/ bug linkage, etc17:09
jaypipesrohitk: right17:09
dwalleckSounds good to me17:09
jaypipescoolio. I will add that to the merge prop for the smoke tests (the auto-decorate thing...)17:09
jaypipesdavidkranz: FYI, stress test merge prop review done17:10
dwalleckAnd is everyone okay with using the @attr(bug=XXXX) for now?17:10
*** garyk has joined #openstack-meeting17:10
JoseSwiftQAI like it17:10
davidkranzjaypipes: Great. I'll patch it after the meeting.17:10
Ravikumar_hpi like it17:10
jaypipesdwalleck: I am, obviously :)17:10
dwalleckWell there you have it then, done and done :)17:11
fattarsithis would decorate each test?17:11
jaypipesfattarsi: it would decorate tests that were specifically hindered by a bug upstream17:11
dwalleckfattarsi: Only tests that expose/exercise a bug17:11
*** markmcclain has quit IRC17:11
jaypipesfattarsi: what dwalleck said :)17:11
rohitkthose tests also need to be skipped right?17:11
fattarsiok cool17:11
jaypipesrohitk: until they are fixed, yep17:12
rohitkjaypipes: got it17:12
dwalleckskipped or failed, but there was still some discussion around which would be preferred...17:12
jaypipesrohitk: the bug=XXX decorator is more of an easy way to "run the test to check if bug XXX is now fixed"17:12
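The @attr(bug=XXXX) idea can be sketched as a keyword-argument decorator that stashes attributes on the test function so a runner can later select "all tests tagged with bug N". This shape is an assumption for illustration, not Tempest's actual implementation.

```python
def attr(**kwargs):
    """Store arbitrary attributes on a test function for later selection."""
    def decorator(func):
        for key, value in kwargs.items():
            setattr(func, key, value)
        return func
    return decorator


# tag a test that exercises a known upstream bug
@attr(type="negative", bug=997685)
def test_role_create_blank_name():
    pass
```

A runner can then filter on `getattr(test, "bug", None)` to re-run exactly the tests blocked by a given bug once a fix lands.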
dwalleckOkay, on we go then17:13
davidkranzdwalleck: I think we need to skip if we are gating trunk, at least until tempest is embraced by other teams.17:13
jaypipesdavidkranz: that is correct.17:13
dwalleck#topic Outstanding code reviews17:13
*** openstack changes topic to "Outstanding code reviews"17:13
jaypipesRavikumar_hp: would you mind communicating with rajalakshmi about holding off on the volume filters merge prop for now?17:13
Ravikumar_hpjaypipes: sure . right now she is working on some other task17:14
jaypipesRavikumar_hp: dwalleck commented correctly on that merge prop that the API hasn't actually caught up to the proposed filtering functionality yet.17:14
jaypipesRavikumar_hp: gotcha17:14
dwalleckdavidkranz: I'm looking at yours today as well. I think I also reviewed anything that was still pending a review17:14
Ravikumar_hpso we will hold on the volume attachment test17:14
davidkranzI would like to see the volumes attach stuff that has the drive letter problem go in. For now it could just use vdk,vdq,vdx until we have a better fix. That is probably safe.17:15
jaypipesfattarsi: did you catch my comment on #openstack-dev about assigning you to
uvirtbotLaunchpad bug 997685 in tempest "tests.identity.test_roles.RolesTest.test_role_create_blank_name Fails" [High,Confirmed]17:15
davidkranzThere is a volume stress test coming that will use this code.17:15
jaypipesdavidkranz: ++17:15
fattarsijaypipes: yes, is there a bug filed in keystone about this?17:15
dwalleckAnd the more folks who look at this, the better:
fattarsijaypipes: now that I look I cannot find one17:15
jaypipesfattarsi: I don't know yet if it is a bug in Keystone or not :)17:16
jaypipesdwalleck: will do that shortly.17:16
davidkranzdwalleck: I like the ssh thing but the issues of getting an address and ssh credentials are still a problem.17:17
rohitkThere are quite a few negative scenarios in keystone where expected Error codes are not returned17:17
fattarsijaypipes: in the meantime you think I should just skip that test until confirmed?17:17
davidkranzI can't run it now and the ubuntu images do not accept user/password to ssh as far as I can tell.17:17
fattarsijaypipes: then it won't be the only test failing17:17
dwalleckdavidkranz: But it's more of a deployment problem, not a test problem. That's why I added an attr for them, because of that very situation17:17
davidkranzIf it is going in without solutions to those problems there needs to be a config to skip the ssh part.17:18
*** mcclurmc_ has quit IRC17:18
davidkranzdwalleck: How do I use the attr to turn it off?17:18
jaypipesfattarsi: no, let's find out if it really is a mismatch of spec /bug in Keystone and work with the Keystone folks on a fix. In the meantime, sure, we can do a @skip("Bug XXX not fixed in Keystone"), sure17:18
davidkranzdwalleck: Sorry for my python/nose lameness.17:18
*** dachary has quit IRC17:18
dwalleck-a type!=ssh, which probably isn't the right way to look at it. A config might be a better option, like we've done with resize17:19
davidkranzdwalleck: ++17:19
dwalleckI'm just very anxious to get this branch in as I have quite a large chunk of code I can submit once it's in17:19
dwalleckCool, I'll do that17:19
davidkranzdwalleck: OK with me.17:19
jaypipesdwalleck: ++17:19
dwalleckAnd default it to not run, just in case17:19
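A sketch of the config-gated approach dwalleck settles on: the ssh verification is skipped unless a config flag enables it, defaulting to off so a deployment without ssh-able images still passes. The config object and flag name are made up, not tempest's real config.

```python
import unittest


class TempestConfig(object):
    """Hypothetical stand-in for the test config; ssh checks off by default."""
    run_ssh = False


config = TempestConfig()


class ServerSSHTest(unittest.TestCase):
    def test_can_ssh_to_server(self):
        if not config.run_ssh:
            raise unittest.SkipTest("ssh verification disabled in config")
        # would connect to the instance over ssh and run a command here


result = unittest.TestResult()
unittest.defaultTestLoader.loadTestsFromTestCase(ServerSSHTest).run(result)
```

With the default config the test shows up as skipped, not failed, which matches the gating concern davidkranz raises above.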
dwalleckSpeaking of merge props....17:20
dwalleck#topic Swift Tests17:20
*** openstack changes topic to "Swift Tests"17:20
davidkranzdwalleck: We might need another version of the ssh config to use keys. But that can wait.17:20
JoseSwiftQAwhat up17:20
JoseSwiftQAah, yes17:20
dwalleckJoseSwiftQA: you are =P17:20
JoseSwiftQAwhat I want to push is mostly code complete, needs some very minor additions + pep8 attention.  Should have it submitted by day's end with dwalleck's help.17:21
davidkranzJoseSwiftQA: Are you doing anything with swift ACLs?17:21
dwalleckWhich will be great to have in :) Good job man17:21
jaypipesJoseSwiftQA: nice work.17:21
JoseSwiftQANot in tempest, yet.  It's technically just adding metadata, but I want to add 'helper' functions for all that kind of stuff too17:22
JoseSwiftQApossibly in a 'middleware' client of some sort?17:22
jaypipesJoseSwiftQA: how would that work?17:22
davidkranzJoseSwiftQA: OK. When you figure out what the spec for that stuff is, please let us know :)17:22
JoseSwiftQAfor things like tempurl generation, you have to do some stuff with passwords, keys, hmac etc that are tedious and static17:23
JoseSwiftQAso I want to write a helper function to do that, but it really shouldn't live in object, container, or account17:23
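The tempurl helper JoseSwiftQA mentions could look like the following. It follows Swift's documented tempurl signing scheme, an HMAC-SHA1 over the method, expiry timestamp, and object path keyed with the account secret, though the helper's name and where it would live are assumptions.

```python
import hmac
from hashlib import sha1


def generate_tempurl_sig(method, expires, path, key):
    """Sign a Swift temp URL request: HMAC-SHA1 over method, expiry, path."""
    hmac_body = "%s\n%s\n%s" % (method, expires, path)
    return hmac.new(key.encode(), hmac_body.encode(), sha1).hexdigest()


path = "/v1/AUTH_test/container/object"  # hypothetical account and object
sig = generate_tempurl_sig("GET", 1526400000, path, "secret-key")
temp_url = "%s?temp_url_sig=%s&temp_url_expires=%d" % (path, sig, 1526400000)
```

Keeping this in a helper module rather than in the object, container, or account clients matches the "middleware client" placement floated above.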
dwalleckI'll make sure he survives the first commit process :)17:24
JoseSwiftQA^^main concern numero uno :D ^^17:24
uvirtbotJoseSwiftQA: Error: "^main" is not a valid command.17:24
dwalleck#topic Documenting/Reporting functional test coverage17:24
*** openstack changes topic to "Documenting/Reporting functional test coverage"17:24
dwalleckTricky stuff....17:25
dwalleckBut I saw this, and I generally like the concept for tracking functional test coverage
dwalleckThis feels right because it's a type of coverage you can manage regardless of where the dev cycle is17:26
*** rohitk has quit IRC17:26
dwalleckSo for example, for Nova I came up with attributes of functional, secure, robust, and responsive17:26
egallen /buffer #openstack-metering17:26
dwalleckWhich is much more descriptive than just positive/negative17:27
*** egallen has quit IRC17:27
jaypipesdwalleck: but what is the definition of responsive? :)17:27
jaypipesdwalleck: do we decorate methods with an expected time to complete?17:28
dwalleckjaypipes: Good question!17:28
jaypipesdwalleck: something like @attr(ttc<=2.0)17:28
dwalleckjaypipes: oh no no, I didn't mean to use these as decorators necessarily (though we could)17:28
jaypipesdwalleck: or just create a new decorator like @complete_in_less_than(2.0)17:28
Ravikumar_hpdo we fail the test if ttc is not met17:28
jaypipesRavikumar_hp: good question17:29
*** darraghb has quit IRC17:29
dwalleckI meant from a higher level, if someone asks us right now how much of each application Tempest covers...well, I can make up numbers :)17:29
davidkranzRavikumar_hp: I think the situations with testing on "real deploys" and virtual infrastructure are very different in this regard.17:29
jaypipesdwalleck: sorry to get you off track... what were your thoughts on how to apply ACC to Tempest tests?17:29
dwalleckBut if I can map components/capabilities to attributes, I can show happy colored heatmaps that show where I have the least/most testing17:30
dwalleckThis still isn't perfect, but it helps more than me telling my management I have x smoke tests, y positive, z negative17:31
dwalleckMaybe I say I have 0 tests under nova api stability...they might freak :)17:31
dwalleckBut if I just say I have positive or negative tests, there's no context17:31
dwalleckI just wanted to bring this up. It's definitely not a finished thought yet, but out of everything I could think of, it was the thing I hated the least :)17:32
dwalleckfood for thought17:32
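One way to read dwalleck's ACC-style idea: tag each test with a component and an attribute (functional, secure, robust, responsive), then tally them into a component-by-attribute grid that can be rendered as a heatmap. The tags and test names below are invented for illustration.

```python
from collections import defaultdict

ATTRIBUTES = ["functional", "secure", "robust", "responsive"]

# each test carries a component and an attribute tag
tests = [
    {"name": "test_create_server", "component": "servers",
     "attribute": "functional"},
    {"name": "test_bad_token_rejected", "component": "servers",
     "attribute": "secure"},
    {"name": "test_list_images", "component": "images",
     "attribute": "functional"},
]

# component -> {attribute: test count}; zero cells reveal coverage gaps
coverage = defaultdict(lambda: dict.fromkeys(ATTRIBUTES, 0))
for t in tests:
    coverage[t["component"]][t["attribute"]] += 1
```

A zero in a cell ("0 tests under nova api stability") is exactly the kind of signal dwalleck says raw positive/negative counts cannot give.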
dwalleck#topic Development Blueprints for Folsom release, and test coverage for those implementations17:33
*** openstack changes topic to "Development Blueprints for Folsom release, and test coverage for those implementations"17:33
dwalleckRavikumar_hp: you're up!17:33
Ravikumar_hpmaybe this is a question:17:33
Ravikumar_hpHow are we tracking progress of development blueprints and making progress on addressing/adding test cases for those blueprint tasks?17:33
Ravikumar_hpbasically we want to address all blueprints by sharing work17:34
*** ohnoimdead has joined #openstack-meeting17:34
jaypipesRavikumar_hp: you're talking about which blueprints?17:34
jaypipesRavikumar_hp: new stuff coming in Folsom in Nova/Glance/Swift, etc?17:34
Ravikumar_hpFolsom release - blueprints and development based on those blueprints17:34
jaypipesI see now...17:35
jaypipesRavikumar_hp: first thing we need is a decent list of those blueprints that are currently in progress.17:35
davidkranzRavikumar_hp: I would like to make sure we don't have tempest spending a lot of time duplicating stuff that is covered by unit tests that will go with these new projects.17:36
davidkranzHow do we draw that line?17:36
jaypipesdavidkranz: by ensuring that the developer does unit tests and QA does the functional test?17:36
davidkranzjaypipes: Right. But sometimes it is not so easy.17:37
davidkranzjaypipes: In pre-openstack life I always had QA people working more closely with the dev team than we seem to have here.17:37
*** gyee has joined #openstack-meeting17:38
Ravikumar_hpdavidkranz: ++ . I see a gap here.17:38
davidkranzIdeally, each blueprint would have a test plan which would greatly help with this issue.17:38
davidkranzThat plan could talk about what non-unit tests were needed.17:39
dwalleckSo if I can map blueprints to stories my developers are playing, I can do some mapping as I can17:39
jaypipesdavidkranz: I think that the QA team just needs to be more aggressive in working with the developers on functional and integration tests (and test plans) while development is going on (and right after development is complete)17:39
Ravikumar_hpDevelopment can add those (test plan, non-unit tests) in blueprints17:40
davidkranzjaypipes: I agree, but the result needs to be written down somewhere17:40
jaypipesRavikumar_hp: why wouldn't the QA team add that to the blueprints?17:41
Ravikumar_hpsure . should we involve development to review that?17:42
jaypipesRavikumar_hp: yes, of course. get the dialog going now rather than later...17:42
*** hggdh has quit IRC17:42
jaypipesthe dialog can be on the blueprints and IRC of course...17:42
*** anderstj has joined #openstack-meeting17:43
Ravikumar_hpjaypipes: ok17:43
dwalleck#topic open discussion17:44
*** openstack changes topic to "open discussion"17:44
dwalleckWhat else folks?17:44
davidkranzI will be on vacation the next two weeks.17:44
jaypipesnothing from me17:44
jaypipesdavidkranz: enjoy!17:45
jaypipesRavikumar_hp: you are next week's QA Captain, FYI...
davidkranzI would like to look at the auto-reclaim of resources when I get back.17:45
dwalleckdavidkranz: I agree. I'm still bouncing implementations around in my head17:46
davidkranzdwalleck: OK. That's all from me.17:47
dwalleckGoing once?17:47
*** openstack changes topic to "Status and Progress (Meeting topic: keystone-meeting)"17:47
openstackMeeting ended Thu May 10 17:47:32 2012 UTC.  Information about MeetBot at . (v 0.1.4)17:47
openstackMinutes (text):
dwalleckHave 13 minutes of your day back :)17:47
*** dwalleck has quit IRC17:49
*** longshot has quit IRC17:51
*** whitt has quit IRC17:51
*** donaldngo_hp has quit IRC17:51
*** JoseSwiftQA has quit IRC17:51
*** flacoste has left #openstack-meeting17:57
*** pvo is now known as pvo-away17:59
openstackMeeting started Thu May 10 18:01:37 2012 UTC.  The chair is jgriffith. Information about MeetBot at
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.18:01
jgriffithRoll call?18:01
jgriffithWow.. nobody?18:03
DuncanTPresent, couldn't possibly comment on being correct18:03
jgriffithDuncanT: pheww, just go with correct18:03
jgriffithWell I guess this might be rather short unless we get some folks late18:04
jgriffith#topic last weeks action items18:04
*** openstack changes topic to "last weeks action items"18:04
jgriffithrnirmal caught up with me this morning, and unfortunately didn't get much done last week18:04
jgriffithvladimir: you here?18:05
jgriffithThose were two we had, sounds like no updates18:05
MandellSorry I'm late.18:05
jgriffithMandell: no problem, glad you're here18:05
jgriffithSo vladimir was going to work on openstack common, haven't seen anything there18:06
jgriffithrnirmal was working on some compute tear out and api work, but didn't get to it last week18:07
jgriffithI've pushed a change to drop the instance relationship from volumes, need review :)18:07
jgriffithMandell: thanks!18:07
jgriffithStill needs to be approved etc18:07
DuncanTI saw that, haven't had time to look yet but will do tomorrow if not before18:08
jgriffithDuncanT: thanks!  much appreciated18:08
jgriffithwhich brings me to the next topic...18:08
jgriffith#topic reviews18:08
*** openstack changes topic to "reviews"18:08
*** mcclurmc_ has joined #openstack-meeting18:09
*** hggdh has joined #openstack-meeting18:09
jgriffithSo there hasn't been a "ton" of activity but...18:09
jgriffithThere are a couple items that have been submitted and just sit18:09
jgriffithvishy: Yo18:09
jgriffithJust want to make sure folks that signed up for core are able to help out here18:09
*** asdfasdf has joined #openstack-meeting18:10
jgriffithDoesn't seem right for me to review, +1, +2 and A my own changes :)18:10
DuncanTWe have out public-beta go-live today so it has been kind of hectic... will definitely make time to keep up18:10
jgriffithDuncanT: No worries, I understand18:10
jgriffithJust wanted to put a reminder out18:10
jgriffith#topic current status for cinder18:11
*** openstack changes topic to "current status for cinder"18:11
*** mcclurmc_ has left #openstack-meeting18:11
DuncanTIs there a dashboard for outstanding cinder reviews?18:11
DuncanTI've not used gerrit in anger all that much....18:11
jgriffithDuncanT: You're core so you should be getting emails, else just look in gerrit18:11
jgriffithNot a ton there but want to form good habits :)18:12
*** jakedahn_zz is now known as jakedahn18:12
jgriffithSo I've been doing some work on separating instance relationships and working on cinderclient18:12
jgriffithvishy: has been doing a bunch of work on the compute tear out18:13
jgriffithvishy: updates?18:13
*** nati has quit IRC18:13
jgriffithOr any updates on work other folks are doing/planning?18:14
vishyjgriffith: haven't been working on anything yet :(18:14
DuncanTI've only gotten as far as pulling the code and getting the unit tests working, and that was only today18:15
vishyjgriffith: but I still have plans to do step 3 on the blueprint18:15
jgriffithvishy: :)18:15
jgriffithDuncanT: unit tests should have already been working?18:15
jgriffithDuncanT: did you have an issue running them?18:16
DuncanTjgriffith: Yeah, I was getting them working without virtual env, so I can package cinder for our test infrastructure18:16
DuncanTvirtual env worked out of the box, dpkg based run didn't take long18:16
*** DanD_ has quit IRC18:16
jgriffithDuncanT: ok18:17
DuncanTI'll put some packaging files somewhere accessible once they are working18:17
jgriffithDuncanT: great18:17
jgriffithAlso need volunteers from core to hit mtaylors change in gerrit as well18:17
vishyjgriffith: has anyone tried updating python-cinderclient yet?18:18
jgriffithMine can wait until it hits nova18:18
vishythat should be a chunk of work18:18
jgriffithvishy: I started working on that about 30 minutes ago18:18
jgriffithand yes, it seems like a good chunk18:18
vishynice, you probably need to pull in all of the changes from novaclient18:18
vishyand then we can reset the repo like we did for cinder18:18
jgriffithvishy: I'm currently putting in fakes for the delete/list/detail tests18:18
jgriffithvishy: Ahh... repo is set18:19
jgriffithvishy: Sorry, I should've mentioned18:19
jgriffithmtaylor apparently had the forethought to pull your version in as the base  :)18:19
jgriffithvishy: Oh, I think I see what you're saying now.18:20
vishyjgriffith: i mean get rid of annoying history :)18:20
jgriffithRather than use the base you had set up, pull in novaclient and rip apart18:20
vishyjgriffith: no that is what i did18:20
vishybut there have been a number of fixes/changes to novaclient since i split18:20
jgriffithvishy: I'm with ya... take three or four tries but I catch up18:20
vishywhich it would be nice to get in18:20
mtaylorvishy, jgriffith do you guys want/need to start the cinderclient repo over like we did for cinder?18:21
jgriffithvishy: I planned to work on that next assuming my instance ref's patch doesn't get rejected :)18:22
vishyjgriffith: i'm reviewing it now18:22
jgriffithmtaylor: Yes, but after I make some changes today and tomorrow18:22
vishylooking good so far18:22
* jgriffith sigh of relief18:22
jgriffithmtaylor: BTW sorry about the "false" bug to set up the repo :(18:23
jgriffith#action jgriffith get cinderclient up to speed and ask mtaylor to reset repo when ready18:23
jgriffithanybody else looking for ways/places to help out?18:24
jgriffithJust a reminder we had a goal of having base functionality at F1 I think, which is approaching very quickly18:24
jgriffithok, one more thing from my side on cinder...18:26
MandellAll I can commit to for the next couple weeks is reviews, but I'll commit to one a day.18:26
jgriffithMandell: Reviews are a huge help so that's great18:26
jgriffithSo there was a submission for auto cleanup of orphaned volumes:18:26
jgriffithI wanted to get other volume folks opinion...18:27
jgriffithI'd rather see this manual or configurable via conf file?18:27
DuncanTDefinitely don't like it being manual18:28
DuncanTJust reading through the patch, but I can't see it being something we want turned on18:29
jgriffithDuncanT: You mean automatic?18:29
jgriffithDuncanT: Ahh ok18:29
DuncanTPossibly at all18:30
DuncanTIf users want to keep volumes lying around, who are we to argue?18:30
jgriffithDuncanT: yeah, but I can see the desire to go through and do cleanup but I don't like the idea of having it take place on its own18:30
*** darraghb has joined #openstack-meeting18:30
jgriffithProviders may like that depending on how they do their billing :)18:30
DuncanTPossibly an api to list 'orphan' volumes?18:30
DuncanTI think that can be done purely in the client TBH18:31
jgriffithDuncanT: Actually that's a better approach than what I had in mind, and yes it can18:31
*** rnirmal_ has joined #openstack-meeting18:31
*** patelna has joined #openstack-meeting18:31
jgriffithDuncanT: The needed functionality is already there, just query via the api and delete if you want18:32
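DuncanT's client-side approach can be sketched as a filter over the volume list returned by the API: list, pick out volumes that are available and attached to nothing, and let the operator decide what to delete. The record fields and the python-cinderclient call in the comment are illustrative stand-ins, not a committed interface.

```python
def find_orphan_volumes(volumes):
    """Volumes that are available and attached to nothing."""
    return [v for v in volumes
            if v["status"] == "available" and not v["attachments"]]


# hypothetical shape of records returned by a volume-list API call
volumes = [
    {"id": "vol-1", "status": "in-use", "attachments": ["instance-1"]},
    {"id": "vol-2", "status": "available", "attachments": []},
]
orphans = find_orphan_volumes(volumes)
# an external script could then choose to delete each orphan via the client
```

Keeping this outside the service avoids the auto-delete danger vishy and DuncanT object to: nothing is removed unless a deployer explicitly runs it.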
vishyjgriffith: reviewed, have three extra chunks of code that can be deleted, otherwise looks great18:32
*** ryanpetrello has joined #openstack-meeting18:32
jgriffithvishy: awesome THANKS!!!18:32
DuncanTjgriffith: Agreed18:32
vishyautoclean is not good18:33
jgriffithDuncanT: Ok, I'll update my review and put our reasoning in18:33
vishyat least in nova.  I don't mind a deployer doing it externally18:33
*** oubiwann1 has joined #openstack-meeting18:33
jgriffithvishy: I didn't think so... exactly, external is easy enough18:33
vishyif a user creates a volume and doesn't use it that is fine18:33
vishya task to log a notification or something would make sense18:34
jgriffithOk, I'll update my review of it accordingly but that's where I was headed with it18:34
vishybut deleting it seems really bad.18:34
*** hggdh has quit IRC18:34
*** markmcclain has joined #openstack-meeting18:34
jgriffithI thought it seemed really dangerous personally, especially if it's not at least configurable (on/off/interval)18:34
jgriffithbut as DuncanT pointed out it's really not necessary to have in the code at all18:35
*** dhellmann has joined #openstack-meeting18:35
jgriffithJust external via the api can accomplish for somebody that wants it18:35
*** rnirmal has quit IRC18:35
*** rnirmal_ is now known as rnirmal18:35
jgriffithrnirmal: you made it :)18:35
rnirmalwell logged in..but still in another meeting18:36
jgriffithAnybody else have anything they want to bring up?  Questions/Concerns ?18:36
jgriffithIs this meeting worthwhile for you?18:37
jgriffithThe nice thing about no response is I can interpret it any way I like :)18:38
DuncanTIt's good to see progress, and also means there is at most a week between kicks, so yes18:38
*** vladimir3p has joined #openstack-meeting18:39
jgriffithDuncanT: cool... I'll be here no matter what so just want to make sure folks don't want to see something different18:39
jgriffithor "need" something different18:39
jgriffithI do believe the pace will start picking up significantly with each week18:39
jgriffithvladimir3p: Hey there18:40
vladimir3psorry :-)18:40
jgriffithvladimir3p: Any updates on openstack-common?18:40
vladimir3phad a couple of customer meetings18:40
vladimir3pnope :-)18:40
jgriffithvladimir3p: NP18:40
jgriffithvladimir3p: :)18:41
jgriffithAlright, well I was going to go ahead and wrap up if nobody has anything....18:41
jgriffithvladimir3p: if you want to catch up we can do so offline or you can grab the minutes off of eavesdrop18:41
vladimir3pjgriffith: it will be great to talk offline - might be faster18:42
jgriffithOh... one other thing, I think the etherpad is probably not the place to add anything any longer18:42
jgriffithvladimir3p: sounds good18:42
rnirmallet's start opening blueprints and bugs to work on items18:42
jgriffithrnirmal: +118:42
jgriffiththe etherpad is still good reference for some stuff, but anything new anybody wants to grab we start using the formal process18:43
jgriffithOk... well if there's nothing else?18:43
jgriffithAlright, thanks everyone.  Ping me if you think of anything18:44
*** openstack changes topic to "Status and Progress (Meeting topic: keystone-meeting)"18:44
openstackMeeting ended Thu May 10 18:44:24 2012 UTC.  Information about MeetBot at . (v 0.1.4)18:44
openstackMinutes (text):
DuncanTjgriffith:  Just finished reviewing your change :-)18:44
jgriffithHey... two weeks in a row of having internet coverage for the whole meeting18:44
jgriffithDuncanT: Thanks!!18:44
DuncanT1 minor nit, otherwise looks good18:45
jgriffithDuncanT: yeah, I pulled in changes from nova that aren't "needed" :)18:45
jgriffithI'll nuke it and resubmit, THANKS18:46
*** Gordonz has quit IRC18:49
*** jog0 has joined #openstack-meeting18:49
*** Gordonz has joined #openstack-meeting18:49
*** anderstj_ has joined #openstack-meeting18:51
*** joearnol_ has joined #openstack-meeting18:51
*** anderstj has quit IRC18:51
*** novas0x2a|laptop has joined #openstack-meeting18:52
*** joearnold has quit IRC18:52
*** darraghb has quit IRC18:56
*** bcwaldon has joined #openstack-meeting18:57
*** jog0 has quit IRC19:06
*** jakedahn is now known as jakedahn_zz19:07
*** jakedahn_zz is now known as jakedahn19:10
*** hggdh has joined #openstack-meeting19:13
*** jog0 has joined #openstack-meeting19:13
*** patelna__ has joined #openstack-meeting19:32
*** patelna has quit IRC19:35
*** mattray has joined #openstack-meeting19:36
*** jog0 has quit IRC19:42
*** jakedahn is now known as jakedahn_zz19:43
*** danwent has quit IRC19:43
*** danwent has joined #openstack-meeting19:44
*** hggdh has quit IRC19:45
*** littleidea has joined #openstack-meeting19:46
*** patelna__ has quit IRC19:47
*** patelna has joined #openstack-meeting19:48
*** maoy has joined #openstack-meeting19:58
*** n0ano has joined #openstack-meeting19:59
maoyit appears that we are still in keystone-meeting20:01
*** reed has quit IRC20:02
n0anooh great, not that again20:03
openstackMeeting started Thu May 10 20:03:15 2012 UTC.  The chair is n0ano. Information about MeetBot at
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.20:03
n0anono, we're good for orchestration20:03
*** jakedahn_zz is now known as jakedahn20:04
n0anonow if sriram appears we'll have full quorum20:04
maoymy WIP feature branch is at github20:05
maoythat seems like the way to do it for openstack for now.20:05
*** dhellmann has quit IRC20:05
n0anoI think that follows the BKM (Best Known Method), in fact I think you're the first one to do that.20:06
maoyglad to be the lab rat..20:06
*** dhellmann has joined #openstack-meeting20:06
n0anowe like to call it `bleeding edge` :-)20:07
n0anohave you had anyone look at your feature branch yet?20:07
maoyYes. Mark McLoughlin and Jay Pipes20:08
maoyi'm also in contact with some folks from IBM and NTT20:08
n0anoexcellent, any feedback so far?20:09
maoyyes, some inline comments at github.20:09
maoywill have a much better update next week20:09
n0anosounds good20:09
maoyi'm not entirely sure if I should rebase or merge the new update though..20:09
*** markvoelker has quit IRC20:10
n0anoI would think re-basing would be the way to go; is there a problem?20:10
maoyperhaps i should just use a different branch every time..20:10
maoyand rebase20:10
n0anobranches are very cheap in git, I use them extensively20:11
n0anopretty much, when in doubt I create a new branch20:11
*** troytoman-away is now known as troytoman20:13
<maoy> about the blueprint, i'm inclined to update the blueprint in place rather than creating a new one  20:13
<n0ano> works for me, that should actually create a history which is good  20:14
<vishy> I have some comments about orchestration stuff  20:14
<vishy> esp. regarding maoy's proposed code  20:14
<maoy> i was hoping to hear from you vishy..  20:15
<vishy> maoy: should i mention now?  20:16
<n0ano> #topic proposed code  20:16
*** openstack changes topic to "proposed code"  20:16
<vishy> so first the major concern: we are trying to get rid of all db access in nova-compute  20:16
<maoy> yes please.  20:17
<maoy> that should work when the zookeeper backend is in.  20:17
<maoy> without database access, i'm assuming there is a place to write persistent state, such as the health monitor, or report capability  20:18
<vishy> maoy: so there are two other things  20:20
<vishy> a) if compute isn't hitting the db, I don't think we need distributed state management in compute  20:20
<vishy> b) it is possible that distributed state isn't needed at all.  Some people have suggested that there are lock-free approaches which might save us a lot of extra work  20:21
<vishy> the scheduler could be a different story  20:21
<vishy> but for individual vm state management i think an in-memory state machine is probably fine on the compute node  20:22
<vishy> here is the general principle that I'm going to suggest  20:22
<vishy> user requests come into api and they are performed by simply making a call to compute and succeeding or failing  20:23
<vishy> state is propagated back up from compute to api periodically  20:23
<vishy> the api node doesn't need to make decisions about state because it lets the owning node do it  20:23
<vishy> there are a few special cases which need to be considered but this can be solved in a lock-free way as well.  20:24
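[Editor's note: a minimal sketch of the per-instance, in-memory state machine vishy describes above, where the compute node owns each VM's state locally and validates transitions without touching a central database. The state names and the transition table are illustrative, not Nova's actual vm_states.]

```python
# Hypothetical per-instance state machine kept in memory on the compute
# node. Only transitions listed in the table are allowed; anything else
# is rejected locally, with no central database involved.

ALLOWED = {
    "building": {"active", "error"},
    "active": {"rebooting", "stopped", "error"},
    "rebooting": {"active", "error"},
    "stopped": {"active", "deleted", "error"},
}

class InstanceStateMachine:
    def __init__(self, initial="building"):
        self.state = initial

    def transition(self, new_state):
        # reject anything not in the transition table
        if new_state not in ALLOWED.get(self.state, set()):
            raise ValueError("illegal transition %s -> %s"
                             % (self.state, new_state))
        self.state = new_state
        return self.state

sm = InstanceStateMachine()
sm.transition("active")
sm.transition("rebooting")
sm.transition("active")   # reboot completed
```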
<maoy> this should work if the state is local. e.g. the compute node owns the VM  20:24
<maoy> but my concerns are mostly non-local state:  20:24
<vishy> maoy: such as?  20:24
<maoy> a) volume + vm needs to work together, also network  20:25
<maoy> b) vm migration  20:25
<vishy> i think a) makes sense and so there may be a need for that kind of state management at a higher layer  20:25
<vishy> although I'm not totally sure we are doing anything complicated enough there to warrant distributed locking  20:26
<vishy> b) what kind of state is important in this case? and does it need to be managed on multiple nodes?  20:26
<maoy> for b) which node owns the VM? the source or the target?  20:27
<vishy> maoy: the source until the migration is complete  20:29
<vishy> maoy: the two nodes already need to communicate directly to perform the migration so having a higher level lock arbiter seems like a bit of overkill in this case  20:30
<vishy> maoy: but perhaps there is a complicated case where it would be necessary  20:30
<maoy> vishy: there might be tricky crash cases where it's not clear who owns what..  20:33
<vishy> maoy: I think in general i would prefer if we are doing distributed locking that it does not happen in the compute worker  20:33
<vishy> maoy: i want the compute worker to be as dumb as possible and have access to as little as possible  20:33
<maoy> vishy: regardless of how it's implemented, the task abstraction still holds.  20:34
<vishy> maoy: however it probably needs an internal state machine  20:34
<vishy> maoy: to handle some of the transitions required.  20:35
<maoy> vishy: ok. points taken. but i don't think the locking mechanism i have in mind is more complicated than local locks.  20:35
<vishy> maoy: otherwise i like the idea of tracking actions and steps via something like what you proposed. In fact I tried to make a generalized task system for python here
<vishy> maoy: before i discovered that celery does essentially the same thing only better :)  20:36
<maoy> vishy: i need to look into celery. does celery allow you to kill tasks and recycle locks/resources?  20:37
<vishy> maoy: not sure, I never got into it that deeply  20:38
<maoy> vishy: so even within the compute node, the tracking actions and kill tasks functions are still necessary..  20:38
<vishy> maoy: doesn't look like it has it out of the box:
<vishy> maoy: I agree, I just don't want it to have to talk to a centralized db/zookeeper if possible  20:39
<vishy> maoy: and I wonder how much of it is already implemented in the drivers  20:39
<maoy> vishy: i see your point. that's one backend change, right? from a centralized db to an in-memory one..  20:39
<vishy> maoy: as in xen and libvirt already have to handle state management  20:40
<vishy> maoy: so we may get a lot of it for free  20:40
<maoy> vishy: i saw those, i was actually planning to use the state management code as well.  20:40
<vishy> maoy: by just going try: reboot() except libvirtError: rollback()  20:40
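[Editor's note: the try/except pattern vishy sketches, spelled out — rely on the hypervisor's own error reporting and roll back locally on failure rather than tracking the operation in a central store. `LibvirtError` below is a stand-in for the real `libvirt.libvirtError` so the sketch is self-contained; `reboot()` and `rollback()` are hypothetical helpers, not Nova driver code.]

```python
# Hypothetical lean-on-the-hypervisor pattern: attempt the operation,
# let the driver raise, and repair state locally on failure.

class LibvirtError(Exception):
    """Stand-in for the real libvirt.libvirtError."""

def reboot(vm):
    if vm["broken"]:
        raise LibvirtError("reboot failed")
    vm["state"] = "active"

def rollback(vm):
    # undo/repair step; real code would reset the instance to a safe state
    vm["state"] = "error"

def safe_reboot(vm):
    try:
        reboot(vm)
    except LibvirtError:
        rollback(vm)
    return vm["state"]
```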
<vishy> maoy: true, but I wonder if using the db layer is necessary at all.  20:41
<vishy> maoy: you could use in-memory sqlite but that is going to do table locking and nastiness  20:42
<vishy> maoy: so maybe something specifically designed to handle that kind of stuff would be better.  20:42
<maoy> vishy: an in-memory hash table is enough. actually that's how i started.  20:42
<vishy> maoy: That seems like a great place to start, do a simple in-memory one  20:43
<vishy> maoy: we may find that is all we need.  20:43
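[Editor's note: an illustration of the in-memory hash table maoy mentions — just enough to track, list, and drop the tasks running against a given VM, which also covers the "find all running tasks against that VM, kill them if necessary" operation he asks for later. All names here are invented for the sketch.]

```python
# Hypothetical in-memory task tracker for a compute node: a dict keyed
# by instance id, guarded by a lock so greenthreads/threads can share it.
import threading

class TaskTracker:
    def __init__(self):
        self._lock = threading.Lock()
        self._tasks = {}   # instance_id -> {task_id: description}

    def start(self, instance_id, task_id, description):
        with self._lock:
            self._tasks.setdefault(instance_id, {})[task_id] = description

    def finish(self, instance_id, task_id):
        with self._lock:
            self._tasks.get(instance_id, {}).pop(task_id, None)

    def running(self, instance_id):
        # all tasks currently running against one VM (ops insight)
        with self._lock:
            return dict(self._tasks.get(instance_id, {}))

    def kill_all(self, instance_id):
        # drop every task for a VM; real code would also signal them
        with self._lock:
            return self._tasks.pop(instance_id, {})

tracker = TaskTracker()
tracker.start("vm-1", "t1", "reboot")
tracker.start("vm-1", "t2", "attach_volume")
tracker.start("vm-2", "t3", "snapshot")
killed = tracker.kill_all("vm-1")
```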
<maoy> vishy: but I felt that the information is useful for ops to gain insight into the system in general, so the db is not a bad place to keep the log.  20:43
<vishy> maoy: hmm i guess that is a good point.  There is a review in to store running greenlets, have you seen it?  20:44
<maoy> vishy: the thing is, once the task traverses the node boundary, e.g. from compute to network, you lose the context  20:44
<maoy> vishy: not yet.. link plz..  20:44
<vishy> maoy: so this seems like it is solving a very similar problem  20:45
<vishy> maoy: especially if we add subtasks/logging to the idea  20:46
<vishy> maoy: persistence is also a possibility but I feel like we could add that later if needed.  20:46
<maoy> vishy: ok. will take a look. is it local task tracking or cross-node? i can't tell from the title..  20:46
<maoy> vishy: i can't connect the blueprint with the patch title. perhaps i should ping JE and read the code for more details.  20:47
<vishy> maoy: yeah do that  20:47
<vishy> maoy: it is just local  20:48
<vishy> maoy: and it is specific to greenthreads (no further granularity)  20:48
<maoy> vishy: i'd also want a function where the ops can just say: find all running tasks against that VM, kill them if necessary  20:48
<vishy> maoy: yes i think that is where the patch tries to get  20:48
<vishy> maoy: you should probably sync up with him  20:49
<maoy> vishy: then i thought migration might make this tricky so a centralized version is dead simple to get started.  20:49
<maoy> vishy: yeah sure.  20:49
<maoy> vishy: i have some VM+EBS race conditions in my amazon cloud so I'd like to get that right in openstack. :)  20:50
<vishy> maoy: i think we can see how far we get without centralizing.  I agree that we will need it for higher-level orchestration  20:50
<maoy> vishy: but local task tracking is definitely composable with a global/distributed one  20:50
<vishy> maoy: but that could be something that lives above nova / quantum / cinder  20:51
<maoy> vishy: that's indeed what's in my mind but i have to start from somewhere.. so nova..  20:51
<vishy> maoy: also check out this one
<vishy> maoy: it looks like johannes is trying to solve the same problems as you, so you should probably communicate :)  20:52
<maoy> vishy: ok. that means i'm solving the right problems at least. :)  20:53
<maoy> vishy: are there more docs on how to get rid of db?  20:54
<maoy> vishy: at compute.  20:54
<maoy> vishy: I'm afraid we might have to abuse rabbitmq more to extract state from compute nodes.  20:56
<n0ano> compute nodes are already sending state info to the scheduler, can you ride on top of that?  20:57
<vishy> maoy: i don't know if there are docs yet  20:58
<vishy> maoy: but the idea is to just allow computes to report state about their vms  20:58
<vishy> maoy: and all relevant info will be passed in through the queue  20:58
<vishy> maoy: my initial version was going to make the api nodes listen and just throw data back in a periodic task  20:59
<vishy> maoy: and update the state of the instance on the other end  21:00
<vishy> if we keep the user requested state as a separate field, then we don't run into weird timing collisions  21:00
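[Editor's note: a toy illustration of the separate-fields idea vishy mentions — keep the user's requested state apart from the state observed in periodic compute reports, so a late or stale report can never clobber what the user asked for. The class and field names are made up for the sketch, not Nova's schema.]

```python
# Hypothetical instance record: the API writes only requested_state,
# periodic compute reports write only observed_state. Neither side can
# race the other, because they never touch the same field.

class InstanceRecord:
    def __init__(self):
        self.requested_state = None   # written only on user request
        self.observed_state = None    # written only by compute reports

    def user_requests(self, state):
        self.requested_state = state

    def compute_reports(self, state):
        self.observed_state = state

    def converged(self):
        return self.requested_state == self.observed_state

rec = InstanceRecord()
rec.user_requests("stopped")
rec.compute_reports("active")   # a stale report arrives late
# the request is untouched; the system simply hasn't converged yet
```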
<maoy> vishy: i'm not sure i follow this. but it seems like the api nodes, other than translating api calls to compute/network apis, also monitor the task execution status?  21:02
<vishy> maoy: no not task execution status, just vm state  21:02
<vishy> maoy: nova-api is just an easy place to put the receiving end of the call, it could also be a separate worker: nova-instance-state-writer or some such  21:03
<maoy> vishy: got you  21:04
<maoy> vishy: so the vm state change in db now happens in n-cpu, but will be rpc-ed to nova-state-writer who does the db ops  21:05
<vishy> maoy: correct  21:05
<vishy> maoy: and the calls from api -> compute will pass in all the relevant info so it doesn't need to read from the db either  21:05
<vishy> i.e. the entire instance object instead of just an id  21:06
<maoy> vishy: great. that makes sense.  21:06
<maoy> vishy: i will take a closer look at the code in review and see how that fits the task management i have.  21:09
<maoy> vishy: will make the backend pluggable to fit both the local in-memory and distributed case.  21:09
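[Editor's note: a sketch of the pluggable-backend idea maoy proposes — the task-management code talks to an abstract store, with an in-memory backend for the local case and, later, a distributed one such as ZooKeeper behind the same interface. The interface itself is hypothetical.]

```python
# Hypothetical pluggable task store: callers depend only on the abstract
# interface, so swapping the in-memory backend for a distributed one
# (e.g. ZooKeeper znodes) would not change the calling code.
import abc

class TaskStore(abc.ABC):
    @abc.abstractmethod
    def record(self, task_id, state): ...

    @abc.abstractmethod
    def lookup(self, task_id): ...

class InMemoryTaskStore(TaskStore):
    """Local backend: a plain dict, as discussed for the compute node."""
    def __init__(self):
        self._data = {}

    def record(self, task_id, state):
        self._data[task_id] = state

    def lookup(self, task_id):
        return self._data.get(task_id)

# A ZooKeeperTaskStore would implement the same two methods against
# znodes; only the backend object handed to the caller changes.

store = InMemoryTaskStore()
store.record("task-42", "running")
```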
<maoy> vishy: I wish I saw Johannes's patch earlier..  21:11
<vishy> maoy: hard to keep track of this stuff, I know :)  21:11
<maoy> vishy: is there any attempt on utilizing celery by anyone else?  21:11
<vishy> maoy: not that i know of  21:11
<maoy> vishy: ok. so i'll ignore it for now. :)  21:13
<maoy> vishy: where would the compute node health status update go without db?  21:13
<maoy> i know the IBM folks are working on a zookeeper backend for that.  21:14
<vishy> maoy: passed through the queue most likely  21:14
<maoy> is this going to happen in folsom or a later release?  21:15
<vishy> maoy: we are going to try and get all db access out in folsom  21:16
<vishy> maoy: but we will see how it goes  21:16
<maoy> vishy: what about the list of which VMs should be running on the node -- used periodically to compare against libvirt/xenapi?  21:17
<maoy> vishy: does that mean the compute node needs to maintain a local copy?  21:17
<vishy> maoy: I don't think so, I think the periodic task could be initiated by api/external worker  21:17
<vishy> maoy: it could glob the instances directory periodically or something  21:18
<vishy> maoy: but having a separate data store I don't think would be needed  21:18
<vishy> maoy: alternatively it could keep a list in memory, and make a request out to api/scheduler/nova-db-reader or something and get a list when it starts up  21:19
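[Editor's note: a sketch of the reconciliation vishy describes — the compute node seeds an in-memory expected-VM list once at startup (from an external worker, or by globbing the instances directory) and periodically compares it with what the hypervisor actually reports. Both fetch functions below are stand-ins for the real RPC and libvirt/xenapi calls.]

```python
# Hypothetical periodic reconciliation on the compute node: no local
# data store, just a set seeded at startup and a hypervisor query.

def fetch_expected():
    # stand-in for a startup request to api/scheduler/nova-db-reader
    return ["vm-1", "vm-2"]

def hypervisor_running():
    # stand-in for asking libvirt/xenapi what is actually running
    return ["vm-1", "vm-3"]

expected = set(fetch_expected())      # seeded once, held in memory
running = set(hypervisor_running())

unexpected = running - expected       # running but unknown: flag for ops
missing = expected - running          # should be running but are not
```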
<maoy> vishy: ok. sounds like a lot of changes. will this happen gradually in trunk or on a feature branch?  21:20
<vishy> maoy: feature branch i think  21:20
<vishy> we are trying to pull staged changes out of trunk  21:20
<maoy> vishy: ok. will keep an eye on it. thanks!  21:22
<maoy> vishy: i would imagine there are some tricky cases to get the periodic tasks right on n-cpu. but in general i think making n-cpu dumb is the right direction.  21:25
<maoy> n0ano: i think we are done with the discussion.  21:28
<maoy> vishy: thanks so much for jumping in. :)  21:29
<n0ano> sounds good  21:29
<n0ano> is there a resolution that needs tp be documented?  21:30
<vishy> maoy: yw  21:31
<maoy> n0ano: tp?  21:33
<n0ano> tp - sorry, don't know the abbreviation  21:33
<maoy> n0ano: oh i think you mean "needs to be documented". right?  21:39
<maoy> we have the meeting log for everything, right?  21:39
<maoy> not sure about a resolution..  21:39
<n0ano> yep, if you don't have a succinct summary that is sufficient.  21:39
<n0ano> let's go with the full log and we'll talk again next week  21:40
*** openstack changes topic to "Status and Progress (Meeting topic: keystone-meeting)"  21:41
<openstack> Meeting ended Thu May 10 21:41:04 2012 UTC.  Information about MeetBot at . (v 0.1.4)  21:41
<maoy> sounds good.  21:41
<openstack> Minutes (text):
