*** Gen_ has joined #openstack-monasca | 00:01 | |
*** Gen has quit IRC | 00:05 | |
*** ddieterly[away] is now known as ddieterly | 00:07 | |
*** ddieterly is now known as ddieterly[away] | 00:09 | |
*** slogan has quit IRC | 00:14 | |
*** ddieterly[away] is now known as ddieterly | 00:14 | |
*** craigbr has joined #openstack-monasca | 00:39 | |
*** craigbr has quit IRC | 00:42 | |
*** ljxiash has quit IRC | 00:46 | |
*** bobh has joined #openstack-monasca | 00:52 | |
*** bobh has quit IRC | 00:57 | |
*** ybathia has joined #openstack-monasca | 00:59 | |
*** ljxiash has joined #openstack-monasca | 01:06 | |
*** ybathia has quit IRC | 01:08 | |
*** bobh has joined #openstack-monasca | 01:13 | |
*** ddieterly has quit IRC | 01:18 | |
*** ljxiash has quit IRC | 01:20 | |
*** ljxiash has joined #openstack-monasca | 01:23 | |
*** rohit_ has quit IRC | 01:37 | |
*** ddieterly has joined #openstack-monasca | 01:50 | |
*** ddieterly is now known as ddieterly[away] | 01:51 | |
*** ducttape_ has joined #openstack-monasca | 01:59 | |
*** ducttape_ has quit IRC | 02:09 | |
*** bobh has quit IRC | 02:12 | |
*** ddieterly[away] is now known as ddieterly | 02:25 | |
*** ddieterly is now known as ddieterly[away] | 02:26 | |
*** ljxiash has quit IRC | 02:27 | |
*** ljxiash has joined #openstack-monasca | 02:32 | |
*** ljxiash has quit IRC | 02:35 | |
*** ljxiash has joined #openstack-monasca | 02:37 | |
*** bobh has joined #openstack-monasca | 02:46 | |
*** ducttape_ has joined #openstack-monasca | 02:49 | |
*** Gen_ has quit IRC | 02:51 | |
*** kse has quit IRC | 02:56 | |
*** kse has joined #openstack-monasca | 02:57 | |
*** ljxiash has quit IRC | 03:01 | |
*** ljxiash has joined #openstack-monasca | 03:02 | |
*** ljxiash has joined #openstack-monasca | 03:08 | |
*** ljxiash has quit IRC | 03:10 | |
*** ljxiash has joined #openstack-monasca | 03:10 | |
*** ducttape_ has quit IRC | 03:24 | |
*** ddieterly[away] has quit IRC | 03:27 | |
*** bobh has quit IRC | 03:32 | |
*** ljxiash has quit IRC | 04:00 | |
*** ekarlso has quit IRC | 04:11 | |
*** ljxiash has joined #openstack-monasca | 04:12 | |
*** hosanai has quit IRC | 04:13 | |
*** hosanai has joined #openstack-monasca | 04:14 | |
*** ekarlso has joined #openstack-monasca | 04:25 | |
*** ljxiash has quit IRC | 04:35 | |
*** ljxiash has joined #openstack-monasca | 05:40 | |
*** ericksonsantos has quit IRC | 06:13 | |
*** nadya has joined #openstack-monasca | 06:26 | |
*** ljxiash has quit IRC | 06:40 | |
*** ljxiash has joined #openstack-monasca | 06:43 | |
*** nadya has quit IRC | 07:19 | |
*** ljxiash has quit IRC | 07:36 | |
*** ljxiash has joined #openstack-monasca | 07:40 | |
*** ljxiash_ has joined #openstack-monasca | 08:18 | |
*** ljxiash has quit IRC | 08:18 | |
*** ljxiash_ has quit IRC | 08:29 | |
*** ljxiash has joined #openstack-monasca | 08:30 | |
openstackgerrit | Witold Bedyk proposed openstack/monasca-agent: Migrate from MySQLDB to pymysql https://review.openstack.org/302660 | 08:59 |
*** kei_yama has quit IRC | 09:09 | |
*** ljxiash has quit IRC | 09:12 | |
*** ljxiash has joined #openstack-monasca | 09:12 | |
*** kse has quit IRC | 09:30 | |
*** nadya has joined #openstack-monasca | 09:50 | |
*** hosanai has quit IRC | 10:18 | |
*** ljxiash has quit IRC | 10:29 | |
*** nadya has quit IRC | 11:02 | |
*** ddieterly has joined #openstack-monasca | 11:06 | |
*** nadya has joined #openstack-monasca | 11:24 | |
*** ddieterly is now known as ddieterly[away] | 11:25 | |
*** ddieterly[away] is now known as ddieterly | 11:29 | |
*** bobh has joined #openstack-monasca | 11:33 | |
*** ddieterly is now known as ddieterly[away] | 11:38 | |
*** ducttape_ has joined #openstack-monasca | 11:45 | |
*** ddieterly[away] is now known as ddieterly | 12:18 | |
*** ddieterly has quit IRC | 12:18 | |
*** ljxiash has joined #openstack-monasca | 12:22 | |
*** ducttape_ has quit IRC | 12:24 | |
*** bobh has quit IRC | 12:25 | |
*** nadya has quit IRC | 12:26 | |
*** iurygregory has joined #openstack-monasca | 12:37 | |
*** ddieterly has joined #openstack-monasca | 12:46 | |
*** ducttape_ has joined #openstack-monasca | 12:56 | |
*** ducttape_ has quit IRC | 13:01 | |
*** ducttape_ has joined #openstack-monasca | 13:02 | |
*** ducttape_ has quit IRC | 13:02 | |
*** ducttape_ has joined #openstack-monasca | 13:06 | |
*** bobh has joined #openstack-monasca | 13:19 | |
*** rhochmuth has joined #openstack-monasca | 13:27 | |
*** rbak has joined #openstack-monasca | 13:37 | |
*** craigbr has joined #openstack-monasca | 13:39 | |
*** vishwanathj has joined #openstack-monasca | 13:58 | |
*** 14WAATBIX has joined #openstack-monasca | 14:16 | |
openstackgerrit | Bradley Klein proposed openstack/monasca-agent: Add plugin for gathering ovs virtual router statistics https://review.openstack.org/306621 | 14:43 |
*** ddieterly is now known as ddieterly[away] | 14:45 | |
*** ddieterly[away] is now known as ddieterly | 14:46 | |
*** slogan has joined #openstack-monasca | 14:53 | |
*** bklei has joined #openstack-monasca | 15:00 | |
*** bobh has quit IRC | 15:02 | |
*** dschroeder has joined #openstack-monasca | 15:22 | |
rbak | 14WAATBIX: Do you want to talk immediately after the monasca meeting, or is there a time that works better for you? | 15:35 |
14WAATBIX | I can do immediately after, but I don't think jkeen will be in until later | 15:36 |
*** 14WAATBIX has quit IRC | 15:38 | |
*** rbrndt has joined #openstack-monasca | 15:38 | |
rbak | Any idea what time? I can send out a meeting invite for later today. | 15:38 |
rbrndt | I'd guess about 10, 10:30 mountain time | 15:39 |
rbak | How's 11 mountain time work for you then? | 15:40 |
rbrndt | We've got a monasca team meeting at that time | 15:40 |
rbrndt | personally i've got 1-2 pm open | 15:40 |
rbrndt | oops, sorry wrong day | 15:41 |
rbrndt | yeah 11 works fine for me | 15:41 |
rbak | Alright, I'll send out an invite and we'll see who shows up | 15:41 |
rbrndt | sounds good | 15:42 |
rbak | I think we can just talk here, unless you prefer a bridge? | 15:42 |
rbrndt | I can do IRC | 15:42 |
*** bobh has joined #openstack-monasca | 15:43 | |
*** iurygregory has quit IRC | 15:45 | |
*** slogan has quit IRC | 15:49 | |
*** ljxiash has quit IRC | 15:50 | |
*** iurygregory has joined #openstack-monasca | 15:55 | |
*** rhochmuth has left #openstack-monasca | 16:02 | |
*** ddieterly is now known as ddieterly[away] | 16:04 | |
*** ddieterly[away] is now known as ddieterly | 16:11 | |
*** ddieterly is now known as ddieterly[away] | 16:12 | |
*** nadya has joined #openstack-monasca | 16:17 | |
*** ddieterly[away] is now known as ddieterly | 16:33 | |
*** bklei has quit IRC | 16:48 | |
*** ljxiash has joined #openstack-monasca | 16:50 | |
*** ddieterly is now known as ddieterly[away] | 16:52 | |
*** ljxiash has quit IRC | 16:55 | |
*** ddieterly[away] is now known as ddieterly | 16:58 | |
rbak | rbrndt: you there? | 17:01 |
rbrndt | yup | 17:01 |
rbrndt | getting jkeen online | 17:01 |
rbak | thanks | 17:01 |
*** jkeen has joined #openstack-monasca | 17:02 | |
*** mhoppal has joined #openstack-monasca | 17:02 | |
rbrndt | Ok, we all here now? | 17:03 |
jkeen | I'm here | 17:03 |
mhoppal | here as well | 17:03 |
rbak | Awesome. thanks for taking the time to talk about this patch | 17:04 |
rbak | From what I've gathered, the concern is that with this patch, when the pool restarts it loses the data of any running checks. | 17:04 |
rbrndt | So, you had a good way of describing the problem in the weekly meeting, rbak | 17:04 |
rbak | Now I need to remember how I put it earlier. | 17:05 |
rbak | Basically the pool restart is triggered by a check taking too long. | 17:05 |
rbak | In the best case the check eventually returns, and no data is lost. | 17:06 |
rbak | But in the case we've hit repeatedly the stuck check never returns, and so the thread pool hangs forever on the join | 17:06 |
rbak | My patch was intended to address the second case, and it reduces the data loss to a minimum | 17:07 |
rbak | But from your perspective, you're addressing the first case which had no data loss and saying that my patch makes things worse. | 17:07 |
rbak | Does that make sense so far? | 17:08 |
rbrndt | I think we're almost there. | 17:08 |
jkeen | Yes | 17:08 |
rbrndt | The issue I was wondering about is actually in the second case | 17:08 |
mhoppal | make sense to me | 17:08 |
rbrndt | when we do lose data, how much and which data is lost? | 17:08 |
rbrndt | I think we were looking at it, and it sounds like we could lose the whole set of instances for a check, if one of them fails | 17:09 |
jkeen | Given the way the current thread pool works the lost data is going to be indeterminate. It'll kill the pool at some point long past the point where a check got stuck. | 17:09 |
rbak | Currently, the thread pool hangs and takes the entire agent with it, so all data for the entire agent is lost until the agent is manually restarted. | 17:09 |
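The hang described above comes down to a Python limitation: a thread that never returns cannot be killed from outside, so a pool shutdown that joins its workers blocks forever behind a stuck check. A minimal sketch of that situation (illustrative only, not the agent's actual pool code; the check body is invented):

    import threading
    import time

    def stuck_check():
        # Stands in for a plugin check that never returns, e.g. a hung socket read.
        while True:
            time.sleep(1)

    worker = threading.Thread(target=stuck_check, daemon=True)
    worker.start()

    # There is no way to interrupt the thread from the outside; join can only wait.
    worker.join(timeout=3)
    print("still alive:", worker.is_alive())  # True: the check is stuck for good
    # A pool that joins this worker without a timeout blocks forever, which is why
    # the whole agent ends up wedged until someone restarts it.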
jkeen | We've been running Monasca at scale for several months now and we've never seen an agent hang. Do you know what leads to the hang? | 17:10 |
rbak | Not really | 17:10 |
rbak | But we've seen it caused by both the nagios and http-check plugins | 17:10 |
jkeen | Or, rather, we've never seen it hang on an http check. We've seen it hang communicating to the API but think we've fixed that. | 17:10 |
rbak | Yeah, that's a different issue | 17:11 |
jkeen | Ok, I don't think we run nagios so that could explain it. | 17:11 |
rbak | If you don't see things hang, what causes pool restarts for you? | 17:11 |
jkeen | There are a couple of problems with the current implementation (not your fault, it's just how it was written originally) that concern me. | 17:12 |
jkeen | Far as we know we've never seen a pool restart. | 17:12 |
rbak | jkeen: what are the current problems? | 17:13 |
jkeen | My main problem is the way it attempts to get data back from the pool. It gets an instance, checks to see if there is any available data, and then places the instance in the pool. | 17:13 |
jkeen | This applies to the thread pool and the new process pool code. | 17:13 |
jkeen | It looks like it can never get all the data for a given check, almost like it's expecting checks to hang for a time. | 17:14 |
rbak | I'm not sure I follow that bit. | 17:14 |
jkeen | If the checks don't complete within a given time frame, 180 seconds by default, it kills the pool and we lose the data. | 17:14 |
rbak | True | 17:15 |
mhoppal | by that logic, should we never see it hang then, jkeen? | 17:15 |
jkeen | In the check function it runs self._process_results() and then does self.pool.apply_async() | 17:15 |
jkeen | I don't see it run self._process_results and wait for the data anywhere. | 17:16 |
rbak | But that's processing all results that have come in for the pool, not necessarily the instance that it's about to start | 17:16 |
jkeen | So it looks like if you run N checks you're going to get N-m results the first time around and get the remaining unfinished checks on the next collection cycle. | 17:16 |
jkeen | Right, but since it's always checking first, what happens when you reach the last item in the pool? It checks for data before it ever tries to run the job. | 17:17 |
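A rough sketch of the drain-then-submit pattern being discussed (illustrative only, not the monasca-agent source; do_check and the instance list are invented). Because results are drained before each submission, the jobs submitted last in a cycle are only collected on the next cycle:

    import queue
    import random
    import time
    from multiprocessing.dummy import Pool  # thread-backed pool, for illustration

    results = queue.Queue()

    def do_check(instance):
        time.sleep(random.uniform(0.0, 0.2))  # pretend to do some work
        return {"instance": instance, "value": 1}

    def run_check(instance):
        results.put(do_check(instance))

    pool = Pool(4)
    for instance in ["a", "b", "c", "d"]:
        # Drain whatever happens to have finished so far...
        while not results.empty():
            print("collected:", results.get_nowait())
        # ...then submit the next job. Results of the last submissions are never
        # read in this cycle; they sit in the queue until the next collection run.
        pool.apply_async(run_check, (instance,))
    pool.close()
    pool.join()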
rbak | True, but even if you moved that statement afterwards there's no guarantee the check will be done. | 17:18 |
rbak | Also, I think 'last' is misleading here, since the asynchronous nature means they're all running at once. | 17:18 |
*** rhochmuth has joined #openstack-monasca | 17:18 | |
rbak | The loop just sticks them all on the stack to run | 17:18 |
jkeen | Yes, and that's my main problem with how this works. What I'd rather see is that we apply the instances to the pool and then read all the data. There are few enough checks we should be able to make them robust enough to time out and guarantee a return from the pool. | 17:19 |
rbak | That doesn't necessarily work though | 17:20 |
rbak | You have to deal with the results asynchronously as well | 17:20 |
mhoppal | I'm confused how it gets into a state where it hangs with the current implementation, though, since we run clean on each run, which stops and starts the pool if a job has been running for a configured time | 17:20 |
jkeen | rbak, they can't all be running at once because we're still sticking them in the pool one at a time in a higher level loop. They might eventually all be running at once but even then we're not waiting for any data. We return as soon as the result queue is empty. | 17:21 |
rbak | mhoppal: Because the pool stop doesn't work. It waits for all running checks to return before stopping, and that's not necessarily going to happen. | 17:21 |
jkeen | You don't have to deal with the results asynchronously unless you want to. You can use a blocking map with a timeout. That has its own issues but we'll know what we're dropping at that point. | 17:23 |
rbak | But that doesn't work for the nagios checks | 17:23 |
rbak | If nothing else | 17:23 |
rbak | I could have a check that takes 5 minutes to run, and another that runs every minute. If I block on waiting for data that limits everything to the rate of the longest running check. | 17:24 |
jkeen | rbak, why doesn't that work with nagios? I don't have any experience with those checks. | 17:24 |
rbak | I just gave you an example | 17:25 |
jkeen | You're having checks that run well outside the 30 second collection period? | 17:25 |
rbak | Basically the checks could run at different rates | 17:25 |
rbak | And yes, we have checks that only run once an hour, but take several minutes. | 17:25 |
rbak | That's an extreme example though | 17:26 |
rbrndt | hmm | 17:26 |
*** nadya has quit IRC | 17:26 | |
rbak | But I think we're getting off track. I don't really see how this impacts the thread pool restarts | 17:27 |
rbak | My patch boils down to this. The current implementation assumes checks always return. This isn't always the case. So we have to handle the case where it's not true. | 17:29 |
rbak | It's impossible to tell the difference between a long running check and one that will never return, so at some point we have to just cut everything off and restart. | 17:30 |
rbrndt | I think it's something of a different use case to handle checks that take that long to return | 17:30 |
rbrndt | jkeen, mhoppal, and roland are conferring for a moment | 17:30 |
rbak | Worth noting, this only loses data on checks that are still running. The results queue would be unaffected and that data would be collected later. | 17:31 |
rbak | rbrndt: thanks for letting me know | 17:31 |
jkeen | rbak, the problem here is we do want that behaviour for a given collection cycle. I was planning to put a patch up that made the collection of these parallelized pieces more reliable, but it sounds like it would break your use case entirely. | 17:34 |
jkeen | Is it only the nagios checks that are the long running ones or are there http checks that take a while to return? | 17:34 |
rbak | As far as I know it's just the nagios checks | 17:35 |
rbak | Everything else returns fairly quickly | 17:35 |
jkeen | If you want to make a new superclass for the nagios checks that implements this new behaviour, I'd be able to make our http and tcp checks more reliable without affecting your long running checks. | 17:35 |
jkeen | Is that something you can do? | 17:36 |
rbak | Probably not until after the summit, but sure | 17:36 |
rbak | But this still won't address the pool restarts | 17:36 |
jkeen | If we do this though we'll have separate pools for the nagios and the other parallelized checks. I can fix the problem I see for the other checks but that's not a viable solution for the nagios checks. | 17:37 |
jkeen | There are other options there though. | 17:38 |
rbak | I don't follow | 17:38 |
jkeen | I'm suggesting that you make a new superclass specifically for nagios checks that implements the process pool you currently have up so that the nagios checks can do long running operations independent of the collection interval. | 17:39 |
rbak | But the process pool has nothing to do with long running checks | 17:39 |
rbak | They're separate issues | 17:40 |
rbak | Let's ignore nagios checks for the moment, and say an http check hangs, which we've seen happen (not for a while, but we're not sure if that means the bug is fixed) | 17:41 |
jkeen | I don't see that. The current thread pool, and the process pool patch, result in unreliable collection. I want to make it reliable but that means that you can't have a check that takes longer than the collection cycle. | 17:41 |
rbak | If an http check hangs, how does your new collection mechanism fix it? | 17:41 |
*** ybathia has joined #openstack-monasca | 17:42 | |
jkeen | For http checks my current plan is to replace the _process_results function with a process.map call that will time out if the checks take too long, along with modifying the http checks so that they're reliable. | 17:42 |
jkeen | Since they're running as subprocesses in the map we can interrupt them and force a return easily enough. | 17:42 |
rbak | But I still don't think you can interrupt a single process, even with a map. | 17:43 |
rbak | And I don't see a timeout option. | 17:44 |
jkeen | Having the subprocess interrupt itself and return isn't a problem. I was doing that in another part of Monasca before I found a cleaner way for that particular case. | 17:45 |
openstackgerrit | Michael Hoppal proposed openstack/monasca-api: Add periodic interval field to notification method https://review.openstack.org/308502 | 17:45 |
jkeen | If there isn't currently a timeout option we'd add one. | 17:46 |
rbak | You mentioned process.map, are you talking about the multiprocessing module? | 17:46 |
rbak | Or are you still trying to use the thread pool? | 17:47 |
jkeen | Yes, we'd use the multiprocessing module and get rid of the thread pool library. It just looks like a problem waiting to happen. | 17:47 |
rbak | At least we agree there | 17:48 |
rbak | I'm not sure how you would add a timeout option to the multiprocessing module though | 17:48 |
jkeen | There are several ways to time out a map operation. You can use the results object it returns to time out, but in that case you get no data. You can use a callback function and time out the parent process. You can use an imap operation and time out there if a check takes too long, but you'll still get a partial set of data back. | 17:50 |
jkeen | What I'd look at first is using a signal handler in the subprocess to interrupt it and force the return of a failure. That will let us get a result back so we can identify the failing check rather than having a subset of the http checks go undetermined. | 17:50 |
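A hedged sketch of the approach outlined above, assuming a Unix host: each check runs in a multiprocessing pool worker, arms SIGALRM so it can interrupt itself and return an identifiable failure, and the whole map is bounded by a timeout on the AsyncResult. The check body and the timeout values are invented for illustration:

    import multiprocessing
    import signal
    import time

    CHECK_TIMEOUT = 2  # per-check budget in seconds (assumed value)

    def _alarm(signum, frame):
        raise TimeoutError("check exceeded its time budget")

    def run_check(instance):
        # The worker interrupts itself, so a stuck check still produces a result
        # and we can tell exactly which check failed.
        signal.signal(signal.SIGALRM, _alarm)
        signal.alarm(CHECK_TIMEOUT)
        try:
            time.sleep(10 if instance == "slow" else 0.1)  # "slow" simulates a hang
            return (instance, "ok")
        except TimeoutError:
            return (instance, "timed out")
        finally:
            signal.alarm(0)

    if __name__ == "__main__":
        instances = ["a", "b", "slow", "c"]
        with multiprocessing.Pool(processes=4) as pool:
            # map_async().get(timeout=...) also bounds the collection cycle as a whole;
            # imap with a per-item next(timeout=...) is the partial-results variant.
            results = pool.map_async(run_check, instances).get(timeout=30)
        print(results)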
rbak | Alright, you seem to have some idea of how that would work. | 17:51 |
rbak | I've never tried it. | 17:52 |
rbak | Ok, so that's fine for everything except the nagios module. | 17:52 |
rbak | Any timeline on this patch you're proposing? | 17:52 |
jkeen | Well, like I mentioned earlier, I don't think this idea works for the nagios checks. That's why I'd like to see a new parent class for nagios that contains your current patch set. | 17:55 |
*** mhoppal_ has joined #openstack-monasca | 17:55 | |
*** craigbr has quit IRC | 17:55 | |
jkeen | Then when I can find some time, hopefully soon, I can do the proposed patch to the http and tcp checks. | 17:55 |
*** ddieterly is now known as ddieterly[away] | 17:57 | |
*** mhoppal has quit IRC | 17:57 | |
rbak | jkeen: Is there any reason not to just merge this patch and work from there? It would fix our immediate issue, and since you never hit restarts you shouldn't have any problems with data loss. It already rips out the thread pool and reformats the checks for use in a process pool. | 17:59 |
rbak | I'm happy to separate out nagios in the long run, but this is a pressing problem for us. | 18:00 |
rbrndt | except we did see data loss in our testing | 18:00 |
rbak | I thought you said you never saw restarts? | 18:00 |
rbak | What was causing the data loss? | 18:00 |
rbrndt | Didn't find the root cause as of yet | 18:01 |
*** ybathia has quit IRC | 18:01 | |
jkeen | rbak, we never saw data loss with the thread pool but we have seen data loss with the multiprocessing modifications. | 18:02 |
rbak | Out of curiosity, how do you know when there's data loss? | 18:02 |
rbrndt | In my test, I found an error in the collector log and saw missing metrics | 18:02 |
jkeen | For us it was on an http check since I don't think we use any nagios checks at the moment. | 18:03 |
rbrndt | yeah, it was http | 18:03 |
rbak | Looks like that bug's still around then | 18:03 |
rbak | Alright, I'll apply this to just nagios checks, but let me know if you're not going to get around to your patch soon. | 18:04 |
jkeen | Ok, thanks. | 18:05 |
*** ybathia has joined #openstack-monasca | 18:20 | |
*** mhoppal_ has quit IRC | 18:23 | |
*** craigbr has joined #openstack-monasca | 18:31 | |
*** vishwanathj has quit IRC | 18:50 | |
*** vishwanathj has joined #openstack-monasca | 18:50 | |
*** ljxiash has joined #openstack-monasca | 18:52 | |
*** ddieterly[away] is now known as ddieterly | 18:54 | |
*** ljxiash has quit IRC | 18:56 | |
*** ducttape_ has quit IRC | 19:20 | |
*** ducttape_ has joined #openstack-monasca | 19:27 | |
*** ybathia has quit IRC | 19:40 | |
openstackgerrit | Ryan Brandt proposed openstack/monasca-api: Fix metric-list limits https://review.openstack.org/307963 | 19:43 |
*** ducttape_ has quit IRC | 19:47 | |
*** ducttape_ has joined #openstack-monasca | 19:56 | |
*** ddieterly is now known as ddieterly[away] | 20:01 | |
*** ddieterly[away] is now known as ddieterly | 20:04 | |
*** rbak has quit IRC | 20:04 | |
*** ybathia has joined #openstack-monasca | 20:31 | |
*** rbak has joined #openstack-monasca | 20:50 | |
*** ljxiash has joined #openstack-monasca | 20:53 | |
*** ljxiash has quit IRC | 20:58 | |
*** ybathia has quit IRC | 20:59 | |
*** ybathia has joined #openstack-monasca | 21:10 | |
*** ybathia has quit IRC | 21:12 | |
*** ybathia has joined #openstack-monasca | 21:12 | |
openstackgerrit | Michael Hoppal proposed openstack/monasca-agent: Add upper-constraints to our tox file https://review.openstack.org/308591 | 21:13 |
openstackgerrit | David Schroeder proposed openstack/monasca-agent: Refresh of Agent plugin documentation https://review.openstack.org/308592 | 21:17 |
openstackgerrit | Michael Hoppal proposed openstack/monasca-agent: Add upper-constraints to our tox file https://review.openstack.org/308591 | 21:37 |
*** slogan has joined #openstack-monasca | 21:37 | |
*** ddieterly is now known as ddieterly[away] | 21:37 | |
*** ddieterly[away] is now known as ddieterly | 21:38 | |
slogan | rhochmuth: FYI https://github.com/openstack/broadview-ui | 21:44 |
slogan | I think that's it for the projects | 21:44 |
slogan | I can rest (a bit) | 21:45 |
slogan | :-) | 21:45 |
rhochmuth | cool | 21:47 |
rhochmuth | i still haven't installed into devstack | 21:47 |
slogan | rbak: I documented Grafana, not sure if I shared that - until (and if) you decide to externalize some docs, maybe it will be useful to someone: https://github.com/openstack/broadview-collector/blob/master/doc/microburst_simulation.md | 21:47 |
rhochmuth | I'm basically just trying to cram all this work in | 21:47 |
slogan | devstack? | 21:47 |
rhochmuth | yeah | 21:48 |
slogan | my experience (other than the issues my patch addressed) is that monasca and devstack work fine | 21:48 |
slogan | I do do vagrant - it's a bit too resource heavy | 21:48 |
slogan | what remains to be done? | 21:48 |
rbak | slogan: I looked at those the other day and they looked good. I entirely forgot about putting out docs myself, but I'll get to that. | 21:49 |
slogan | s/do do/don't do/ | 21:49 |
slogan | rbak: yup - I noted there the issue with keystone and the workaround | 21:50 |
slogan | assuming that is still a problem | 21:50 |
rbak | I'm not really sure what that problem is | 21:50 |
rbak | We've been running this in production for a while now with no problems | 21:51 |
slogan | nod | 21:51 |
slogan | I never dug into it, the workaround was reasonable | 21:51 |
rbak | Do you have any more information on that? | 21:51 |
rbak | On the problem with keystone auth that is. | 21:51 |
slogan | nothing, no | 21:51 |
rbak | Try it again when you get the chance. There's been some changes so maybe it's fixed. | 21:52 |
rbak | If not just let me know what sort of error you're seeing and I'll take a look | 21:52 |
slogan | I'll try today too, unless I get diverted | 21:52 |
slogan | the error was basically, I think, that it was unable to test the connection successfully | 21:53 |
openstackgerrit | Michael Hoppal proposed openstack/monasca-agent: Change tox file https://review.openstack.org/308591 | 21:53 |
slogan | so I generated a token, then it worked | 21:53 |
slogan | I should be able to give it another try, I'll do it now in fact | 21:53 |
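For reference, one way to generate a token manually for that kind of datasource test is via keystoneauth1 (a sketch only; the endpoint and credentials are made-up devstack-style values, and this is not necessarily the exact workaround used here):

    from keystoneauth1 import session
    from keystoneauth1.identity import v3

    # Assumed devstack-style values; replace with your own Keystone endpoint and user.
    auth = v3.Password(
        auth_url="http://192.168.10.5/identity/v3",
        username="mini-mon",
        password="password",
        project_name="mini-mon",
        user_domain_id="default",
        project_domain_id="default",
    )
    sess = session.Session(auth=auth)
    print(sess.get_token())  # paste the token into the datasource configuration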
slogan | also, while I have your ears, I patched the devstack plugin in a very simple way to get around some issues like mkdir failing because a directory already exists, and adduser failing because a user like mon-api already exists. I am one of probably many users who do ./stack.sh, ./unstack.sh without a clean in the middle - I'm not allowed to contribute code, so what is the best way to get this to someone who can? | 21:56 |
openstackgerrit | Michael Hoppal proposed openstack/monasca-agent: Change tox file https://review.openstack.org/308591 | 21:57 |
openstackgerrit | Michael Hoppal proposed openstack/monasca-notification: Change tox file https://review.openstack.org/308644 | 21:57 |
openstackgerrit | Michael Hoppal proposed openstack/monasca-persister: Change tox file https://review.openstack.org/308645 | 21:57 |
openstackgerrit | Michael Hoppal proposed openstack/python-monascaclient: Change tox file https://review.openstack.org/308646 | 21:57 |
openstackgerrit | Michael Hoppal proposed openstack/monasca-api: Change tox file https://review.openstack.org/308647 | 21:57 |
*** ddieterly is now known as ddieterly[away] | 22:02 | |
*** ybathia has quit IRC | 22:08 | |
*** bobh has quit IRC | 22:21 | |
*** ddieterly[away] is now known as ddieterly | 22:25 | |
*** ybathia has joined #openstack-monasca | 22:40 | |
*** ducttape_ has quit IRC | 22:40 | |
*** ducttape_ has joined #openstack-monasca | 22:41 | |
rhochmuth | slogan: If you want to send me the fixes, I can try to get them in | 22:41 |
*** Gen has joined #openstack-monasca | 22:43 | |
*** ddieterly has quit IRC | 22:43 | |
*** rhochmuth has quit IRC | 22:45 | |
*** jkeen has quit IRC | 22:49 | |
openstackgerrit | Michael Hoppal proposed openstack/monasca-agent: Change tox file https://review.openstack.org/308591 | 22:51 |
*** krotscheck is now known as krotscheck_dcm | 22:57 | |
openstackgerrit | Michael Hoppal proposed openstack/monasca-agent: Change tox file https://review.openstack.org/308591 | 23:02 |
*** rbrndt has quit IRC | 23:07 | |
*** ddieterly has joined #openstack-monasca | 23:15 | |
*** dschroeder has quit IRC | 23:22 | |
*** ducttape_ has quit IRC | 23:22 | |
*** ddieterly is now known as ddieterly[away] | 23:23 | |
*** bobh has joined #openstack-monasca | 23:32 | |
*** kse has joined #openstack-monasca | 23:32 | |
*** kei_yama has joined #openstack-monasca | 23:36 | |
*** bobh has quit IRC | 23:56 |