task analysis seems stuck #89
Hello @valeriocos, I don't see any reason for it to be stuck in the plain text extractor. Do you have any other stack trace? Also, even if the metric is frozen, the UI should still show elements like the projects and the tasks. Maybe it has something to do with MongoDB, as the metric needs to store information there, and the UI retrieves the project data from MongoDB. So, maybe MongoDB is down?
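(For reference, a quick way to verify the MongoDB hypothesis; the container name `mongo` is an assumption, check `docker ps` for the actual name in this deployment:)

```bash
# Is the MongoDB container up at all? (name "mongo" assumed)
docker ps --filter "name=mongo"

# Any crashes or restarts in its recent logs?
docker logs --tail 50 mongo

# Ping the server from inside the container (legacy mongo shell)
docker exec mongo mongo --eval "db.adminCommand('ping')"
```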
Maybe, I'll check later, thanks
I re-did the process and it doesn't stop at the plain text extractor (I will edit the issue title); however, the process doesn't finish (at least according to the task status). These are the steps I did:
The last [output elided]. The container logs are here (except kibiter): [attachment elided]. The project perceval appears as [screenshot elided]. If I click on the button [screenshot elided]. @creat89 could you try to replicate the steps above, or do you have a workaround to suggest? Thanks. When trying to stop the docker-compose, I got this:
Then I stopped the containers like this:
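(The exact command is not shown; as a purely hypothetical example, a common force-stop sequence is:)

```bash
# Stop all running containers, then remove them all
docker stop $(docker ps -q)
docker rm $(docker ps -aq)
```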
I did [command elided]. The worker's heartbeat is updated too [screenshot elided]. Nevertheless, the [output elided]. These are the logs of the oss-db:
When trying to stop the docker-compose, I got these errors:
Note that Kibiter is currently not used, since the dashboard importer and prosoul are deactivated.
I performed the same steps listed at #89 (comment) on https://github.com/chaoss/grimoirelab, selecting only docker metrics. The analysis stopped at 23%, and when trying to list the projects, I ended up with the same screenshots as at #89 (comment).
I performed the same steps listed at #89 (comment) on https://gitlab.com/rdgawas/docker-jmeter, selecting only docker metrics. The idea is to check whether the error may be related to the GitHub fetching processes. In this case too, the analysis is blocked.
As commented by @creat89 at #89 (comment), it is possible that [rest elided]
Hello @valeriocos, sorry for not replying as fast as expected, but I'm on holiday. I'll try to check that remotely. However, one of the stack traces seems to be an issue with a metric made by @blueoly, I think. Checking the stack traces, I see that the HTTP request comes from either the oss_app container or the api_server; I don't know why we would have two different containers with the same type of problem. Still, I'm guessing it has something to do with Mongo, but I don't know whether the api_server makes requests to Mongo or not. For the moment, I don't have an idea of how to work around the issues.
Thank you for answering @creat89, sorry, I didn't know you were on holiday, enjoy :) @md2manoppello @tdegueul @MarcioMateus @mhow2 any idea?
Hello @valeriocos, sorry for the late reply. I just returned from holidays. Regarding the error while stopping the containers, I have already seen similar errors (I don't know if with these containers or others). Usually, when I see messages like that, I do a [command elided]. Regarding the stuck analysis task, I think it has happened to us a few times. Sometimes it was due to reaching the request limit of the GitHub API, but it should restart again in less than one hour (unless a back-off algorithm is implemented that grew too much...). I remember that in our case we let it run during the night and the task eventually restarted.
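(For reference, a minimal sketch for checking whether the GitHub API limit is the cause; `$GITHUB_TOKEN` stands for whatever token the analysis is configured with:)

```bash
# Shows remaining requests and the reset timestamp for this token
curl -s -H "Authorization: token $GITHUB_TOKEN" https://api.github.com/rate_limit
```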
Thank you for answering @MarcioMateus :) Then I did a docker-compose down and got the error reported at #86.
Finally I did a [command elided]
@MarcioMateus can you tell me the specs of the machine where you are running your CROSSMINER instance (RAM, disk storage, etc.)? I was thinking that all these problems may be related to a limit on my machine (16 GB, Intel(R) Core(TM) i7-6500U CPU @ 2.50GHz), because I'm not able to get a task analysed there.
@valeriocos, my machine is similar to yours. Actually, I usually set the max RAM to 10-12 GB on my Docker engine. Again, as I use Mac OS, some behaviours may be different. But thinking more about the issue, I rarely run all the containers on my machine, as it puts a huge strain on my PC and I don't need all of them for our use case. When I need to perform more complete tests (with all the containers), I use our server. You are probably correct: those resources may not be enough to run the whole CROSSMINER platform.
Thank you for the info @MarcioMateus. I'm now trying to analyse the repo https://github.com/chaoss/grimoirelab with the minimal configuration below (I also blocked the population of oss-db with the dumps). Let's see tomorrow :)
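(The configuration itself is not shown above; as a hypothetical sketch, a minimal run could start only a subset of services, with the exact service names depending on the scava-deployment docker-compose.yml:)

```bash
# Start only the core services needed for an analysis run
# (oss-db, oss-app and api-server are the names used in this thread)
docker-compose up -d oss-db oss-app api-server
```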
The analysis reached 69% but then it stopped, waiting for the token to refresh:
I'll restart the docker-compose and see.
Hi @valeriocos, the Docker engine for Mac OS already comes with an interface to configure these values. CLI commands to configure these values on Linux may also exist, but I don't know. I think that the command you identified only defines limits for a specific container.
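(For reference, a per-container limit can indeed be set without touching the engine-wide settings; a minimal sketch, assuming the container is named oss-app:)

```bash
# Cap memory (and swap) for a single running container
docker update --memory 4g --memory-swap 4g oss-app
```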
Thank you @MarcioMateus for the info. I was able to get the repo https://github.com/chaoss/grimoirelab analyzed (I still have to check the data in Kibana). The steps I followed are below (I guess I'll add something to the scava-deployment readme):
Now I'm repeating the analysis with https://github.com/elastic/elasticsearch to see if it works. Yesterday the execution probably froze because I added a new analysis while the one for grimoirelab was still ongoing. Thus, this morning I deleted the oss-db and oss-app containers and did a [command elided]. I would keep this issue open until I get 3-4 repos analyzed.
The solution seems to work; it can be summarized with:
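(The original summary is not shown; reconstructed here as a rough sketch from the steps described earlier in this thread, with the service names used above:)

```bash
# Stop and remove the stuck metric platform and its database,
# then recreate them so the analysis starts from a clean state
docker-compose stop oss-app oss-db
docker-compose rm -f oss-app oss-db
docker-compose up -d
# finally, delete (or stop/start) the stuck task in the UI and launch it again
```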
Hi @valeriocos. That is a good summary. Let me just add some comments. I think I never noticed problems with having multiple analysis tasks in a queue, but I accept that it may consume more resources. When a task is stuck I perform similar steps; however, I usually don't delete the task. I just stop the execution of the task and then start it again, and after some seconds the worker starts analysing it.
Thank you @MarcioMateus for the info. I'm going to submit a PR to update the scava-deployment readme (also taking your feedback into account).
I guess it would be interesting and helpful to detect which metrics use a lot of RAM and document them, in order to set the minimum amount of RAM needed if all the metrics are to be used.
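(A simple way to gather such numbers, for example, is to snapshot per-container memory usage while a task runs:)

```bash
# One-shot table of container name vs. current memory usage
docker stats --no-stream --format "table {{.Name}}\t{{.MemUsage}}"
```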
I'm trying to analyse the repo https://github.com/chaoss/grimoirelab-perceval. I added a new GitHub token and defined a task which included some metrics to analyse the issue tracker. Since this morning the oss-app seems stuck on
org.eclipse.scava.metricprovider.trans.plaintextprocessing.PlainTextProcessingTransMetricProvider
I attach the log below:
Any workarounds to suggest, @Danny2097 @creat89? Thanks
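(For anyone reproducing this: assuming the service is called oss-app as in this thread, the log can be captured with:)

```bash
# Dump the oss-app service logs to a file for attaching to the issue
docker-compose logs --no-color oss-app > oss-app.log
```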