I need to pull performance analyzer logs for a month's worth of data

We have inconsistent performance with an app, and I need to pull the performance analysis log for a month to try to see a trend in the data...

(I tried live chat for my issues, but either it is answered by bots, or the people there couldn't care less about AppSheet users.)

The issues are the following:

  1. The performance log analyzer limits us to 1,000 data points, and that is only about 8 hours of data. I need about a month.
  2. The output format is JSON that is almost impossible to analyze with a spreadsheet app (it has objects inside objects inside objects [...]!)

Steve
Platinum 4

Is there a question?

Hello Steve,
Thanks for pointing it out. I was really mad when I wrote the post, because support is getting really hard with AppSheet and we rely a lot on the service (we were AppSheet Partners when that was a thing). I will try again without the associated emotions.

 

So, the questions:

  1. How can I pull a month's worth of performance data at once for my app? (The data is there; I just can't access it all at once.)
  2. How can I pull the data in a human-usable format (CSV, spreadsheet of any kind)?
    We managed to parse the JSON somehow, but the process is far from optimal and requires a lot of manual transformation of the data.

To the best of my knowledge, what you see is all that's available--I'm not aware of any other means to pull larger logs, or to get them in any other format. Admittedly, though, I've never tried either, and wouldn't know where to start.

@Marc_Dillon @MultiTech @Grant_Stead Do any of you have any insight to share?

Thanks for the mention Steve. Sure, I can add a few points.

1. What plan type are you on? An Enterprise plan will allow you to do a little bit more with the logs, like filter them by operation type. Although it still only shows 1k logs per search. Perhaps with an appropriate filter to see just what you need to see, that limitation won't be a very big deal. Like maybe 1k logs would be a few days instead of just 8 hours.

2. JSON is a pretty standard format. I don't understand why you would be complaining about that. If you need help parsing that into some other format that is better for you, or extracting specific bits of the data, I can probably help and I'm available for hire.

3. I'm not really sure what you're hoping to achieve by analyzing the performance data. App performance is mostly affected by data size in conjunction with too many expensive virtual columns. You may have just built your app poorly, and analyzing the performance data isn't going to help at all. Maybe try to search these forums first for some threads about avoiding virtual columns and improving performance?

Thank you both

  1. We have an Enterprise plan, and 1,000 data points, filtered down to only the relevant data I need (app syncs), gives me 7.5 to 8.5 hours of data.

  2. I am well aware that JSON is common, but the way these logs are built nests multiple levels of data within the same objects.
    It is, IMO, a really inefficient way to present that kind of data and to find relations between the data points... a CSV would be both lighter and easier to parse and read.
    Does Google offer tools to "unfold" those JSON reports? (Our company is mainly on O365; we were using AppSheet before Google purchased it.) We are able to parse the JSON, but it takes time and it is painful, especially when I know for a fact that this data is stored in a DB on the backend and should be easier to display as a table than as nested objects. (A sketch of one way to flatten it follows after this list.)

  3. We are well aware of the performance improvements that can be made to an AppSheet app. We avoid virtual columns as much as possible, and the DB is on an SQL server because spreadsheets were completely out of the question for that kind of project.
    The app normally syncs in under 10 seconds, and that is fine. But we see many syncs taking up to 45-60 seconds and can't find obvious relations between the events.
    Our customer asked us to find trends in the data to pinpoint the issue (time of day, user, day of the week, which table is causing the slow-down, etc.); the second sketch below shows the kind of grouping we mean.
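
On point 2, in case it helps anyone reading later: there is no built-in "unfold" tool that I know of, but a few lines of Python with pandas can flatten the nested objects into a spreadsheet-friendly CSV. A minimal sketch, assuming the export is a JSON array of log entries (the file name is illustrative, not an actual AppSheet artifact):

```python
import json
import pandas as pd

# Load one Performance Analyzer export (file name is illustrative).
with open("performance_log.json", encoding="utf-8") as f:
    records = json.load(f)

# json_normalize flattens nested objects into dotted column names,
# e.g. {"Details": {"TableName": "Orders"}} becomes "Details.TableName".
df = pd.json_normalize(records)

# Write a flat CSV that opens cleanly in Excel or Google Sheets.
df.to_csv("performance_log.csv", index=False)
```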
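And on point 3, a sketch of the kind of grouping we mean once the logs are flat. The column names here ("Start", "AppUser", "Duration") are assumptions; rename them to match whatever your actual export contains:

```python
import pandas as pd

# Column names are assumptions -- adjust to your flattened export.
df = pd.read_csv("performance_log.csv", parse_dates=["Start"])
df["hour"] = df["Start"].dt.hour
df["weekday"] = df["Start"].dt.day_name()

# Sync-time statistics by hour of day, weekday, and user.
for key in ["hour", "weekday", "AppUser"]:
    print(df.groupby(key)["Duration"].agg(["count", "mean", "max"]))

# List the slow outliers (over 30 s) for individual inspection.
print(df[df["Duration"] > 30].sort_values("Duration", ascending=False))
```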

I am having a hard time keeping emotions out of this, because we are long-term partners with AppSheet (since year one, if I am not mistaken) and we used to have contacts there we could call directly for support. Now that Google has purchased it, they are dismantling what AppSheet used to be, firing people, and they have stopped helping us as they used to. Thanks for helping us in here, guys!


@Normand_Nadon wrote:

I am having a hard time keeping emotions out of this, because we are long-term partners with AppSheet (since year one, if I am not mistaken) and we used to have contacts there we could call directly for support. Now that Google has purchased it, they are dismantling what AppSheet used to be, firing people, and they have stopped helping us as they used to.


I feel you, brother. 😞

Hi Steve, is it possible to download audit logs for year to date?

In short, no.

Troubleshooting things like this is... challenging.

Really, to make this work - and this isn't easy on YOU (the developer) - you need to really be there in the performance logs WHEN the event is happening.  Immediately when someone complains about something, that's when you've got your window to look.

Then it's up to you to store certain data for later analysis, etc. etc.

----------------------------

There is no easy way to troubleshoot speed issues, especially if you've already searched the forums and followed all the advice there.

I had a meeting with some people a couple of weeks ago to help them find the source of their slow-performing app. In the hour we had together, we went down the basic list of suspects - but they'd already solved those issues, having found something about them in the community. By the end of the hour, we were no closer to a solution.

If you've followed all the best practices in the community and your app is still acting slow - it's a bigger issue than just "a single virtual column that's taking forever to calculate" (or something similar). 

When you're at this point, where you've done everything you can find to make things efficient, but your app is STILL running slow, the solution is a systemic rebuild of your data structure.


Which no one wants to do... but that's the solution; you've got to fundamentally rethink how you're doing things.

--------------------------------------------------------------------------

Not what you want to hear, I know... but the truth is the truth; no getting around it.

Chances are though... if in general your app is performing well, and only at certain times you're having trouble... that's probably a transient issue (internet congestion on that person's local neighborhood node, or something like that) that will solve itself.

What I would look for:

  • Is it happening to the SAME person?
  • Is it the same building? Location?
  • Is it a specific role or dept?

Basically... is there anything in common between the events?

Most of the time there isn't, and it was a transient issue with the internet or something. (Could be, if they're on a PC, that the PC is doing stuff in the background (like a disk check) that's eating resources that you're not aware of.) 

Point being... typically these sort of random slow events are not connected in cause.

Man, I posted a long reply and the page crashed! I have to re-type it all 😫

I am putting the kids to bed and I will reply later tonight!

That is exactly the kind of information I am after... My issue is that 7 or 8 hours of data is not enough to see a trend.
What I want to map is:

  • Who does it happen to
  • When does it happen (to see if the server was doing something else at that moment)
  • How many users are online at that moment
  • [...]

Point being... typically these sort of random slow events are not connected in cause.

Over an 8-hour window, of course it is impossible to draw a conclusion... Statistical analysis needs a lot more data to reveal a trend. That is why I need more than the 1,000 data points.

I had to resort to the least optimal solution...
We are pulling data in 7-8 hour blocks for multiple days in the past and stitching the exports together (a rough sketch is below)... It is a very long process, but the job needs to get done.
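
For anyone stuck doing the same, a minimal sketch of the stitching step, assuming each 7-8 hour export was saved as a separate JSON file (the file naming pattern is illustrative):

```python
import glob
import json
import pandas as pd

# Flatten every exported block and concatenate into one table.
frames = []
for path in sorted(glob.glob("perf_block_*.json")):
    with open(path, encoding="utf-8") as f:
        frames.append(pd.json_normalize(json.load(f)))

month = pd.concat(frames, ignore_index=True)

# Overlapping export windows can duplicate rows; drop exact duplicates
# (this assumes all flattened columns hold scalar values).
month = month.drop_duplicates()
month.to_csv("performance_month.csv", index=False)
```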

I really wish there was a way to access the performance analysis data through the API or some other way... It is there, just not easily accessible!

If someone finds a solution, don't hesitate to post it, even in a year or two from now... It could be useful! 
