Is there an AppSheet function to access the string entered in AppSheet's search criteria UX field?

I want to use an AppSheet Action to store the search criteria (the one entered in the native AppSheet search field) in a table row. I want to perform SQL server-side batch processing on the subset of records shown to the user based on their search criteria.

[Screenshot: the native AppSheet search field]

If I mark each visible record with process_this_row=true, all these changes (dozens) need to sync back to the server first, which takes a while. Instead, I’m imagining sending the search criteria to the server as a single new row in a control table, triggering a batch job that uses the search criteria from the AppSheet search field.

Is there a function in AppSheet to get the search criteria string? Or a function that returns a list of ids for the filtered records? Or is there a way to save changes to server in a sub-second (or near sub-second) batch?

Nope. :frowning: You’d have to create your own search form as an alternative.
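A common way to build that alternative is a per-user search table plus a slice. As a rough sketch (the SearchForm table and its Email and SearchText columns are hypothetical names, not anything built in): give each user a row keyed by USEREMAIL() where they type their search text, then filter the data table with a slice row-filter expression like:

```
CONTAINS(
  [Name],
  LOOKUP(USEREMAIL(), "SearchForm", "Email", "SearchText")
)
```

Unlike the native search box, the search text is then ordinary row data, which a server-side process can also read.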

Nope.

I don’t understand this one. :confused:

The last question

Or is there a way to save changes to server in a sub-second (or near sub-second) batch?

is about watching the pending row updates sync one row at a time back to the server. Our use case involves flagging a large collection of rows as reviewed=true using an Action. The user reviews all the rows, then uses the Action to batch-update the rows they’ve just reviewed. This works well within the app; however, the server waits many seconds for the updates to arrive.

For 25 rows, this shows up as 25 in the red dot next to the sync button at the top right. We watch this number gradually decrement over the course of about 15-30 seconds. The app has low latency, but the server lags behind until the updates are complete.

Is it possible to expedite the server updates through the design of our app or a setting in AppSheet?


Ah! Thanks for the explanation!

I can’t speak from experience, but I believe a SQL-based data store isn’t subject to the delay. My impression is that the slow pacing is to avoid rate limits with Google and other cloud storage providers.

But, since this is outside my experience, I’m going to invoke @Aleksi in the hope he can shed some light.

@Steve, did @Aleksi have a chance to look at how changes can be sync-ed back to the database more quickly?

Had he, he would have commented here, I imagine. You could contact support@appsheet.com, too, for a more formal response.

@Stuart_Carmichael Sorry about the delay.

When the user multiselects records and then triggers the action, yes, the updates happen one by one, and as far as I know there is no workaround unless you can do it with a Workflow/Data Change and “Execute an action on a set of rows”. That will do it as one action. Though you’d need to find a way to select those records other than multiselecting from a table view.
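For reference, a “Data: execute an action on a set of rows” action picks its records with a Referenced Rows expression rather than the user’s on-screen selection. A rough sketch of the configuration (the table, column, and action names here are illustrative, not from the original app):

```
Referenced Rows:
  FILTER("Orders", AND([Reviewed] = FALSE, [Assigned To] = USEREMAIL()))
Referenced Action:
  MarkReviewed
```

The FILTER() expression selects the target rows by data values, which is the “other way to select those records” that this pattern requires.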


We do use the “Execute an action on a set of rows” pattern and it makes all the expected updates on the client side, then we watch them gradually sync back to the underlying database (AWS RDS in our case).

My understanding is that client updates back into the cloud are throttled to respect Google Sheet throughput policies. Since we’re not using Google Sheets to back our tables, we can hopefully get away with relaxed throttling.

Our usual workaround for this situation uses a control table (for example, TaskDt) in our AWS RDS database, into which our AppSheet client inserts, using an Action, a single row of parameters (for example, 'FLAG_ALL_RECORDS', 'stuart@foodmesh.ca', '2020-06-03'). A database-side trigger detects the insert, then updates all the records in the other table.

CREATE TRIGGER `TaskDtInsert` BEFORE INSERT
  ON TaskDt ...
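Filled out, that trigger might look something like the following MySQL sketch. The task and user_email columns and the Record target table are hypothetical names standing in for our actual schema:

```sql
DELIMITER $$
CREATE TRIGGER `TaskDtInsert` BEFORE INSERT
  ON TaskDt FOR EACH ROW
BEGIN
  -- When the AppSheet client inserts a single control row, fan the
  -- change out to every matching record in one server-side statement.
  IF NEW.task = 'FLAG_ALL_RECORDS' THEN
    UPDATE Record
       SET reviewed = TRUE
     WHERE assigned_to = NEW.user_email;
  END IF;
END$$
DELIMITER ;
```

The point of the pattern is that only one row needs to sync from the client; the dozens of per-record updates happen inside the database.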

The next time the App syncs, it loads the changes we made behind the scenes.

We’re trying to avoid this pattern for the app we’re currently developing.

Are you trying to avoid the needed sync or something else? I’m not sure…

We have strategies that mitigate it, but it works best for us to stay within the AppSheet flow of client updates synced back to the backing tables. Ultimately, we are looking to have sync happen faster. Because we are working with AWS backing tables, we are hoping to avoid throttling that might apply to Google backing tables.