API Workflow: Add Rows to Another Table

image

@tsuji_koichi

:thinking:

1 Like

Is there something additional I’m missing here? I have the API enabled, and the access key is enabled.

This is the URL in the workflow

https://api.appsheet.com/api/v2/apps/fac398d4-b87e-4032-96c4-7d13a9d4531a/tables/Order%20Details/Action
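
As a reference point, here is a minimal Python sketch of the request body the AppSheet API v2 expects for an Add action, following the documented `Action` / `Properties` / `Rows` shape. The helper name and sample column values are hypothetical; the URL is the one from this thread.

```python
import json

def build_add_payload(rows, locale="en-US"):
    # Hypothetical helper: wraps rows in the documented Add-action envelope.
    return {
        "Action": "Add",
        "Properties": {"Locale": locale},
        "Rows": rows,  # each row is a dict of column name -> value
    }

payload = build_add_payload([{"Order ID": "ORD-001", "Qty Requested": 5}])
body = json.dumps(payload)

# Sending it requires the `requests` package and your real access key, which
# goes in the ApplicationAccessKey header:
# requests.post(
#     "https://api.appsheet.com/api/v2/apps/fac398d4-b87e-4032-96c4-7d13a9d4531a/tables/Order%20Details/Action",
#     headers={"ApplicationAccessKey": "<your-key>",
#              "Content-Type": "application/json"},
#     data=body,
# )
```

If the request reaches the API but nothing happens, comparing your workflow's generated body against this envelope is a quick sanity check.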

All the configuration you made looks fine to me.

I hesitate to recommend this, as it will add more work for you, but it’s what I did in the past when I faced a similar issue.
Just copy the JSON body.
Then delete the workflow and create it from scratch.

I remember it went through at that time; it’s a funny story, though.

2 Likes

This seems to be the fix for more than just workflows sometimes.

  • Views, actions, workflows… anything that has a “digital element” inside the AppSheet editor - the little container thing we can expand/collapse.
    • It’s like, sometimes these get corrupted and need to be deleted (and recreated) in order to fix things.

(This is rare though)

2 Likes

Thanks, still no luck with it unfortunately :slightly_frowning_face:

1 Like

Try this:

Remove the [Mech/Elec] column from the body - see if that forward slash is causing the problem.

Ultimately it looks like everything else is fine… which leaves either:

  1. something’s wrong with the body payload; or
  2. it’s a bug.

You may also need to remove the dashes from [Sub-Area].

2 Likes

Hi @MultiTech_Visions ,

Your recommendations/suggestions are always very insightful. May I ask the following?

So is my understanding correct that if a SELECT() has to run on a large data set with conditions such as ISNOTBLANK([Date/Time Requested]), it’s better to first create a slice and then run any further SELECT()s on that slice?

Also, if an app requires two SELECT()s to run on a table, the first with conditions A, B, C and the second with A, D, F, is it better to create a slice with condition A and then run the two SELECT()s with the remaining, non-shared conditions?

2 Likes

Absolutely! But not just for large data-sets - this is how everyone should be building their apps.

When making the original AppSheet videos we missed this

When we were making the original AppSheet videos 5 years ago, we completely missed covering this topic.

To be fair to myself, I had no idea this was something we should do. It wasn’t until I started working with clients to create complex, database-style apps that I realized how much of a performance hit manually constructed SELECT() formulas cause. Praveen had mentioned something like this over and over, but what he was talking about never really made sense - until I was forced to deal with poorly performing apps.

In a traditional database schema, you’ll create “views” - the SQL world’s equivalent of Slices - that hold these subsets of data. They become extremely helpful when you’re implementing complex formulas that need to run over large amounts of your data.

Learn more here:
https://www.essentialsql.com/what-is-a-relational-database-view/
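
To make the views-as-slices analogy concrete, here is a small sketch using Python’s built-in sqlite3 module. The table and column names are made up for the demo; the point is that the view holds the shared filter so later queries scan a pre-filtered subset, just like a slice.

```python
import sqlite3

# In-memory database with a toy orders table (names are illustrative).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, status TEXT, qty INTEGER)")
conn.executemany(
    "INSERT INTO orders VALUES (?, ?, ?)",
    [(1, "open", 5), (2, "closed", 3), (3, "open", 7)],
)

# The view holds the universal condition (status = 'open'), just like a
# slice holds the shared part of your SELECT() criteria.
conn.execute(
    "CREATE VIEW open_orders AS SELECT * FROM orders WHERE status = 'open'"
)

# Further queries run against the pre-filtered subset instead of the table.
rows = conn.execute("SELECT id FROM open_orders WHERE qty > 6").fetchall()
print(rows)  # [(3,)]
```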


The ability to create efficient data subsets is a skill that’s missing from most AppSheet builders’ toolbox.

The point is to take the universal conditions from your select, and hold that sub-set in a slice.

  • This changes when those calculations are done.
    • With a SELECT(), the criteria are evaluated once for every row, for each record being evaluated;
      • Because your SELECT() re-runs all of its criteria for every record it’s considering.
    • When migrated to a Slice, the criteria are evaluated once for every row during updates.

This becomes especially helpful when combined with REF_ROWS()

  • Instead of some SELECT() running on each record, you create a subset (slice) and point your REF_ROWS() at that.
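
A toy Python model of the evaluation-count difference described above (the names and numbers are illustrative, not AppSheet internals): per-record SELECT()s re-scan every row for every record, while a slice evaluates its criteria once per row and the subset is then reused.

```python
# 100 rows, half of them "active" (stand-in for a slice condition).
rows = [{"id": i, "active": i % 2 == 0} for i in range(100)]

evals = {"select": 0, "slice": 0}

def criteria(row, counter_key):
    # Count how many times the filter condition is actually evaluated.
    evals[counter_key] += 1
    return row["active"]

# SELECT()-style: each of the 100 records re-scans all 100 rows.
for record in rows:
    related = [r for r in rows if criteria(r, "select")]

# Slice-style: the criteria runs once per row during an update...
slice_rows = [r for r in rows if criteria(r, "slice")]
# ...and each record's REF_ROWS()-style lookup reuses the cached subset.
for record in rows:
    related = slice_rows

print(evals)  # {'select': 10000, 'slice': 100}
```

Same result for each record, but 100x fewer condition evaluations in this toy case.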

:unicorn::unicorn: & :rainbow::rainbow:

4 Likes

(A little more info would help clarify this one…) but in this instance I would actually create two slices:

Slice A

  • Conditions A, B, C

Slice B

  • Conditions A, D, F

Then run any of your SELECT() or REF_ROWS() over those instead of the main table.
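
The two-slice layout above can be sketched like this in Python; the condition flags are stand-ins for whatever A through F actually are in the app.

```python
# Toy rows with boolean stand-ins for conditions A, B, C, D, F.
rows = [
    {"a": r % 2 == 0, "b": r % 3 == 0, "c": True, "d": r > 5, "f": r < 9}
    for r in range(10)
]

# Slice A: conditions A, B, C.
slice_a = [r for r in rows if r["a"] and r["b"] and r["c"]]
# Slice B: conditions A, D, F.
slice_b = [r for r in rows if r["a"] and r["d"] and r["f"]]

# Any further SELECT()- or REF_ROWS()-style lookups then scan these
# smaller subsets instead of the whole table.
print(len(slice_a), len(slice_b))  # 2 2
```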

2 Likes

Thank you very much @MultiTech_Visions for a detailed reply and the informative video that made the concept crystal clear.

So if I may sum up, the trick is to use the complex expressions to filter rows in the slice and then use REF_ROWS() on those slices. Excellent tip.

I was incidentally testing some other expressions on a large dataset a couple of days ago. I will test with the slice + REF_ROWS() tip you shared and report back with any observations or queries I may have.

4 Likes

I think the tip shared by you will be applicable in this case as well. Thank you.

2 Likes

Thank you very much @MultiTech_Visions. @Sarah_Keown, hopefully this will solve some of our issues.

1 Like

@MultiTech_Visions, slices are evaluated on the client side, so adding slices will add to sync times. In your experience, is this increase in sync time still significantly less than the cost of using SELECT() expressions?

2 Likes

The net result is a reduction in sync times when migrating from a bunch of independent SELECT()s to slices.

So if you were to put things on a scale 1 - 10 (10 being bad)

  • SELECT()s
    • I would put at least above a 5; 6.5 is a good number (but they can quickly rise into the double digits. haha)
  • Slices (which largely depends on the formula you’re using, I’m talking about simple equalities or IN()s, with some AND() and OR() mixed in - no SELECT()s)
    • I would put at a 3

Again, it largely depends on the formulas you’re using.

  • What you really have to take into consideration is how many records the SELECT() is running on.
    • This plays a huge role in how long it takes to run as well.

Welcome to the never-ending balancing act

4 Likes

Hi all, just an update on this issue as I haven’t responded in some time.

I eventually got the API workflow to work correctly, adding a bulk number of rows from one table to another and updating the status.

The reason behind this upgrade was due to the poor sync speed/performance when applying the ‘Add rows to another table’ action.

We had approx. 50 syncs taking 30 minutes when the action was applied to the process. Roughly 30-35 seconds per sync I believe.

Now, with the API, it’s sadly not improved as much as we hoped, with the app taking roughly 45 seconds per sync. I am still trying to work on this, but an issue arose today while I was applying the same action.

Using the Add rows to another table action, I was doing a very straight forward copy of 5 rows.

These 5 individual syncs took 1 full minute to process. That doesn’t seem like much, and it is certainly an improvement compared to the previous action, but the app will be processing far more than 5 rows - there was a recent example of up to 60 rows being copied.

By that standard, it would take about 15 minutes of syncing.
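
A quick back-of-the-envelope check of that extrapolation, using only the numbers from this post (5 rows in roughly 1 minute of syncing):

```python
# 5 rows took ~60 seconds of syncing in total.
seconds_per_row = 60 / 5           # ~12 seconds per row
estimated = 60 * seconds_per_row   # the 60-row example mentioned above
print(estimated / 60)              # 12.0 minutes
```

That lands at roughly 12 minutes, the same ballpark as the ~15-minute estimate once per-sync overhead is factored in.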

I think the action works very well and is an extremely useful tool for our app, but I can’t understand why it is taking so long, especially when applied in the most basic circumstance.

Any further explanation or understanding of this would be appreciated.

How long does adding just one row take? I’ve never had my add-rows actions take more than a couple of seconds each, let alone 30+ seconds. Do you have lots of columns or complex initial-value formulas? From the API it only seems like 13 or so columns, which should take next to no time at all.

The Order details table has 61 columns.

The action itself seems straightforward to me as I’m just taking a copy of the data; there is no app formula applied, and no complicated LOOKUP() or SELECT() functions are included. See the screenshot of the action below. The most elaborate part is the qty requested/qty ordered column values (it contains a subtraction formula).

The screenshot is split so you can see the data closer.


image

For one record to be duplicated it is taking about 8 seconds according to the audit log?

The API action, on the other hand, is slightly more complex - see the action below.

Even with the complexities, it still seems like a great deal of syncing time? Especially if you’re saying you haven’t seen times like the ones we have reported.