Avoid updating existing rows while importing a CSV file

Dear Friends,
I have a scenario where I would like to import CSV data into a given table. Only new rows should be added; any existing row that matches a row in the CSV should not be updated at all.
I would appreciate any ideas on how to handle this scenario.
Thanks in advance!
Regards,

Solved
1 ACCEPTED SOLUTION

The best approach to any similar situation is to avoid the problem at the source. 

You want to eliminate the already-existing rows up front if at all possible.  Otherwise, if you are just blindly creating CSV dumps to import into AppSheet, you could end up with files that contain more existing rows than new rows.  That wouldn't be a good place to be performance-wise, since CSV imports into AppSheet do have a time limit.

You haven't mentioned what system(s)/app(s) you are dealing with, but at the point where you create the CSV, if you have any way to recognize that a row already exists in AppSheet, use that to avoid adding it to the CSV file.

If you don't have a way to identify existing AppSheet rows, it may be worthwhile to add one, depending on the system(s)/app(s).  Since you want to avoid overwriting AppSheet rows, that implies AppSheet is the system of record, i.e. the main system.  There could be other benefits to the external system knowing what already exists in AppSheet, again depending on the system(s)/app(s) you are dealing with.
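If the AppSheet table happens to be backed by a Google Sheet and the raw export lands in Drive as a CSV, the pre-filtering could be done with a small Apps Script step before the import. The following is only a rough sketch under those assumptions; the sheet name, file ID, and key column position are placeholders you would swap for your own setup.

// Rough sketch (Google Apps Script), assuming the AppSheet table is backed by a
// Google Sheet and the raw export sits in Drive as a CSV file.
// The spreadsheet/file IDs, sheet name, and key column are placeholders.
function filterCsvAgainstExistingRows() {
  const KEY_COLUMN = 0; // position of the key/identifier column in both sources (assumption)

  // Collect the keys that already exist in the AppSheet-backed sheet.
  const sheet = SpreadsheetApp.openById('SPREADSHEET_ID').getSheetByName('Orders'); // hypothetical names
  const existingKeys = new Set(
    sheet.getDataRange().getValues().slice(1).map(row => String(row[KEY_COLUMN]))
  );

  // Parse the raw CSV export and keep only rows whose key is not already present.
  const rawCsv = DriveApp.getFileById('RAW_EXPORT_FILE_ID').getBlob().getDataAsString();
  const rows = Utilities.parseCsv(rawCsv);
  const header = rows.shift();
  const newRows = rows.filter(row => !existingKeys.has(String(row[KEY_COLUMN])));

  // Write a trimmed CSV containing only the genuinely new rows for the AppSheet import.
  // Note: this naive join does not re-quote fields that contain commas.
  const filteredCsv = [header, ...newRows].map(r => r.join(',')).join('\n');
  DriveApp.createFile('filtered-import.csv', filteredCsv, MimeType.CSV);
}

The same idea applies regardless of tooling: collect the identifiers AppSheet already has, and drop matching rows before the file ever reaches the import.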

If you can't eliminate the already-existing rows up front, then there are two options:

1) The approach suggested by @Marc_Dillon above.

2) Allow ALL the rows to be added as NEW rows and then run a process to remove the duplicates (see the sketch below). For this to work, make sure the CSV import file does NOT contain key values, or else updates will occur instead of adds. This approach assumes, of course, that you have a way to identify the duplicates OTHER than by row key.
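If you go with option 2 and the table is backed by a Google Sheet, the cleanup pass could look roughly like the Apps Script sketch below. It assumes duplicates can be recognized by some non-key column (here a hypothetical "Invoice Number" column) and keeps the first occurrence of each value; the spreadsheet ID and names are placeholders.

// Rough sketch (Google Apps Script) of the post-import cleanup for option 2,
// assuming the table is backed by a Google Sheet and duplicates can be spotted
// by a non-key column. Sheet and column names are placeholders.
function removeDuplicateImportedRows() {
  const sheet = SpreadsheetApp.openById('SPREADSHEET_ID').getSheetByName('Orders'); // placeholders
  const data = sheet.getDataRange().getValues();
  const dupColumn = data[0].indexOf('Invoice Number'); // hypothetical non-key identifier

  // First pass (top-down): mark every row whose identifier was already seen above it,
  // so the original row is kept and the freshly imported duplicate is removed.
  const seen = new Set();
  const rowsToDelete = [];
  for (let r = 1; r < data.length; r++) {
    const value = String(data[r][dupColumn]);
    if (seen.has(value)) {
      rowsToDelete.push(r + 1); // sheet rows are 1-based
    } else {
      seen.add(value);
    }
  }

  // Second pass (bottom-up) so earlier deletions don't shift the remaining row indices.
  rowsToDelete.reverse().forEach(rowNumber => sheet.deleteRow(rowNumber));
}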


3 REPLIES

You could load the CSV into a separate table that is solely for importing, then have a bot trigger on each new record that adds it to the original table if it doesn't already exist. This is probably not a suitable setup if you're talking about a large number of records; I'd probably write an Apps Script in that case, something like the sketch below.
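For the larger-volume case, an Apps Script version of that staging-table idea might look roughly like this. It assumes the staging tab and the main tab live in the same Google Sheet workbook and that the key sits in the first column; the spreadsheet ID and sheet names are placeholders.

// Rough sketch (Google Apps Script) of the staging-table idea for larger imports,
// assuming both tables are tabs in the same Google Sheet and the key sits in
// column A. The spreadsheet ID, sheet names, and key position are placeholders.
function moveNewRowsFromStaging() {
  const ss = SpreadsheetApp.openById('SPREADSHEET_ID');  // placeholder
  const staging = ss.getSheetByName('Import_Staging');   // hypothetical staging tab
  const main = ss.getSheetByName('Orders');              // hypothetical main tab
  const KEY_COLUMN = 0;

  // Keys that already exist in the main table.
  const existingKeys = new Set(
    main.getDataRange().getValues().slice(1).map(row => String(row[KEY_COLUMN]))
  );

  // Append only the staged rows whose key is not already present in the main table.
  const stagedRows = staging.getDataRange().getValues().slice(1);
  const newRows = stagedRows.filter(row => !existingKeys.has(String(row[KEY_COLUMN])));
  if (newRows.length > 0) {
    main.getRange(main.getLastRow() + 1, 1, newRows.length, newRows[0].length)
        .setValues(newRows);
  }

  // Clear the staging tab (keeping the header) ready for the next import.
  if (staging.getLastRow() > 1) {
    staging.deleteRows(2, staging.getLastRow() - 1);
  }
}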

Any other better options, @WillowMobileSys @Steve @Suvrutt_Gurjar?

