I recently had a user manage to accrue 444 pending syncs on an app that had recently had a data schema change.
In an attempt to save this user from re-entering nearly 12 hours of work, I moved the app into "Recovery Mode". According to the documentation, this should ignore sync errors for the user and send all pending data in a Recovery.txt file to the app owner's file storage.
Recovery Mode did not allow the user's app to sync, so we went the route of "Reset Changes", which also creates a recovery file. However, in this case, it created 444 recovery files!! Each one contains the data for a single sync operation.
The files are in JSON, which can be converted more or less easily to CSV / tabular format, or moved to the SQL database via Apps Script or something. But what I'm stuck on is how to process 444 individual files.
Anyone have any creative ideas to collate and parse the files? Maybe a Python boilerplate I can tinker with?
@Bellave_Jayaram @MultiTech_Visions @Grant_Stead @Jonathon @Derek @QREW_Ben @CJ_QREW
For the second half of the fix, I wrote a couple of JavaScript functions to parse the JSON data in the collated file and extract only the row data for each table that had sync data. I ran these functions in the Chrome browser console (since you can execute JS there) and then copied the console log output into json2table.com to get it into tabular format. Then I pasted the results into a spreadsheet to clean for entry into the DB.
Interesting note: It turns out there were only 200 files (200 rows) in the Recovery File, not 444. I wonder if AppSheet caps the Recovery Data process at 200 sync items?
@praveen are you aware of any limit on the recovery file creation?
I suppose I could have written a few more lines of JS to just generate the HTML table, or even just run the whole bit in Apps Script and used JDBC to go right into the DB… Next time.
Here's the code for anyone who has the same issue.
function getTables(data) {
  // Collect every tableName in the recovery data, then dedupe via a Set.
  var tableNames = [];
  for (var i = 0; i < data.length; i++) {
    tableNames.push(data[i].tableName);
  }
  var distinctTableNames = [...new Set(tableNames)];
  console.log(distinctTableNames);
  return distinctTableNames;
}

function extractRows(data) {
  // Group the raw row objects by table name.
  var tables = getTables(data);
  var numTables = tables.length;
  var betterData = [];
  for (var j = 0; j < numTables; j++) {
    var thisTable = tables[j];
    console.log("looping through this table's rows: " + thisTable);
    var filterData = data.filter(entry => entry.tableName === thisTable);
    betterData.push([]);
    console.log(`${thisTable} has ${filterData.length} rows`);
    for (var i = 0; i < filterData.length; i++) {
      betterData[j].push(filterData[i].row);
    }
    console.log(JSON.stringify(betterData[j]));
  }
  console.log(`Looped through data and extracted rows for ${numTables} tables: ${tables}`);
  return betterData;
}
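Since the fix above bounces the console output through json2table.com, here is a hedged sketch of the "few more lines of JS" that would skip that step: it turns an array of extracted row objects straight into CSV text. The function name is my own and this is not part of the original fix; it assumes every row shares the first row's columns.

```javascript
// Convert an array of row objects (as produced by extractRows above)
// into CSV text. Column names come from the first row; values with
// commas, quotes, or newlines are escaped per RFC 4180.
function rowsToCsv(rows) {
  if (rows.length === 0) return "";
  var headers = Object.keys(rows[0]);
  function esc(v) {
    var s = String(v == null ? "" : v);
    return /[",\n]/.test(s) ? '"' + s.replace(/"/g, '""') + '"' : s;
  }
  var lines = [headers.map(esc).join(",")];
  rows.forEach(function (row) {
    lines.push(headers.map(function (h) { return esc(row[h]); }).join(","));
  });
  return lines.join("\n");
}

// Example with field names from the sample recovery file:
console.log(rowsToCsv([{ ID: "MGM474", FloorLevel: "3" }]));
// Prints:
// ID,FloorLevel
// MGM474,3
```

From there, the CSV can be pasted into a spreadsheet or fed to a DB import tool directly.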
If this is an isolated incident, one quick way of handling it would be to run a VBA script to merge all .csv files in a folder into one workbook with many sheets.
From there it's pretty easy to bash all of those sheets together into a single master sheet by using INDIRECT() and ROW() formulas to pull the cells from each individual sheet.
Janky, but fast and effective.
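A minimal sketch of that INDIRECT()/ROW() trick, assuming the merged sheets are named Sheet1, Sheet2, and so on, with the value of interest in cell A2 of each:

```
=INDIRECT("Sheet" & ROW() & "!A2")
```

Placed in row 1 of the master sheet and filled down, row N pulls SheetN!A2.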
Can you give an example of the data structure in these files?
Creating a quick script, or some spreadsheet finagling would probably get you what you need.
@Marc_Dillon Here's an example (with some client data removed)
{
  "tableName": "VenueSpace",
  "row": {
    "_RowNumber": "1167",
    "ID": "MGM474",
    "CompanyID": "UPINPX6B",
    "VenueSpaceTypeID": "3F0D3F00",
    "Designation": "Meeting Room",
    "RoomName": "Room 301",
    "VenueMapLocation": "",
    "FloorLevel": "3",
    "UnitOfMeasurement": "Feet",
    "DimensionLength": "38.67'",
    "DimensionWidth": "28.83'",
    "CeilingHeight": "11'-14'",
    "Chandelier": "",
    "RiggingHighBeam": "",
    "RiggingLowBeam": "",
    "Soffit": "",
    "TotalSqFt": "1115",
    "BuiltInAV": "N",
    "AVNotes": "",
    "BuiltInFurniture": "",
    "BuiltInFurnitureNotes": "",
    "SpaceFloorplan": "",
    "LocationInFloorplan": "",
    "Notes": "",
    "CreationDateTime": "",
    "ModifiedDateTime": "03/26/2020 18:25:33",
    "Related Venue Space Subs": "",
    "Related Venue Space Capacities": "91693 , 91694 , 91695 , 91696 , 91697 , 91698 , 91699 , 91700 , 91701",
    "Computed Dimensions": "38.67' L X 28.83' W ",
    "BuiltInAV_calc": "No",
    "Related Venue Links": "",
    "Related Venue Images": "",
    "Related Venue Infos": ""
  },
  "appVersion": "1.000677",
  "userSettings": {
    "_RowNumber": "0",
    "_EMAIL": "",
    "_NAME": "",
    "_LOCATION": "",
    "Options Heading": "",
    "Role": "Super",
    "Option 2": "",
    "Country Option": "",
    "Language Option": "",
    "Option 5": "",
    "Option 6": "",
    "Option 7": "",
    "Option 8": "",
    "Option 9": "",
    "_THISUSER": "onlyvalue"
  },
  "mechanism": "Form",
  "viewName": "Venue Space Form",
  "type": "UpdateTableRow",
  "actionType": "UpdateTableRow",
  "failureCount": 0,
  "requestId": 57856484,
  "timestamp": "2020-03-27T01:25:47.79Z",
  "timeStamp": "2020-03-27T01:25:47.79Z"
}
Are these to be synced to a single table? Are the values to be synced to the same columns in every JSON file (in other words, does each JSON file have the same number and named columns to be synced)?
I'm assuming yes.
424 are from one table. Roughly 20 are from another table.
To Jonathan's point, that would be pretty effective, but the files are all in JSON format, not yet CSV. There are easy tools to parse a JSON file into CSV, but I'd still have to collate the files.
I'm mostly stuck on the txt file collation.
I got the first part figured out!
I joined them into one file with a Command Prompt command.
Now to parse the 13,000-line JSON file
Thanks for the suggestions, everyone!
Hahaha!!
Kinda seems like you have just established the best practices my friend.
This is lame… Like it would be cool if we had some AppSheet developer tools to be able to handle this recovery…
Recovery used to be epic. You'd turn on recovery mode and the system would spit the records into the data source - it was up to you (the developer) to be there to catch and fix the records as they came in.
Then it turned into a nightmare. So much so that when a client and I faced trying to recover a couple hundred records (not even 400, so kudos to @Stefan_Quartemont), after spending an hour trying to find a smooth way to recover them, we abandoned it and had people re-enter things.
There needs to be a better way.
@Grant_Stead, @MultiTech_Visions - I finished a prototype Node.js app that collates the recovery files, scans their structures, builds temporary table schemas in a DB, generates insert statements for each row in each table, and sends them to those temp tables.
I think this could become a web service: zipped folder of recovery data in >> CSV data out. Then you could load that into your data source however you like.
And apparently there is a "new" action that allows the user to import via CSV… maybe the whole recovery process can be kicked to the user to handle…
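A hedged sketch of the insert-generation step described above (my own illustration, not the prototype's actual code): derive the column list from a recovery row object and emit a MySQL-style INSERT. A real implementation should use parameterized queries rather than naive string quoting.

```javascript
// Build an INSERT statement from one recovery-file row object.
// Table and column names come straight from the recovery data; values
// are single-quoted with embedded quotes doubled.
function rowToInsert(tableName, row) {
  const cols = Object.keys(row);
  const vals = cols.map((c) => "'" + String(row[c]).replace(/'/g, "''") + "'");
  return (
    "INSERT INTO `" + tableName + "` (`" + cols.join("`, `") +
    "`) VALUES (" + vals.join(", ") + ");"
  );
}

console.log(rowToInsert("VenueSpace", { ID: "MGM474", FloorLevel: "3" }));
// Prints:
// INSERT INTO `VenueSpace` (`ID`, `FloorLevel`) VALUES ('MGM474', '3');
```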
CSV upload into the App Data Sources would be AWESOME. And make this process a little easier perhaps.
Hi Stefan, thank you for creating this tool. I tried it this weekend but for some reason I can't get the final step to work.
I've successfully uploaded the files, but the "Process" button doesn't become grey and clickable as per step no 7 in your instructions.
Your advice will be much appreciated.