Statistical Algorithms Importer: FAQ
This page collects the F.A.Q. of the Statistical Algorithms Importer (SAI); here are the most common mistakes we have found.
In some cases, an algorithm worked in RStudio but did not work via SAI.
This kind of issue is usually related to the production of the output files:
- The file was produced in a subfolder, but it was declared to be in the root folder. E.g. the file output.zip was produced in the ./data folder by the process, but in SAI the variable referring to the output was declared as
output<-"output.zip"
- That is, with no ./data indicated in the file name. The declaration should include the subfolder, as in the sketch below.
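A minimal sketch of the correct pattern, assuming hypothetical file names (result.csv, output.zip): the variable declared as the output in SAI contains the same path, including the ./data subfolder, at which the script actually writes the file.
dir.create("./data", showWarnings = FALSE)                     # make sure the subfolder exists
output <- "./data/output.zip"                                  # declared path includes ./data
write.csv(data.frame(result = 1:3), "result.csv", row.names = FALSE)
zip(zipfile = output, files = "result.csv")                    # the file is produced exactly at the declared path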
- A forced switch of the working folder was done inside the code, which misled the service about the location of the produced file. E.g.:
output<-"output.zip" setwd("./data") save(output)
- Switching the working folder inside the script should generally be avoided; the path can be written into the file name instead, as in the sketch below.
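A minimal sketch of the same step without switching the working folder, again with hypothetical file names: the output is written directly at the path declared in SAI, so no setwd() call is needed.
output <- "output.zip"                                         # path declared in SAI, used as-is
write.csv(data.frame(result = 1:3), "result.csv", row.names = FALSE)
zip(zipfile = output, files = "result.csv")                    # produced in the current working folder, no setwd()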
- A process tried to overwrite a file that had already been produced on the processing machine but had been corrupted by a machine update; the corrupted file conflicted with the newly generated ones.
- In general, a script that is being transformed into a web service should generate output files with new names. Generating output files with new names prevents errors when several concurrent requests, managed by the same machine, would otherwise create files with the same names.
- For example, instead of declaring
zip_namefile <- "data_frame_result_query.zip"
- A timestamp should be added to the generated file name:
zip_namefile_random <- paste("data_frame_result_query_",Sys.time(),".zip",sep="")
zip_namefile <- zip_namefile_random
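Note that the textual form of Sys.time() contains spaces and colons; a minimal sketch of an alternative, with illustrative names, uses a compact timestamp format and the process id to keep the name unique and free of such characters.
timestamp <- format(Sys.time(), "%Y%m%d_%H%M%S")               # compact timestamp, no spaces or colons
zip_namefile <- paste("data_frame_result_query_", timestamp, "_", Sys.getpid(), ".zip", sep = "")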