Manual Publishing
You are the data pipeline
Manual publishing is the process of uploading data from your computer onto the Open Data Portal. This approach is popular if you are working with data from a spreadsheet (Excel or .csv) or have a static map file (Shapefile, GeoJSON) that you do not plan to change.
On the positive side, this is the quickest way to get data onto the open data portal. On the negative side, you will have to repeat this process every time your dataset is updated.
If your dataset needs to be updated monthly, weekly, daily, or hourly, this is not a good option (see data pipeline for alternatives).
Our Open Data Portal vendor ('Tyler Tech') has an explainer describing all the steps here. Log into the Open Data Portal, click the plus sign in the upper right, then follow the instructions.
Please watch this video series to learn more about manually publishing data.
We require every dataset to contain two columns: data_as_of and data_loaded_at. Though they sound similar, they are distinct and both important.
data_as_of: Timestamp when the record (row) was last updated in the source system. Said another way, this is how fresh this row of data is.
data_loaded_at: Timestamp when the record (row) was last updated here (in the data portal). For manual uploads, a current datetime stamp can be added using a column transformation in Socrata: to_floating_timestamp(source_created_at(), 'US/Pacific')
When manually uploading data, you may need to create a new column for the "data_loaded_at" field. You can find instructions for adding a new column here, or reach out to support@datasf.org for help.
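If you would rather prepare the file on your computer before uploading, the following is a minimal sketch of adding both required columns to a .csv using Python and pandas. The file names, and the assumption that every row shares a single refresh time, are placeholders for illustration rather than DataSF requirements; adapt them to your data, or simply use the Socrata column transformation shown above instead.

```python
# Minimal sketch (not an official DataSF tool): add the two required
# timestamp columns to a CSV before manually uploading it to the
# Open Data Portal. "sample_data.csv" is a placeholder file name.
import pandas as pd

df = pd.read_csv("sample_data.csv")

# Current Pacific time, formatted without a UTC offset so it suits a
# floating (timezone-naive) timestamp column.
now_pacific = pd.Timestamp.now(tz="US/Pacific").strftime("%Y-%m-%dT%H:%M:%S")

# data_as_of: when each row was last updated in the source system.
# Here we assume the whole extract was refreshed at once; replace this
# with a real per-row value if your source provides one.
df["data_as_of"] = now_pacific

# data_loaded_at: when the rows are loaded into the data portal,
# i.e. roughly the time of this manual upload.
df["data_loaded_at"] = now_pacific

df.to_csv("sample_data_with_timestamps.csv", index=False)
```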
Please see the metadata section for a full overview of metadata standards.