Manual Publishing

You are the data pipeline



What is manual publishing?

Manual publishing is the process of uploading data from your computer onto the Open Data Portal. This approach is popular if you are working with data from a spreadsheet (Excel or .csv) or have a static map file (Shapefile, GeoJSON) that you do not plan to change.

On the positive side, this is the quickest way to get data onto the Open Data Portal. On the negative side, you will have to repeat this process every time your dataset is updated.

If your dataset needs to be updated monthly, weekly, daily, or hourly, this is not a good option (see the data pipeline section for alternatives).

How to manually upload data

Our Open Data Portal vendor ('Tyler Tech') has an explainer describing all the steps here. Log into the open data portal, hit the plus sign in the upper right, then follow the instructions.

Please watch this video series to learn more about manually publishing data.
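
If you would rather script a one-time upload instead of clicking through the interface, the sketch below shows one way to do it with Socrata's open source socrata-py client. This is an illustration under our own assumptions, not an official DataSF workflow; the dataset name, file path, and credential environment variables are placeholders to replace with your own.

  import os

  from socrata import Socrata
  from socrata.authorization import Authorization

  # Authenticate against the SF Open Data Portal. The environment
  # variable names here are placeholders; use your own credentials.
  auth = Authorization(
      'data.sfgov.org',
      os.environ['SOCRATA_USERNAME'],
      os.environ['SOCRATA_PASSWORD'],
  )

  # Create a draft dataset from a local CSV, then apply the draft,
  # which publishes it. Name, description, and file path are examples.
  with open('my_dataset.csv', 'rb') as file:
      (revision, output) = Socrata(auth).create(
          name='My Dataset',
          description='One-time upload of a local CSV',
      ).csv(file)
      revision.apply(output_schema=output)

Even with a script, each re-run is still a manual step; if your data changes on a schedule, the automated pipeline described in the Data Pipeline section is the better fit.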

Required fields

We require every dataset to contain two columns: data_as_of and data_loaded_at. Though they sound similar, they capture two different things, and both are important.

  • data_as_of: Timestamp when the record (row) was last updated in the source system. Said another way, this is how fresh this row of data is.

  • data_loaded_at: Timestamp when the record (row) was last updated here (in the data portal). For manual uploads, a current datetime stamp can be added using a column transformation in Socrata: to_floating_timestamp(source_created_at(), 'US/Pacific')

When manually uploading data, you may need to create a new column for the "data_loaded_at" field. You can find instructions to add a new column here, or you can reach out to support@datasf.org for help.
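
If you prefer to prepare the file on your own machine before uploading it, the minimal sketch below adds both required columns with pandas. The input file name and the 'updated' source column are our assumptions; adjust them to match your data.

  import pandas as pd

  df = pd.read_csv('my_dataset.csv')

  # data_as_of: when each row was last updated in the source system.
  # Here we assume the source file has an 'updated' column to reuse.
  df['data_as_of'] = pd.to_datetime(df['updated'])

  # data_loaded_at: when the data lands in the portal. For a manual
  # upload, stamping the current Pacific time mirrors the Socrata
  # column transformation shown above.
  df['data_loaded_at'] = pd.Timestamp.now(tz='US/Pacific')

  df.to_csv('my_dataset_ready.csv', index=False)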

Please see the metadata section for a full overview of metadata standards.
