Let your data scientists do data science.



Job scheduling

Use Cadence's intuitive scheduling interface to quickly set up complex triggers for your data operations. No need to learn and configure Airflow or other complicated scheduling packages. Cadence can intelligently modify your schedule to avoid data corruption and maximize throughput.

Distributed execution

Choose which hardware you want each job to run on — Cadence will spin up the machine of your choice, load your supporting libraries, execute your job, and clean up afterwards.


Job monitoring

Track the health of your jobs — if they're failing, Cadence provides error logs to help you debug quickly. If you enable advanced health checks, Cadence will learn your jobs' write patterns and can notify you of subtle data errors that might not otherwise raise execution errors.


SQL hooks

Cadence lets you define SQL hooks into your data targets. Pull your data into your preferred analysis environment simply by hitting the API endpoint for a hook you've defined. Cadence even lets you parametrize hooks for greater flexibility.
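For example, pulling a parametrized hook's output into Python could be as simple as the sketch below. The endpoint URL, hook name, and start_date parameter are all illustrative; your real endpoints are generated when you define a hook.

    import requests

    # Hypothetical endpoint; Cadence generates the real URL when you define a hook.
    url = "https://api.cadencedata.example/hooks/daily_signups"

    # Parameters are substituted into the hook's SQL before it runs.
    resp = requests.get(url, params={"start_date": "2018-01-01"}, timeout=30)
    resp.raise_for_status()
    rows = resp.json()  # hooks can return JSON or CSV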


How it works

1 Register data targets

Register the database nodes and database tables (data targets) that your jobs will operate on. In most cases, this is as easy as providing a connection string.
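For example, a PostgreSQL target can be registered with a standard connection URI. The host, credentials, and database name below are illustrative:

    postgresql://cadence_user:s3cret@db.internal.example:5432/analytics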

2 Upload kits

A kit is a zipped folder containing your data operations code and any necessary supporting libraries. Upload your kits through the Cadence interface.
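An illustrative kit layout before zipping (the file names are hypothetical; a kit just needs to bundle your code and its supporting libraries):

    churn_model_kit/
    ├── run_etl.py   # entry point: your data operations code
    └── libs/        # supporting libraries bundled with the kit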

3 Schedule jobs

Once you've uploaded kits, define and schedule jobs to execute your code. To create a job, supply the file to run, the schedule on which to run it, and the hardware to provision. You can also specify inter-job dependencies and enable advanced health checks.
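Conceptually, a job definition pairs those pieces. The sketch below is illustrative shorthand, not Cadence's actual configuration syntax:

    file:          churn_model_kit/run_etl.py
    schedule:      daily at 02:00 UTC
    hardware:      4 vCPUs, 16 GB RAM
    depends on:    raw_ingest_job
    health checks: advanced (enabled)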

4 Monitor results

As your jobs run, the Cadence dashboard displays the health of each one and shows the output your jobs produce, making debugging easy. Additionally, Cadence can notify you of subtler data errors based on each job's previous activity.

5 Define hooks

Through the Cadence interface, provide a SQL snippet that acts on a given target, and we'll give you an API endpoint that returns the result of that query as a CSV or JSON file. This step is optional; you can always query your data targets directly with SQL.
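For instance, a hook might be defined with a snippet like the one below; the table name and parameter placeholder syntax are hypothetical:

    -- Illustrative hook against a 'signups' target table.
    SELECT user_id, signup_date, plan
    FROM signups
    WHERE signup_date >= {{start_date}};

Cadence would then hand back an endpoint that returns this query's result as CSV or JSON.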

6 Build models

Congratulations! You can now query your data targets with your hooks or directly in SQL. Relax — your data scientists can take it from here :-)
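As one last illustrative sketch, a data scientist could pull a hook's CSV output straight into a pandas DataFrame (the endpoint is the hypothetical one from above):

    import pandas as pd

    # pandas can read CSV directly from an HTTP endpoint.
    df = pd.read_csv(
        "https://api.cadencedata.example/hooks/daily_signups?start_date=2018-01-01"
    )
    print(df.describe())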


Choose tools that watch your back.

Fine-tune Cadence's access level for each of your data targets. Your data always stays on your own instances, and any sensitive metadata we're required to store is encrypted.


  • Implicit dependency inference prevents concurrent writes
  • Intelligent monitoring discovers abnormal events
  • Dependency cascade halts execution to avoid data corruption


  • Data is always stored on your own instances
  • Access credentials are encrypted on our servers
  • Job environment variables can be encrypted

Get in touch

Interested? Let's talk.

Whether you just use data for business intelligence or data is your core competency, Cadence can save your business time and money. All inquiries are strictly confidential.


Copyright © 2018 Cadence Data. All rights reserved.

Any data you submit to us through the above form is for internal use only and will never be shared, released, or otherwise published without your consent. Your inquiry is confidential.

Follow us on Twitter @CadenceData.
