Launching jobs on Lakeflow
This tool is an opinionated way to spawn compute jobs on the cloud. By "compute
job", I mean a massively parallel data processing job: training a deep net,
analyzing a large corpus of text sitting in an S3 bucket, or running 1000
parallel simulations of something. To let you do these things, this package
asks you to author your code as a Python package and forces you to specify your
package dependencies in a pyproject.toml. It then uploads that package (as a
Python wheel) for Databricks to execute.
This is heavier-weight than Databricks' built-in notebook approach of editing a Python script in their web UI. In return, it lets you capture large package dependencies across repos via git submodules, and import third-party packages via uv. It's lighter-weight than most other job submission systems because it doesn't require you to build Docker containers. Docker containers snapshot enough of your system to reproduce a full Unix environment; these snapshots are on the order of gigabytes and are difficult to upload from a home computer. For most of our work, wheels provide all the containerization we need (a wheel is a few kilobytes).
It has one more opinion: that uv, together with a pyproject.toml, is a good way
to capture those Python dependencies. We're also exploring
Pants as a way to manage more complex packages. Pants can also export wheels, so nothing in this design prevents us from adopting Pants.
You can use this tool to build your wheel, upload it to Databricks, spawn copies of it each with different command line arguments, and track the status of your jobs. You can also use the Databricks UI to check the state of your jobs. The tool provides several interfaces:
- An MCP server so you can have AIs spawn jobs for you.
- A CLI you can use from the shell.
- A programmatic Python interface you can call from a Python program.
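The programmatic interface mirrors the CLI verbs documented below. As a purely hypothetical sketch (these function names are illustrative, not the tool's actual API; check lakeflow.py for the real entry points), driving it from Python might look like this:

```python
# HYPOTHETICAL sketch: these names mirror the CLI verbs below and are not
# the tool's documented API. Check lakeflow.py for the real entry points.
import lakeflow

# Build the wheel, upload it, and create a job (mirrors create-job-from-source).
job_id = lakeflow.create_job_from_source(
    job_name="my-lakeflow-job",
    package_name="my-package",
    pyproject_dir_path="~/my_project",
    max_workers=4,
)

# Start runs with different command line arguments (mirrors trigger-run).
for args in (["arg11", "arg12"], ["arg21", "arg22"]):
    lakeflow.trigger_run(job_id, *args)
```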
Getting access to Databricks
Check if you have access to Databricks by visiting this URL. If you get stuck in an infinite loop where Databricks sends you a code that doesn't work, it means you don't have an account. Ask for one in #help-data-platform.
Your package's structure
This package assumes that the package you want to run on the cluster has a
structure like the following and that it can be run with uv run:
```
my_project/
├── pyproject.toml
└── src/
    └── my_package/
        ├── __init__.py
        └── my_package_py.py
```
It also assumes you've added an entry point to your pyproject.toml called
"lakeflow-task". If your package is called my_package, and it has a driver
script called my_package_py.py, and the main function in this script is called
main, you would define the "lakeflow-task" entry point like this:
```toml
[project.scripts]
lakeflow-task = "my_package.my_package_py:main"
```
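The driver script itself just needs a main function that reads its command line arguments (the ones you'll pass to trigger-run below). A minimal sketch of my_package_py.py, assuming the arguments arrive via sys.argv:

```python
# src/my_package/my_package_py.py
# Minimal sketch of a driver script. The "lakeflow-task" entry point above
# points at main(); arguments passed to trigger-run arrive in sys.argv.
import sys


def main() -> None:
    args = sys.argv[1:]
    print(f"my_package started with arguments: {args}")
    # ... do the actual work here ...


if __name__ == "__main__":
    main()
```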
The package `lakeflow_demo` under this directory gives you a concrete example of how to set up a package.
Building and launching your package with the CLI
To run the package on the cluster, first build the wheel, then upload it, then tell Databricks to run it.
To make it easier to track lineage for your artifacts and your runs, the build
step embeds the current git commit hash into the wheel version (e.g.
0.1.0.devabcdef1234...). This requires all changes in your working tree to
be committed before building. Otherwise, the build will fail with an error
asking you to commit or stash.
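As a rough sketch of the idea (not necessarily how this tool's build step implements it), a version like that can be derived from git as follows:

```python
# Sketch of the idea only; the tool's actual build step may differ.
import subprocess


def version_from_git(base_version: str = "0.1.0") -> str:
    """Embed the current commit hash into the wheel version."""
    # Refuse to build from a dirty tree so the hash really identifies the code.
    status = subprocess.run(
        ["git", "status", "--porcelain"], capture_output=True, text=True, check=True
    )
    if status.stdout.strip():
        raise RuntimeError("Uncommitted changes in the working tree; commit or stash them.")
    commit = subprocess.run(
        ["git", "rev-parse", "HEAD"], capture_output=True, text=True, check=True
    ).stdout.strip()
    # Mirrors the 0.1.0.devabcdef1234... format mentioned above.
    return f"{base_version}.dev{commit}"
```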
Create the job from source:
You can use `create-job-from-source` to build, upload, and create the job. If you don't pass a `--cluster-id`, a new cluster is created automatically:

```
uv run lakeflow.py create-job-from-source \
  "my-lakeflow-job" \
  "my-package" \
  --pyproject-dir-path ~/my_project \
  --max-workers 4
```

This returns the job ID, which we'll use in the next step. This doesn't yet run any jobs. It just starts a cluster that can run them. The `--max-workers` argument sets the maximum number of workers for autoscaling on the new cluster.

To use an existing cluster instead, pass `--cluster-id`:

```
uv run lakeflow.py create-job-from-source \
  "my-lakeflow-job" \
  "my-package" \
  --pyproject-dir-path ~/my_project \
  --cluster-id 0202-235755-w37hoxe8
```

If the cluster is not running, it will be started automatically.
You can also create a cluster explicitly and reuse it across multiple jobs:

```
uv run lakeflow.py create-cluster --max-workers 4
```

This returns a cluster ID you can pass to `create-job-from-source` or `create-job` via `--cluster-id`.

Start the job:
```
uv run lakeflow.py trigger-run 123456 arg11 arg12
uv run lakeflow.py trigger-run 123456 arg21 arg22
uv run lakeflow.py trigger-run 123456 arg31 arg32
```

This starts three instances of the job (here 123456 is the job ID returned by `create-job-from-source`), each with its own command line arguments.
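For reference, a trigger like this corresponds to the Databricks Jobs run-now API. A minimal sketch using the official databricks-sdk package (an assumption about the plumbing; the tool's own implementation may differ), with the placeholder job ID from above:

```python
# Sketch using the official databricks-sdk; lakeflow.py's own plumbing may differ.
from databricks.sdk import WorkspaceClient

# The client reads DATABRICKS_HOST and DATABRICKS_TOKEN from the environment.
w = WorkspaceClient()

# python_params become the command line arguments of the wheel's entry point.
# .result() blocks until the run reaches a terminal state.
run = w.jobs.run_now(job_id=123456, python_params=["arg11", "arg12"]).result()
print(run.state)
```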
Tools
- `create-job-from-source`: Builds a wheel from source, uploads it to Databricks, and creates a job.
- `create-cluster`: Creates a new Databricks cluster for running jobs.
- `trigger-run`: Triggers a specific job run with provided command line arguments.

Environment Variables
- `DATABRICKS_HOST` (required): The URL of the Databricks workspace.
- `DATABRICKS_TOKEN` (required): Authentication token for the Databricks API.
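These are the standard variables the Databricks SDK and CLI read for authentication. A small fail-fast sketch, assuming you want to verify them before launching anything:

```python
# Fail-fast check for the two required variables listed above.
import os

for var in ("DATABRICKS_HOST", "DATABRICKS_TOKEN"):
    if not os.environ.get(var):
        raise SystemExit(f"{var} is not set; export it before running lakeflow.py")
```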