- The GCSComputeLogManager can now be configured with a string or environment variable instead of passing a path to a credentials file. Thanks @silentsokolov!
- Fixed a bug where dagster instance migrate would run out of memory when migrating over long run histories.
- Fixed a bug in dagster_aws.s3.sensor.get_s3_keys that would return no keys if an invalid s3 key was provided.
- Fixed a bug where log calls like my_log.info("foo %s", "bar") would cause errors in some scenarios.
- Dagit now displays an asset graph for jobs built with build_assets_job. The asset graph shows each node in the job's graph with metadata about the asset it corresponds to, including asset materializations. It also contains links to upstream jobs that produce assets consumed by the job, as well as downstream jobs that consume assets produced by the job.
- Fixed a bug in load_assets_from_dbt_project and load_assets_from_dbt_manifest that would cause runs to fail if no runtime_metadata_fn argument was supplied.
- Fixed a bug that caused @asset not to infer the types of inputs and outputs from the type annotations of the decorated function.
- @asset now accepts a compute_kind argument. You can supply values like "spark", "pandas", or "dbt", and see them represented as a badge on the asset in the Dagit asset graph (see the sketch after this list).
- Changed VersionStrategy.get_solid_version and VersionStrategy.get_resource_version to take in a SolidVersionContext and ResourceVersionContext, respectively. This gives VersionStrategy access to the config (in addition to the definition object) when determining the code version for memoization. Thanks @RBrossard!
  Note: This is a breaking change for anyone using the experimental VersionStrategy API. Instead of directly being passed solid_def and resource_def, you should access them off of the context object using context.solid_def and context.resource_def respectively (see the sketch after this list).
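A minimal migration sketch for the VersionStrategy change: context.solid_def and context.resource_def are as described above, while the solid_config and resource_config attribute names are assumptions, and the hashing scheme is purely illustrative.

```python
from dagster import VersionStrategy  # import path may vary across versions


class ConfigAwareVersionStrategy(VersionStrategy):
    def get_solid_version(self, context):
        # Before this change, solid_def was passed in directly; it now lives
        # on the context, alongside the solid's config (`solid_config` is an
        # assumed attribute name based on the note above).
        return str(hash((context.solid_def.name, repr(context.solid_config))))

    def get_resource_version(self, context):
        # Likewise, resource_def is now accessed off the context.
        return str(hash((context.resource_def.version, repr(context.resource_config))))
```

And a sketch of the experimental asset APIs mentioned above, assuming top-level imports (import paths changed across the experimental releases); the asset bodies and names are illustrative.

```python
from dagster import asset, build_assets_job  # experimental APIs

@asset(compute_kind="pandas")  # rendered as a badge on the asset in Dagit
def raw_users() -> list:
    # parameter/return annotations are used to infer input and output types
    return ["alice", "bob"]

@asset(compute_kind="pandas")
def user_count(raw_users: list) -> int:
    # naming the argument after the upstream asset wires the dependency
    return len(raw_users)

users_job = build_assets_job("users_job", assets=[raw_users, user_count])
```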
- Fixed an issue that caused the emr_pyspark_step_launcher to fail when stderr included non-Log4J-formatted lines.
- Fixed an issue that caused the applyPerUniqueValue config on the QueuedRunCoordinator to fail Helm schema validation.
- Added the @asset decorator and build_assets_job APIs to construct asset-based jobs, along with Dagit support.
- Added load_assets_from_dbt_project and load_assets_from_dbt_manifest, which enable constructing asset-based jobs from DBT models.
- [dagstermill] You can now have more precise IO control over the output notebooks by specifying output_notebook_name in define_dagstermill_solid and providing your own IO manager via the "output_notebook_io_manager" resource key (see the sketch after this list).
  - We've deprecated the output_notebook argument to define_dagstermill_solid in favor of output_notebook_name.
  - Previously, the output notebook functionality required the "file_manager" resource and resulted in a FileHandle output. Now, when specifying output_notebook_name, it requires the "output_notebook_io_manager" resource and results in a bytes output.
  - You can customize your own "output_notebook_io_manager" by extending OutputNotebookIOManager. A built-in local_output_notebook_io_manager is provided for handling local output notebook materialization.
  - See the detailed migration guide in https://github.com/dagster-io/dagster/pull/4490.
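For illustration, a minimal sketch of the new wiring: the notebook path and names are hypothetical, and the local_output_notebook_io_manager import location may vary by version.

```python
import dagstermill as dm
from dagster import ModeDefinition, pipeline
from dagstermill import local_output_notebook_io_manager

analysis = dm.define_dagstermill_solid(
    "analysis",
    notebook_path="notebooks/analysis.ipynb",  # hypothetical notebook path
    output_notebook_name="output_analysis",    # replaces the deprecated output_notebook
)

@pipeline(
    mode_defs=[
        ModeDefinition(
            resource_defs={
                # the executed notebook is now persisted via this resource key
                "output_notebook_io_manager": local_output_notebook_io_manager,
            }
        )
    ]
)
def analysis_pipeline():
    analysis()
```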
- Dagit fonts have been updated.
- Fixed a bug where context.log.info("foo %s", "bar") would not get formatted as expected.
- Fixed an issue that caused the QueuedRunCoordinator's tag_concurrency_limits to not be respected in some cases.
- You can now set tags on a graph via the tags argument of the @graph decorator or the GraphDefinition constructor. These tags will be set on any runs of jobs that are built from invoking to_job on the graph (see the sketch below).
- You can now specify a custom image per solid when using the k8s_job_executor or celery_k8s_job_executor. Use the key image inside the container_config block of the k8s solid tag.
- Sensors can now target multiple jobs via the jobs argument. Each RunRequest emitted from a multi-job sensor's evaluation function must specify a job_name (see the sketch below).
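A minimal sketch tying the last three items together, using the solid-era APIs described above; the image name, tag values, and solid/graph/sensor names are all illustrative.

```python
from dagster import RunRequest, graph, sensor, solid

@solid(
    tags={
        # per-solid image override for the k8s executors; the image name is hypothetical
        "dagster-k8s/config": {"container_config": {"image": "my-registry/ping:latest"}}
    }
)
def ping():
    pass

# tags passed to @graph are applied to runs of jobs built from it with to_job
@graph(tags={"team": "data-eng"})
def ping_graph():
    ping()

ping_job = ping_graph.to_job(name="ping_job")

@sensor(jobs=[ping_job])
def ping_sensor(_context):
    # each RunRequest from a multi-job sensor must name its target job
    yield RunRequest(run_key=None, job_name="ping_job")
```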