[dagster-k8s] The k8s_job_executor is no longer experimental, and is recommended for production workloads. This executor runs each op in a separate Kubernetes job. We recommend this executor for Dagster jobs that require greater isolation than the multiprocess executor can provide within a single Kubernetes pod. The celery_k8s_job_executor will still be supported, but is recommended only for use cases where Celery is required (the most common example is enforcing step concurrency limits across multiple Celery queues). Otherwise, the k8s_job_executor is the best way to get Kubernetes job isolation.
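For example, a job can opt into this executor directly (a minimal sketch; the op and job names here are placeholders, and executor configuration such as the job image is supplied separately):

from dagster import job, op
from dagster_k8s import k8s_job_executor

@op
def extract():
    return [1, 2, 3]

@op
def load(rows):
    print(len(rows))

# Each op in this job runs in its own Kubernetes job rather than in the run worker pod.
@job(executor_def=k8s_job_executor)
def etl():
    load(extract())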
[dagster-airflow] Updated dagster-airflow to better support job/op/graph changes by adding a make_dagster_job_from_airflow_dag factory function. Deprecated pipeline_name argument in favor of job_name in all the APIs.
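As a sketch of the new factory, assuming an Airflow 2-style DAG definition (the DAG and task here are placeholders):

from datetime import datetime

from airflow import DAG
from airflow.operators.bash import BashOperator
from dagster_airflow import make_dagster_job_from_airflow_dag

dag = DAG(dag_id="hello_dag", start_date=datetime(2021, 1, 1), schedule_interval=None)
hello_task = BashOperator(task_id="hello", bash_command="echo hello", dag=dag)

# Converts the Airflow DAG into a Dagster job whose ops wrap the DAG's tasks.
hello_job = make_dagster_job_from_airflow_dag(dag)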
Removed a version pin of the chardet library that was required due to an incompatibility with an old version of the aiohttp library, which has since been fixed.
We now raise a more informative error if the wrong type is passed to the ins argument of the op decorator.
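For reference, ins expects a mapping of input names to In objects, for example:

from dagster import In, op

# ins maps each input name to an In object (rather than, e.g., a bare type).
@op(ins={"number": In(int)})
def double(number):
    return number * 2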
In the Dagit Launchpad, the button for launching a run now says “Launch Run” instead of “Launch Execution”.
Changed the job explorer view in Dagit to show asset-based graphs when the experimental Asset API flag is turned on for any job that has at least one software-defined asset.
Updated dagstermill to better support job/op/graph changes by adding a define_dagstermill_op factory function. Also updated documentation and examples to reflect these changes.
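A minimal sketch of the new factory (the op name and notebook path are placeholders):

from dagstermill import define_dagstermill_op

# Wraps a Jupyter notebook as an op that can be composed into a job or graph.
clean_data = define_dagstermill_op(
    name="clean_data",
    notebook_path="notebooks/clean_data.ipynb",
)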
Changed run history for jobs in Dagit to include legacy mode tags for runs that were created from pipelines that have since been converted to use jobs.
The new get_dagster_logger() function is now importable from the top-level dagster module (from dagster import get_dagster_logger).
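For example:

from dagster import get_dagster_logger, op

@op
def emit_greeting():
    # Behaves like a standard python logger, and its output is captured in the Dagster event log.
    logger = get_dagster_logger()
    logger.info("hello from get_dagster_logger")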
[dagster-dbt] All dagster-dbt resources (dbt_cli_resource, dbt_rpc_resource, and dbt_rpc_sync_resource) now support the dbt ls command: context.resources.dbt.ls().
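For example, with dbt_cli_resource (the project directory below is a placeholder):

from dagster import job, op
from dagster_dbt import dbt_cli_resource

@op(required_resource_keys={"dbt"})
def list_dbt_models(context):
    # Runs `dbt ls` against the configured project.
    context.resources.dbt.ls()

@job(resource_defs={"dbt": dbt_cli_resource.configured({"project_dir": "path/to/dbt_project"})})
def dbt_ls_job():
    list_dbt_models()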
Added ins and outs properties to OpDefinition.
Updated the run status favicon of the Run page in Dagit.
There is now a resources_config argument on build_solid_context. The config argument has been renamed to solid_config.
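For example (the resource, solid, and config values are placeholders, and the shape of resources_config is assumed to mirror the resources section of run config):

from dagster import build_solid_context, resource, solid

@resource(config_schema={"host": str})
def api_resource(init_context):
    return init_context.resource_config["host"]

@solid(required_resource_keys={"api"}, config_schema={"limit": int})
def fetch_records(context):
    return (context.resources.api, context.solid_config["limit"])

context = build_solid_context(
    resources={"api": api_resource},
    # resources_config supplies config for the resources passed above (shape assumed here).
    resources_config={"api": {"config": {"host": "localhost"}}},
    # formerly the `config` argument; now named solid_config.
    solid_config={"limit": 10},
)
result = fetch_records(context)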
[helm] When deploying Redis using the Dagster helm chart, the newly created Redis cluster no longer requires authentication for connections by default.
[dagster-k8s] The component name on Kubernetes jobs for run and step workers is now run_worker and step_worker, respectively.
Improved performance for rendering the Gantt chart on the Run page for runs with very long event logs.
When using multiple Celery workers in the Dagster helm chart, each worker can now be individually configured. See the helm chart for more information. Thanks @acrulopez!
[dagster-k8s] Changed Kubernetes job containers to use the fixed name dagster, rather than repeating the job name. Thanks @skirino!
[dagster-docker] Added a new docker_executor which executes steps in separate Docker containers.
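For example:

from dagster import job, op
from dagster_docker import docker_executor

@op
def say_hello():
    print("hello")

# Each op in this job executes in its own Docker container.
@job(executor_def=docker_executor)
def containerized_job():
    say_hello()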
The dagster-daemon process can now detect hanging runs and restart crashed run workers. Currently only supported for jobs using the docker_executor and k8s_job_executor. Enable this feature in your dagster.yaml with:
run_monitoring:
  enabled: true
Documentation coming soon. Reach out in the #dagster-support Slack channel if you are interested in using this feature.
Previously, the Dagster CLI would use a completely ephemeral dagster instance if $DAGSTER_HOME was not set. Since the new job abstraction by default requires a non-ephemeral dagster instance, this has been changed to instead create a persistent instance that is cleaned up at the end of an execution.
The job, op, and graph APIs now represent the stable core of the system, and replace pipelines, solids, composite solids, modes, and presets as Dagster’s core abstractions. All of Dagster’s documentation - tutorials, examples, table of contents - is in terms of these new core APIs. Pipelines, modes, presets, solids, and composite solids are still supported, but are now considered “Legacy APIs”. We will maintain backwards compatibility with the legacy APIs for some time; however, we believe the new APIs represent an elegant foundation for Dagster going forward. As time goes on, we will be adding new features that only apply to the new core. All in all, the new APIs provide increased clarity - they unify related concepts, make testing more lightweight, and simplify operational workflows in Dagit. For comprehensive instructions on how to transition to the new APIs, refer to the migration guide.
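As a minimal illustration of the new core APIs (names here are placeholders):

from dagster import get_dagster_logger, job, op

@op
def get_name():
    return "dagster"

@op
def hello(name):
    get_dagster_logger().info(f"Hello, {name}!")

# A job wires ops together into an executable graph, replacing pipeline/solid/mode/preset wiring.
@job
def hello_job():
    hello(get_name())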
Dagit has received a complete makeover. This includes a refresh to the color palette and general design patterns, as well as functional changes that make common Dagit workflows more elegant. These changes are designed to go hand in hand with the new set of core APIs to represent a stable core for the system going forward.
You no longer have to pass a context object around to do basic logging. Many updates have been made to our logging system to make it more compatible with the python logging module. You can now capture logs produced by standard python loggers, set a global python log level, and set python log handlers that will be applied to every log message emitted from the Dagster framework. Check out the docs here!
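For example, a dagster.yaml along these lines (a sketch; the logger name is a placeholder, and the exact keys are described in the logging docs) captures output from a named python logger and sets a global log level:
python_logs:
  python_log_level: INFO
  managed_python_loggers:
    - my_app_logger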
The Dagit “playground” has been renamed to the Dagit “launchpad”. This reflects a vision of the tool closer to how our users actually interact with it - not just as a testing/development tool, but also as a first-class starting point for many one-off workflows.
Introduced a new integration with Microsoft Teams, which includes a connection resource and support for sending messages to Microsoft Teams. See details in the API Docs (thanks @iswariyam!).
Intermediate storages, which were deprecated in 0.10.0, have now been removed. Refer to the “Deprecation: Intermediate Storage” section of the 0.10.0 release notes for how to use IOManagers instead.
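As a minimal sketch of the IOManager pattern that replaces intermediate storage (the class and function names are placeholders):

from dagster import IOManager, io_manager

class LocalPickleIOManager(IOManager):
    def handle_output(self, context, obj):
        # Persist an op/solid output; replaces intermediate storage writes.
        ...

    def load_input(self, context):
        # Load an upstream output for a downstream input; replaces intermediate storage reads.
        ...

@io_manager
def local_pickle_io_manager(_):
    return LocalPickleIOManager()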
The pipeline-level event types in the run log have been renamed so that the PIPELINE prefix has been replaced with RUN. For example, the PIPELINE_START event is now the RUN_START event.
Addition of the get_dagster_logger function, which creates a Python logger whose output messages will be captured and converted into Dagster log messages.
We have renamed many of our GraphQL types to reflect our emphasis on the new job/op/graph APIs. We have made the existing types backwards compatible so that GraphQL fragments should still work. However, if you are making custom GraphQL requests to your Dagit webserver, you may need to change your code to handle the new types.
We have paired our GraphQL changes with changes to our Python GraphQL client. If you have upgraded the version of your Dagit instance, you will most likely also want to upgrade the version of your Python GraphQL client.