- Added a `tag_concurrency_limits` key that allows you to specify limits on the number of ops with certain tags that can be executing at once within a single run. See the docs for more information.
- `ExecuteInProcessResult`, the type returned by `materialize`, `materialize_to_memory`, and `execute_in_process`, now has an `asset_value` method that allows you to fetch output values by asset key.
- `AssetIn`s can now accept `Nothing` for their `dagster_type`, which allows omitting the input from the parameters of the `@asset`- or `@multi_asset`-decorated function. This is useful when you want to specify a partition mapping or metadata for a non-managed input. (See the sketch below.)
- The `start_offset` and `end_offset` arguments of `TimeWindowPartitionMapping` now work across `TimeWindowPartitionsDefinition`s with different start dates and times.
- If `add_output_metadata` is called multiple times within an op, asset, or IO manager `handle_output`, the values will now be merged, instead of later dictionaries overwriting earlier ones.
- `materialize` and `materialize_to_memory` now both accept a `tags` argument.
- Added `SingleDimensionDependencyMapping`, a `PartitionMapping` object that defines a correspondence between an upstream single-dimensional partitions definition and a downstream `MultiPartitionsDefinition`.
- The `RUN_DEQUEUED` event has been removed from the event log, since it was duplicative with the `RUN_STARTING` event.
- … `raise Exception() from e` syntax.
- [dagster-k8s] Run-level Kubernetes configuration can now be specified in the Dagster Helm chart using the `runK8sConfig` key in the `k8sRunLauncher` section. See the docs for more information.
- [dagster-k8s] `securityContext` can now be set in the `k8sRunLauncher` section of the Dagster Helm chart.
- [dagster-aws] The `EcsRunLauncher` can now be configured with `cpu` and `memory` resources for each launched job. Previously, individual jobs needed to be tagged with CPU and memory resources. See the docs for more information.
- [dagster-aws] The `S3ComputeLogManager` now takes an `upload_extra_args` argument, which is passed through as the `ExtraArgs` parameter to the file upload call.
- Fixed an issue that caused `ExperimentalWarning`s related to `LogicalVersion`s to appear even when version-based staleness was not in use.
- The `load_assets_from_modules` and `load_assets_from_package_module` utilities will now also load cacheable assets from the specified modules.
- The `dequeue_num_workers` config setting on `QueuedRunCoordinator` is now respected.
- [dagster-databricks] The `databricks_pyspark_step_launcher` will now cancel the relevant Databricks job if the Dagster step execution is interrupted.
- [dagster-databricks] Previously, the `databricks_pyspark_step_launcher` could exit with an unhelpful error after receiving an `HTTPError` from Databricks with an empty message. This has been fixed.
- [dagster-snowflake] Fixed a bug where calling `execute_queries` or `execute_query` on a `snowflake_resource` would raise an error unless the `parameters` argument was explicitly set.
- [dagster-aws] Fixed an issue in the `EcsRunLauncher` when launching many runs in parallel. Previously, each run risked hitting a `ClientError` in AWS for registering too many concurrent changes to the same task definition family. Now, the `EcsRunLauncher` recovers gracefully from this error by retrying it with backoff.
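A minimal sketch of the `Nothing` input pattern noted above; the asset names are hypothetical:

```python
from dagster import AssetIn, Nothing, asset


@asset
def upstream_asset():
    return 1


# Declaring the input as Nothing records the dependency without loading a
# value, so the decorated function omits the parameter entirely.
@asset(ins={"upstream_asset": AssetIn(dagster_type=Nothing)})
def downstream_asset():
    return 2
```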
- [dagster-airflow] Added `make_dagster_definitions_from_airflow_dags_path` and `make_dagster_definitions_from_airflow_dag_bag` for creating Dagster definitions from a given Airflow DAG file path or `DagBag`.
- … `UPathIOManager`, thanks @danielgafni!
- `FakeS3Session` now includes additional functions and improvements to align with the boto3 S3 client API, thanks @asharov!
- Added `BranchingIOManager` to model the use case where you wish to read upstream assets from production environments and write them into a development environment.
- Added `create_repository_using_definitions_args` to allow for the creation of named repositories.
- Added `DbtManifestAssetSelection`, which allows you to define selections of assets loaded from a dbt manifest using dbt selection syntax (e.g. `tag:foo,path:marts/finance`).
- … `@repository`, replacing them with `Definitions`.
- `Definitions` is no longer marked as experimental and is the preferred API over `@repository` for new users of Dagster. Examples, tutorials, and documentation have largely been ported to this new API. No migration is needed. Please see the GitHub discussion for more details.
- `TimeWindowPartitionMapping` now accepts `start_offset` and `end_offset` arguments that allow specifying that time partitions depend on earlier or later time partitions of upstream assets. (See the sketch below.)
- `dagit` can now accept multiple arguments for the `-m` and `-f` flags. For each argument a new code location is loaded.
- Schedules created by `build_schedule_from_partitioned_job` now execute more performantly: in constant time, rather than in time linear in the number of partitions.
- The `QueuedRunCoordinator` now supports `dequeue_use_threads` and `dequeue_num_workers` options to enable concurrent run dequeue operations for greater throughput.
- [dagster-dbt] `load_assets_from_dbt_project`, `load_assets_from_dbt_manifest`, and `load_assets_from_dbt_cloud_job` now support applying freshness policies to loaded nodes. To do so, you can apply `dagster_freshness_policy` config directly in your dbt project, e.g. `config(dagster_freshness_policy={"maximum_lag_minutes": 60})` would result in the corresponding asset being assigned a `FreshnessPolicy(maximum_lag_minutes=60)`.
- The `DAGSTER_RUN_JOB_NAME` environment variable is now set in containerized environments spun up by our run launchers and executors.
- [dagster-airflow] `make_dagster_repo_from_airflow_dags_path`, `make_dagster_job_from_airflow_dag`, and `make_dagster_repo_from_airflow_dag_bag` have a new `connections` parameter, which allows for configuring the Airflow connections used by migrated DAGs.
- Fixed a bug where the `log` property was not available on the `RunStatusSensorContext` context object provided for run status sensors for sensor logging.
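A sketch of the `start_offset`/`end_offset` arguments described above, assuming hypothetical daily-partitioned assets:

```python
from dagster import (
    AssetIn,
    DailyPartitionsDefinition,
    TimeWindowPartitionMapping,
    asset,
)

daily = DailyPartitionsDefinition(start_date="2022-01-01")


@asset(partitions_def=daily)
def events():
    ...


# Each partition of `rolling_summary` reads the matching `events` partition
# plus the previous day's partition (start_offset=-1, end_offset=0).
@asset(
    partitions_def=daily,
    ins={
        "events": AssetIn(
            partition_mapping=TimeWindowPartitionMapping(start_offset=-1, end_offset=0)
        )
    },
)
def rolling_summary(events):
    ...
```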
- Fixed a bug where the re-execute button on runs of asset jobs would incorrectly show a warning icon, indicating that the pipeline code may have changed since you last ran it.
- Fixed an issue that caused metadata supplied to graph-backed assets to not be viewable in the UI.
- Fixed an issue where schedules often took up to 5 seconds to start after their tick time.
- Fixed an issue where Dagster failed to load a dagster.yaml file that used an environment variable to specify the folder to use for SQLite storage.
- Fixed an issue that caused the k8s/docker executors to unnecessarily reload `CacheableAssetsDefinition`s (such as those created when using `load_assets_from_dbt_cloud_job`) on each step execution.
- [dagster-airbyte] Fixed an issue where Python-defined Airbyte sources and destinations were occasionally recreated unnecessarily.
- Fixed an issue with `build_asset_reconciliation_sensor` that would cause it to ignore in-progress runs in some cases.
- Fixed a bug where GQL errors would be thrown in the asset explorer when a previously materialized asset had its dependencies changed.
- [dagster-airbyte] Fixed an error when generating assets for normalization tables for connections with non-object streams.
- [dagster-dbt] Fixed an error where dbt Cloud jobs with `dbt run` and `dbt run-operation` were incorrectly validated.
- [dagster-airflow] `use_ephemeral_airflow_db` now works when running within a PEX deployment artifact. (See the sketch below.)
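A minimal sketch of the `use_ephemeral_airflow_db` option referenced above, assuming it is passed to `make_dagster_job_from_airflow_dag` and that a DAG object `my_dag` is importable (both hypothetical here):

```python
from dagster_airflow import make_dagster_job_from_airflow_dag

from my_project.dags import my_dag  # hypothetical Airflow DAG import

# use_ephemeral_airflow_db gives each run its own run-scoped Airflow database
# instead of sharing a persistent one.
migrated_job = make_dagster_job_from_airflow_dag(
    dag=my_dag,
    use_ephemeral_airflow_db=True,
)
```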
- Documentation has been updated to use `Definitions`. Any content not ported to `Definitions` in this release is in the process of being updated.
- When the default repository name `__repository__` is used for a repo, only the code location name will be shown. This change also applies to URL paths.
- Fixed a bug that caused `load_asset_value` to error with the default IO manager when a `partition_key` argument was provided.
- Previously, calling `context.partition_key` or `context.asset_partition_key_for_output` when invoking an asset directly (e.g. in a unit test) would result in an error. This has been fixed.
- … `RetryRequested` when using a retry policy.
- Fixed a bug where a `sqlite3.ProgrammingError` error was raised when creating an ephemeral `DagsterInstance`, most commonly when `build_resources` was called without passing in an `instance` parameter.
- [dagster-duckdb] Added a `duckdb_pyspark_io_manager` helper to automatically create a DuckDB I/O manager that can store and load PySpark DataFrames.
- [dagster-mysql] Fixed a bug where MySQL version 8.0.31 would raise an error on some run queries.
- The new `Definitions` entrypoint for tools and the UI has been added. A single `Definitions` object per code location may be instantiated, and accepts typed, named arguments, rather than the heterogeneous list of definitions returned from an `@repository`-decorated function. To learn more about this feature, and provide feedback, please refer to the GitHub discussion.
- [dagster-slack] The new `make_slack_on_freshness_policy_status_change_sensor` allows you to create a sensor to alert you when an asset is out of date with respect to its freshness policy (and when it’s back on time!).
- New `dagstermill` guide and reference page: https://docs.dagster.io/integrations/dagstermill
- New `dagster-snowflake` guide: https://docs.dagster.io/integrations/snowflake
- Added a `job_name` parameter to `InputContext`.
- Fixed the IO manager used when calling `execute_in_process` on a `GraphDefinition` (it would use the `fs_io_manager` instead of the in-memory IO manager).
- The `/instance` URL path prefix has been removed. E.g. `/instance/runs` can now be found at `/runs`.
- The `/workspace` URL path prefix has been changed to `/locations`. E.g. the URL for job `my_job` in repository `foo@bar` can now be found at `/locations/foo@bar/jobs/my_job`.
- [dagstermill] Dagster events can now be yielded from asset notebooks using `dagstermill.yield_event`.
- [dagstermill] Failed notebooks can be saved for inspection and debugging using the new `save_on_notebook_failure` parameter.
- [dagster-airflow] Added a new option `use_ephemeral_airflow_db`, which will create a job-run-scoped Airflow db for Airflow DAGs running in Dagster.
- Fixed a regression introduced in 1.0.16 for some compute log managers, where an exception in the compute log manager setup/teardown would cause runs to fail.
- … the `prefix` argument to prevent badly constructed paths.
- Run filtering no longer autocompletes tag values after typing `tag:`. This resolves an issue where retrieving the available tags could cause significant performance problems. Tags can still be searched with freeform text, and by adding them via click on individual run rows.
- Outputs can now be associated with asset keys by defining `get_output_asset_key` on the IOManager handling the output. Previously, this was experimental and undocumented.
- Sensor and schedule evaluation contexts now have an experimental `log` property, which logs events that can later be viewed in Dagit. To enable these log views in Dagit, navigate to the user settings and enable the "Experimental schedule/sensor logging view" option. Log links will now be available for sensor/schedule ticks where logs were emitted. Note: this feature is not available for users using the `NoOpComputeLogManager`. (A usage sketch follows.)
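A minimal sketch of the sensor `log` property from the last item; the sensor and job names are hypothetical:

```python
from dagster import RunRequest, sensor


@sensor(job_name="my_job")  # hypothetical target job
def my_logging_sensor(context):
    # Messages logged here are captured and can be viewed in Dagit once the
    # experimental schedule/sensor logging view is enabled.
    context.log.info("evaluating tick")
    yield RunRequest(run_key=None)
```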