AML Pipelines with CommandStep (R)
Copyright (c) Microsoft Corporation. All rights reserved.
Licensed under the MIT License.
How to use CommandStep in Azure ML Pipelines
This notebook shows how to use the CommandStep with Azure Machine Learning Pipelines for running R scripts in a pipeline.
The example shows training a model in R to predict probability of fatality for vehicle crashes.
Prerequisite:
- Understand the architecture and terms introduced by Azure Machine Learning
- If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, go through the configuration notebook to:
- install the Azure ML SDK
- create a workspace and its configuration file (`config.json`)
Let's get started. First let's import some Python libraries.
Initialize workspace
Initialize a Workspace object from the existing workspace you created in the Prerequisites step. Workspace.from_config() creates a workspace object from the details stored in config.json.
Create or Attach existing AmlCompute
You will need to create a compute target for training your model. In this tutorial, you create AmlCompute as your training compute resource.
Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist.
If a cluster with the given name cannot be found, a new one is created. We will create an AmlCompute cluster of STANDARD_D2_V2 CPU VMs. This process is broken down into 3 steps:
- create the configuration (this step is local and only takes a second)
- create the cluster (this step will take about 20 seconds)
- provision the VMs to bring the cluster to the initial size (of 1 in this case). This step will take about 3-5 minutes and is providing only sparse output in the process. Please make sure to wait until the call returns before moving to the next cell
Now that you have created the compute target, let's see what the workspace's compute_targets property returns. You should now see one entry named 'cpu-cluster' of type AmlCompute.
Create a CommandStep
CommandStep adds a step to run a command in a Pipeline. For the full set of configurable options see the CommandStep reference docs.
- name: Name of the step
- runconfig: ScriptRunConfig object. You can configure a ScriptRunConfig object as you would for a standalone non-pipeline run and pass it to this parameter. If you use this option, you do not have to specify the `command`, `source_directory`, or `compute_target` parameters of the CommandStep constructor, as they are already defined in your ScriptRunConfig.
- runconfig_pipeline_params: Override runconfig properties at runtime using key-value pairs, each with the name of the runconfig property and the PipelineParameter for that property
- command: The command to run, or the path of the executable/script relative to `source_directory`. It is required unless the `runconfig` parameter is specified. It can be specified either as a single string with string arguments, or as a list containing inputs/outputs/PipelineParameters.
- source_directory: A folder containing the script and other resources used in the step.
- compute_target: Compute target to use
- allow_reuse: Whether the step should reuse previous results when run with the same settings/inputs. If this is false, a new run will always be generated for this step during pipeline execution.
- version: Optional version tag to denote a change in functionality for the step
The best practice is to use a separate folder for each step's script and its dependent files, and to specify that folder as the `source_directory` for the step. This helps reduce the size of the snapshot created for the step (only the specified folder is snapshotted). Since a change to any file in the `source_directory` triggers a re-upload of the snapshot, keeping the folder minimal also helps preserve reuse of the step when nothing in its `source_directory` has changed.
Configure environment
Configure the environment for the train step. In this example we will create an environment from the Dockerfile we have included.
Azure ML currently requires Python as an implicit dependency, so Python must be installed in your image even if your training script does not depend on it.
Configure input training dataset
This tutorial uses data from the US National Highway Traffic Safety Administration. This dataset includes data from over 25,000 car crashes in the US, with variables you can use to predict the likelihood of a fatality. We have included an Rdata file that includes the accidents data for analysis.
Here we use the workspace's default datastore to upload the training data file (accidents.Rd); in practice you can use any datastore you want.
Now create a FileDataset from the data, which will be used as an input to the train step.
Now create a ScriptRunConfig that configures the training run. Note that in the command we include the input dataset for the training data.
For detailed guidance on how to move data in pipelines for input and output data, see the documentation Moving data into and between ML pipelines.
Now create a CommandStep and pass in the ScriptRunConfig object to the runconfig parameter.