AutoML Forecasting: GitHub Daily Active Users
Copyright (c) Microsoft Corporation. All rights reserved.
Licensed under the MIT License.
**Important!**
This notebook is outdated and is not supported by the AutoML Team. Please use the supported version (link).
Introduction
This notebook demonstrates demand forecasting for the GitHub Daily Active Users dataset using AutoML.
AutoML highlights here include deep learning forecasts, ARIMA, Prophet, remote execution and remote inferencing, and working with the forecast function. Please also look at the additional forecasting notebooks, which document lagging, rolling windows, forecast quantiles, other ways to use the forecast function, and forecaster deployment.
Make sure you have executed the configuration before running this notebook.
Notebook synopsis:
- Creating an Experiment in an existing Workspace
- Configuration and remote run of AutoML for a time-series model exploring DNNs
- Evaluating the fitted model using a rolling test
Setup
This notebook is compatible with Azure ML SDK version 1.35.0 or later.
As part of the setup you have already created a Workspace. To run AutoML, you also need to create an Experiment. An Experiment corresponds to a prediction problem you are trying to solve, while a Run corresponds to a specific approach to the problem.
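A setup sketch under those assumptions (the standard SDK v1 pattern with a `config.json` written during workspace configuration; the experiment name is illustrative, and running this requires a live Azure subscription):

```python
from azureml.core import Experiment, Workspace

# Load the existing Workspace from config.json (created during setup).
ws = Workspace.from_config()

# An Experiment groups all runs for this prediction problem.
experiment = Experiment(ws, name="github-dau-forecast-dnn")
```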
Using AmlCompute
You will need to create a compute target for your AutoML run. In this tutorial, you use AmlCompute as your training compute resource.
Note that if you have an AzureML Data Scientist role, you will not have permission to create compute resources. Talk to your workspace or IT admin to create the compute targets described in this section, if they do not already exist.
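A provisioning sketch under those assumptions (the cluster name and VM size are illustrative, and this requires permission to create compute in the workspace):

```python
from azureml.core.compute import AmlCompute, ComputeTarget
from azureml.core.compute_target import ComputeTargetException

cluster_name = "dnn-cluster"  # hypothetical name

try:
    # Reuse the cluster if it already exists in the workspace.
    compute_target = ComputeTarget(workspace=ws, name=cluster_name)
except ComputeTargetException:
    # Otherwise provision a small autoscaling cluster.
    config = AmlCompute.provisioning_configuration(
        vm_size="STANDARD_DS12_V2", max_nodes=4
    )
    compute_target = ComputeTarget.create(ws, cluster_name, config)
    compute_target.wait_for_completion(show_output=True)
```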
Data
Read Github DAU data from file, and preview data.
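A loading sketch with a hypothetical inline sample standing in for the GitHub DAU CSV (the real file's path and exact schema are not shown here; a date column plus a daily-active-user count column are assumed):

```python
from io import StringIO

import pandas as pd

# Inline stand-in for the GitHub DAU file; assumed columns "date" and "count".
sample = StringIO(
    "date,count\n"
    "2017-01-01,1000\n"
    "2017-01-02,1100\n"
    "2017-01-03,1050\n"
)

# Parse the time column as datetimes so it can serve as the time axis.
df = pd.read_csv(sample, parse_dates=["date"])
print(df.head())
```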
Let's set up what we know about the dataset.
- **Target column** is what we want to forecast.
- **Time column** is the time axis along which to predict.
- **Time series identifier columns**: individual series are identified by the values of the columns listed in `time_series_id_column_names`; for example, "store" and "item" if your data contains multiple time series of sales, one series for each combination of store and item sold.
- **Forecast frequency (`freq`)**: this optional parameter represents the period at which the forecast is desired, for example daily, weekly, or yearly. Use it to correct time series containing irregular data points or to pad short time series. The frequency must be a pandas offset alias; please refer to the pandas documentation for more information.
This dataset has only one time series. Please see the orange juice notebook for an example of a multi-time series dataset.
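A settings sketch matching the description above (the column names are assumptions, since the file schema is not shown in this excerpt; pandas itself can validate the offset alias):

```python
import pandas as pd

# Dataset settings; column names are assumed for illustration.
target_column_name = "count"  # the quantity we want to forecast
time_column_name = "date"     # the time axis along which to predict
freq = "D"                    # daily observations; must be a pandas offset alias

# to_offset raises ValueError for a malformed alias, so this doubles as a check.
offset = pd.tseries.frequencies.to_offset(freq)
print(offset)
```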
Split Training data into Train and Validation set and Upload to Datastores
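A minimal time-based split sketch (synthetic data and column names are illustrative; in the notebook, the resulting frames are then uploaded to the workspace datastore and registered as datasets):

```python
import numpy as np
import pandas as pd

# Synthetic daily series standing in for the GitHub DAU data.
dates = pd.date_range("2017-01-01", periods=100, freq="D")
df = pd.DataFrame({"date": dates, "count": np.arange(100)})

# Hold out the last 14 days for validation; everything earlier is training.
split_date = df["date"].max() - pd.Timedelta(days=14)
train = df[df["date"] <= split_date]
valid = df[df["date"] > split_date]
```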
Setting forecaster maximum horizon
The forecast horizon is the number of periods into the future that the model should predict. Here, we set the horizon to 14 periods (i.e. 14 days). Notice that this is much shorter than the length of the test set; we will need to use a rolling test to evaluate the performance on the whole test set. For more discussion of forecast horizons and guiding principles for setting them, please see the energy demand notebook.
Train
Instantiate an AutoMLConfig object. This defines the settings and data used to run the experiment.
| Property | Description |
|---|---|
| task | forecasting |
| primary_metric | This is the metric that you want to optimize. Forecasting supports the following primary metrics: spearman_correlation, normalized_root_mean_squared_error, r2_score, normalized_mean_absolute_error |
| iteration_timeout_minutes | Time limit in minutes for each iteration. |
| training_data | Input dataset, containing both features and label column. |
| label_column_name | The name of the label column. |
| enable_dnn | Enable Forecasting DNNs |
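Putting the table together, a configuration sketch (the dataset, column names, and compute variables come from the earlier steps and are assumptions here; the forecasting settings use the SDK v1 keyword style and the values are illustrative):

```python
from azureml.train.automl import AutoMLConfig

# Forecasting-specific settings: the time axis and the 14-day horizon chosen above.
automl_settings = {
    "time_column_name": "date",  # assumed column name
    "max_horizon": 14,
}

automl_config = AutoMLConfig(
    task="forecasting",
    primary_metric="normalized_root_mean_squared_error",
    iteration_timeout_minutes=30,
    iterations=10,
    training_data=train_dataset,  # registered training dataset from the split step
    label_column_name="count",    # assumed target column
    enable_dnn=True,
    compute_target=compute_target,
    **automl_settings,
)
```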
We will now run the experiment, starting with 10 iterations of model search. The experiment can be continued for more iterations if more accurate results are required. Validation errors and current status will be shown when setting show_output=True and the execution will be synchronous.
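A submission sketch under those assumptions (`experiment` and `automl_config` come from the earlier steps; this needs a live workspace, so it is illustrative only):

```python
# Submit the AutoML experiment to the remote compute; show_output=True streams
# iteration results and makes the call synchronous.
remote_run = experiment.submit(automl_config, show_output=True)

# Block until the run completes.
remote_run.wait_for_completion()
```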
Displaying the run objects gives you links to the visual tools in the Azure Portal. Go try them!
Retrieve the Best Model for Each Algorithm
Below we select the best pipeline from our iterations. The get_output method on the remote AutoML run returns the best run and the fitted model for the last fit invocation. There are overloads on get_output that allow you to retrieve the best run and fitted model for any logged metric or for a particular iteration.
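A retrieval sketch, assuming `remote_run` is the submitted AutoML run (the overload arguments shown follow the SDK v1 `get_output` signature; the iteration number is illustrative):

```python
# Best run overall (by primary metric), with its fitted pipeline.
best_run, fitted_model = remote_run.get_output()

# Overloads: a specific iteration, or the best run for another logged metric.
run_3, model_3 = remote_run.get_output(iteration=3)
run_r2, model_r2 = remote_run.get_output(metric="r2_score")
```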
Evaluate on Test Data
We now use the best fitted model from the AutoML Run to make forecasts for the test set.
We always score on the original dataset whose schema matches the training set schema.
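The rolling test mentioned above can be sketched independently of the SDK: the forecast origin advances through the test set in horizon-sized steps, and the model forecasts each window in turn. A minimal, purely illustrative index-window helper:

```python
def rolling_splits(n_test, horizon):
    """Yield (start, end) index windows covering the test set in
    horizon-sized chunks, as a rolling-origin evaluation would."""
    for start in range(0, n_test, horizon):
        yield start, min(start + horizon, n_test)

# A 100-day test set with a 14-day horizon needs 8 forecast windows.
windows = list(rolling_splits(100, 14))
```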