
Azure Machine Learning Service Visual Interface vs Azure Machine Learning Studio

In this post I’ll compare the Azure Machine Learning Service Visual Interface to Azure Machine Learning Studio.

If you are a fan of Azure Machine Learning Studio this should help you get started with Azure Machine Learning Service Visual Interface and understand some of the differences.

I quite like Azure Machine Learning Studio. It’s a drag-and-drop tool for training machine learning models and deploying them to a web service hosted on Azure. It has limitations, but it allows you to train, tweak, and retrain machine learning models without writing any code. The tool was deprecated a couple of years ago, which in software terms usually means: we aren’t doing any more work on this tool and don’t recommend continuing to use it. I think Machine Learning Workbench was supposed to replace it, but that product did not gain much traction in the market and is also deprecated.

So, I was very interested to see the Visual Interface Preview for Azure Machine Learning Service, which looks very familiar to Azure Machine Learning Studio users.

In this post I’ll talk about the differences in:

  • Home Screen
  • Launching the tool
  • Datasets
  • Data prep and column selection
  • Running your experiment
  • Deploying your model

Home Screen

At first glance the only notable differences are the appearance of the +New button and the lack of Notebooks in the interface. You can create Azure notebooks separately, but the preview does not have integrated notebooks.

Home page Azure Machine Learning Service Visual Interface preview

Azure Machine Learning Service Visual Interface (preview)

Home Page Azure Machine Learning Studio

Azure Machine Learning Studio

Because Azure Machine Learning Service Visual Interface is such a long-winded product name, I will refer to it as the ML Visual Interface from this point forward in my post. I will refer to Azure Machine Learning Studio as ML Studio.

Launching the tool

The first difference ML Studio users will notice is that it’s a little more work to reach the home page pictured above in the ML Visual Interface. With ML Studio you could just go to the ML Studio home page and log in. The Azure back end was abstracted from the user.

To get started with the ML Visual Interface you need an Azure subscription and you need to use the Azure portal. From the portal, you create a Machine Learning service workspace. If you want to try it yourself, you can find instructions in the Quickstart: Prepare and visualize data without writing code in Azure Machine Learning.

Creating Azure Machine Learning Service Workspace

Create an Azure Machine Learning service workspace

Once you have created the workspace you launch the visual interface from the menu blade

Launch Visual Interface from menu

Launch Visual Interface

Datasets

Both tools come with a variety of pre-loaded datasets. Both tools allow you to upload your own datasets. I was mildly disappointed that my favorite flight dataset is not pre-loaded in the ML Visual Interface. I found training a model to predict which flights were late or on time to be a great example, since you don’t need to be a data scientist to understand what data might affect whether or not a flight is likely to be late.

I uploaded the Titanic dataset to try and predict which passengers would survive.

As soon as I dragged the dataset to the work area, I discovered the first ‘quirk’ of the preview. When I try to visualize my dataset, that option is grayed out. I am going to assume this is just one of the joys of working with a product in preview and will be fixed in a later release. But it is quite irritating, because of course I always want to look at my dataset and make sure it uploaded correctly.

Dataset added to work area but Visualize menu item is grayed out

Adding data set to work area
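Since Visualize is grayed out, one workaround is to sanity-check the file locally before uploading it. Here is a minimal sketch with pandas; the inline CSV is an illustrative stand-in for the Titanic file, not the real dataset.

```python
import io

import pandas as pd

# Illustrative stand-in for the Titanic CSV; in practice you would use
# pd.read_csv("titanic.csv") on the file you plan to upload.
csv_data = io.StringIO(
    "Survived,Pclass,Sex,Age\n"
    "0,3,male,22\n"
    "1,1,female,38\n"
    "1,3,female,26\n"
)
df = pd.read_csv(csv_data)

# Roughly the same checks Visualize would give you at a glance:
print(df.shape)          # did all rows and columns load?
print(df.dtypes)         # did numeric columns parse as numbers?
print(df.isna().sum())   # any missing values to clean later?
```

If the shapes, types, and missing-value counts look right locally, an upload problem is much easier to rule out.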

Data prep and column selection

We still have all the great modules like Select Columns in Dataset, Clean Missing Data, and Edit Metadata.

Unfortunately, and I will also chalk this up to a preview quirk that will be fixed in a later release, whatever module you connect first (e.g. Select Columns in Dataset) will not know the names of the columns in your dataset. So make sure YOU know the column names, because you have to type them in, and the interface will not warn you if you enter an invalid column name.

Entering column names manually for Select Columns in Dataset does not catch invalid column names

You can specify invalid column names

Running your experiment

In ML Studio, you were unable to scale up the compute power used to run your experiment even though it was running on cloud resources. This meant it sometimes took a LONG time to run.

One of the great things about the ML Visual Interface is you control the compute power used to run your experiment! The only drawback is you have to create the compute instance before you can run your experiment. Well worth it for the benefits of being able to use more compute power when needed!

The first time you run the experiment you will need to create new compute. I am just using the preconfigured defaults for now, but I could go to the Azure portal and create a new compute target. (You will need to learn to do that at some point anyway, because when you are ready to deploy, it may require you to create compute manually.)

Creating compute can take 5-15 minutes, so go ahead and grab a coffee while you wait.

Create compute to run your experiment in Azure Machine Learning Service Visual Interface

Create compute to run your experiment

The good news is that after the compute is created and the experiment runs, you can use Visualize to see your modified datasets after each task (though you still can’t visualize the original dataset). Also, any tasks you add to your experiment will now validate column names.

Data visualization after running experiment

Visualization of Data

Training and Scoring your model

Just like in ML Studio, you drag and drop the tasks you need to prepare your data, then split, train, score, and evaluate your model, setting the properties in the property window. The user experience for adding modules and setting properties is the same in the ML Visual Interface as in ML Studio.

Finished experiment in Azure Machine Learning Service Visual Interface

Finished Experiment
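For readers curious what those modules do conceptually, the split/train/score/evaluate flow corresponds roughly to this scikit-learn sketch. The synthetic data and choice of classifier are my own illustration, not what the tool runs internally.

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for an uploaded dataset.
X, y = make_classification(n_samples=200, n_features=5, random_state=0)

# Split Data module: hold out 30% of the rows for scoring.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# Train Model module: fit a classifier on the training rows.
model = LogisticRegression().fit(X_train, y_train)

# Score Model + Evaluate Model modules: predict on held-out rows
# and measure how often the predictions were right.
predictions = model.predict(X_test)
print(f"accuracy: {accuracy_score(y_test, predictions):.2f}")
```

The drag-and-drop canvas is essentially wiring these same steps together visually, with the properties window standing in for the function parameters.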

When you re-run your experiment, you do not need to create new compute every time; you can select an existing compute target and re-use the one you created for the first run. The first time I used the tool, my previous compute did not show up in the list of existing compute targets (even when I selected Refresh), but when I closed the browser window and re-opened it, it appeared. Still faster than creating new compute every time I want to run the experiment. Today my compute target was listed as expected without a refresh, so maybe they fixed it (if so, well done, that was quick!), or maybe it depends on your browser, or maybe it’s an intermittent quirk of the preview.

re-use existing compute when rerunning experiment

re-use existing compute

Deploying your model

To deploy, you must first create and run a predictive experiment, just like in ML Studio.

One of the other BIG differences in ML Visual Interface is the ability to deploy the model somewhere other than an Azure web service.  You have control. If you know your way around managing and deploying models, the world is your oyster. I have not yet had a chance to try it myself (if you try it before I do, please share your experience). Check out Deploy models with the Azure Machine Learning Service for more instructions and details.

If you just want to deploy it to a web service the way we did in ML Studio, that still works. Possible kudos to the product team for quick work, because when I tried this last week it required me to create a compute target with 12 nodes to deploy. Today I was able to re-use the existing compute I had used to run my experiment, which had only 6 nodes.

Requiring 12 nodes was a hassle because my Pay-As-You-Go Azure subscription had a quota limit of 10 nodes. I had to submit a request to increase my CPU quota. To the credit of the Azure support team (no pun intended), the request was processed almost immediately. I was able to confirm my new quota with the following Azure CLI command:

az vm list-usage --location "EastUS"

Quota limit

Quota limit
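If you prefer to check your headroom programmatically, adding `-o json` to that command returns one entry per quota counter with `currentValue` and `limit` fields. Here is a small sketch that flags counters where a 12-node deployment would not fit; the sample data below is my own illustration, not real command output.

```python
import json

# Illustrative sample of `az vm list-usage --location "EastUS" -o json`
# output; the real command returns one entry per quota counter.
sample = json.loads("""[
  {"currentValue": 6, "limit": 10,
   "name": {"value": "cores", "localizedValue": "Total Regional vCPUs"}},
  {"currentValue": 0, "limit": 10,
   "name": {"value": "standardDSv2Family",
            "localizedValue": "Standard DSv2 Family vCPUs"}}
]""")

# Flag any counter without enough free capacity for the deployment.
needed = 12
for usage in sample:
    headroom = usage["limit"] - usage["currentValue"]
    if headroom < needed:
        print(f'{usage["name"]["localizedValue"]}: only {headroom} free')
```

Both the regional total and the per-VM-family counter have to have room, which is why requesting the increase for the wrong family (as I did below) doesn’t help.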

Just for extra fun, the VM family I requested a quota increase for was not one of the valid VM families supported by the tool, so I had to look up which VM family contained the VMs listed as supported in the error message in order to request the correct quota increase.

I did not have to do ANY of that stuff with quotas today; I just clicked Deploy web service and re-used the existing compute I created to run my experiment. I am leaving those links and information in the blog post, just in case anyone else runs into it, and also so I can remember how to do it if I ever run into that issue again with another service. Blogs make a handy place to keep notes for yourself.

Testing your trained model

If you deploy your trained model as a web service, you can test it and find sample code to copy and paste just like you did with ML Studio.

Select Web Services | Test

Type in your values and see the result.

Testing the trained model in Azure Machine Learning Service Visual Interface

Testing the trained model
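If you want to call the deployed service from code instead of the Test page, the request is a plain JSON POST. The sketch below builds a payload in the classic ML Studio request shape; the endpoint URL, API key, and column values are placeholders, and the Visual Interface preview’s exact schema may differ, so copy the sample code from your own web service page rather than trusting this shape.

```python
import json

# Placeholders: copy the real values from your web service page.
scoring_url = "https://<your-service>/score"
api_key = "<your-api-key>"

# Classic ML Studio-style request body (an assumption here; check the
# sample code shown on your web service page for the exact schema).
payload = {
    "Inputs": {
        "input1": {
            "ColumnNames": ["Pclass", "Sex", "Age"],
            "Values": [["3", "male", "22"]],
        }
    },
    "GlobalParameters": {},
}

body = json.dumps(payload)
headers = {
    "Content-Type": "application/json",
    "Authorization": f"Bearer {api_key}",
}

# Uncomment to send the request for real:
# import urllib.request
# req = urllib.request.Request(scoring_url, body.encode(), headers)
# print(urllib.request.urlopen(req).read())
```

This is the same call the Test page makes on your behalf, which is also why the security warning below applies equally to programmatic callers.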

This is not a secure way to deploy your model, but it’s great for proof of concept, and testing.

You will see a warning with links to instructions on secure deployment when you open the web service.

Instructions are provided for secure deployment on web service page

Secure deployment

Summary

Fans of Azure Machine Learning Studio are likely to become bigger fans of Azure Machine Learning Service Visual Interface.  Two of the biggest complaints about ML Studio were the inability to scale compute and the inability to deploy models outside of Azure web services. Both of these concerns are addressed with Azure Machine Learning Service Visual Interface.