
How to Leverage Microsoft Fabric for Data Science



Guest bethanyjep
Posted


As a Data Scientist, your role is to extract insights from data and train machine learning models for prediction or classification scenarios. Over time, a variety of tools have emerged that you can use to clean your data and to train and deploy your models. In this blog post, I will guide you through getting started as a Data Scientist with Microsoft Fabric.

 

Why Microsoft Fabric?

 

Your first question might be: why Microsoft Fabric? Microsoft Fabric is a unified platform that not only lets you run your data science workloads but also lets you interact with other data specialists. It is a comprehensive solution that brings data, notebooks, and models together in one place, making collaboration easier and providing a seamless, efficient experience.


To try out Microsoft Fabric for free, head over to Power BI.


Milestone 1: Load the diabetes dataset from Azure Open Datasets

 

Azure Open Datasets provides a wide range of datasets you can explore; in this post we will focus on the Diabetes dataset. The dataset contains 442 patient samples, each with 10 features used to determine a quantitative measure of disease progression. Examples of the features include age in years, sex, body mass index (BMI), blood pressure (bp), and six serum measurements.

 

Since we can load our dataset directly into our notebook, we do not need to create a Lakehouse. First, we need to create a new Notebook: head over to Microsoft Fabric and, on the bottom left, click the Fabric logo. A sidebar will pop up; select Data Science. Lastly, click on Notebook and create a new Notebook.


In our newly created Notebook, we will load our dataset using PySpark, following the sample code provided with Azure Open Datasets. The code reads the data from Azure Blob Storage as a parquet file and then displays the first ten rows of the dataset, as follows:
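A minimal sketch of that loading cell, assuming the public azureopendatastorage account and the mlsamples/diabetes path used by the Azure Open Datasets samples (the `spark` session and `display()` helper are provided by the Fabric notebook):

```python
# Storage details follow the public Azure Open Datasets sample and may change
blob_account_name = "azureopendatastorage"
blob_container_name = "mlsamples"
blob_relative_path = "diabetes"
blob_sas_token = r""  # the dataset is public, so no SAS token is needed

# Build the wasbs path and allow Spark to read from the remote blob store
wasbs_path = f"wasbs://{blob_container_name}@{blob_account_name}.blob.core.windows.net/{blob_relative_path}"
spark.conf.set(
    f"fs.azure.sas.{blob_container_name}.{blob_account_name}.blob.core.windows.net",
    blob_sas_token,
)

# Read the parquet file into a Spark DataFrame and preview the first ten rows
df = spark.read.parquet(wasbs_path)
display(df.limit(10))
```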


Milestone 2: Clean and Transform your data in the Notebook

 

Once our dataset is loaded, the next step is to clean and transform our data. This is essential not only for removing outliers and missing values but also for improving our model's accuracy. First, we will convert our dataset to a pandas dataframe to make it easier to analyze, using the code below:
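A short sketch of the conversion step, assuming the Spark DataFrame from the previous cell is named `df`:

```python
# Convert the Spark DataFrame to a pandas DataFrame for easier analysis
df = df.toPandas()

# Inspect column types and non-null counts to spot missing values
df.info()
```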


Running df.info() confirms that our data does not contain any missing values.

 


To further understand our data, we can create a visual correlation matrix using seaborn. A correlation matrix shows the strength of the relationship between pairs of columns. Other than AGE, S1, and S2, the features have a high correlation with our target variable, as the matrix produced below shows:
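A sketch of the correlation matrix plot, assuming the pandas DataFrame `df` with the dataset's column names (AGE, SEX, BMI, BP, S1–S6, Y):

```python
import matplotlib.pyplot as plt
import seaborn as sns

# Compute pairwise correlations and plot them as an annotated heatmap
plt.figure(figsize=(10, 8))
sns.heatmap(df.corr(), annot=True, cmap="coolwarm", fmt=".2f")
plt.title("Correlation matrix of the diabetes dataset")
plt.show()
```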


Additionally, you can perform further analysis and transformation tasks, such as checking for class imbalance using value_counts(), or checking for outliers with box plots or scatter plots, for example:
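A quick sketch of such checks on the SEX and BMI columns (column names assumed, reusing the seaborn and matplotlib imports from the previous step):

```python
# Check whether the SEX column is balanced across its categories
print(df["SEX"].value_counts())

# Look for outliers in BMI with a box plot
sns.boxplot(x=df["BMI"])
plt.show()
```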

Milestone 3: Train your model using mlflow and Linear Regression

 

The next step is training our model. First, we define our features and target, then split our data into train and test sets; we do this so we can evaluate the performance of our model on new, unseen data. To do this, you can use code such as the following:
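A sketch of this step, assuming the target column is named Y and using an 80/20 split:

```python
from sklearn.model_selection import train_test_split

# Separate the features from the target column (Y, the disease progression measure)
X = df.drop("Y", axis=1)
y = df["Y"]

# Hold out 20% of the samples as an unseen test set
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42
)
```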


In this demo, we will use mlflow to track our experiments and save our models. MLflow is an open-source platform for managing the machine learning lifecycle, and one of the tools it provides is experiments: sets of runs you can use to compare different models. Our model will be trained on the diabetes data, and it is this model that we will use to make predictions, evaluate, and deploy.

Before we train our models, we first need to create an experiment using mlflow. This can be achieved with code like the one below:
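A minimal sketch, where the experiment name diabetes-regression is illustrative:

```python
import mlflow

# Create (or reuse) an experiment to group the training runs in this workspace
mlflow.set_experiment("diabetes-regression")
```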


Next, we use mlflow to train a linear regression model and log its metrics, as shown below:
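A sketch of the training run, continuing from the earlier cells; the registered model name diabetes-model is illustrative:

```python
import mlflow
import mlflow.sklearn
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error, r2_score

with mlflow.start_run() as run:
    # Train a linear regression model on the training split
    model = LinearRegression()
    model.fit(X_train, y_train)

    # Evaluate on the held-out test data
    predictions = model.predict(X_test)
    mse = mean_squared_error(y_test, predictions)
    r2 = r2_score(y_test, predictions)

    # Log metrics and the fitted model to the experiment
    mlflow.log_metrics({"mse": mse, "r2": r2})
    mlflow.sklearn.log_model(model, "model")

    # Register the model under a name so it can be deployed later
    model_uri = f"runs:/{run.info.run_id}/model"
    mlflow.register_model(model_uri, "diabetes-model")
    print(f"Training run complete. Model URI: {model_uri}")
```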


The above code trains a linear regression model on our dataset and logs the model, metrics, and parameters to mlflow. First, it creates a new instance of the `LinearRegression` model and fits it on the training data. Then it makes predictions on the test data and calculates the mean squared error and R-squared metrics. These metrics are logged using `mlflow.log_metrics()`, and the trained model is saved as an artifact using `mlflow.sklearn.log_model()`.

 

 

Finally, the model is registered under a name using `mlflow.register_model()`. The output prints the model URI along with a message indicating that the training run is complete.

 

Milestone 4: Evaluate your model

 

It is essential to evaluate our model to assess its effectiveness. To evaluate its performance, we use two metrics: mean squared error (MSE) and the R-squared (R2) score. The MSE measures the average squared difference between the predicted and actual values, while the R2 score measures the proportion of variance in the target variable that is explained by the model. As we have logged the metrics to Microsoft Fabric using mlflow, we can use the UI to navigate to them.

 

To get our model scores, we first head back to our workspace and then select Filter. Using the filter, select Experiment and Model. We will explore the experiment first and then the model.


Under our experiment, we can see that our model has been logged along with its metrics: the MSE is 2908.55 and the R2 is 0.45. Additionally, we have our model files ready to be deployed and consumed.

 


Conclusion

 

In conclusion, Microsoft Fabric is a powerful platform that unifies the data science process and allows for easy collaboration with other data specialists. By following the milestones outlined in this blog post, you can kickstart your data science journey and leverage the platform's capabilities to clean your data, train and deploy models.

 

Curious to learn more about Data Science in Microsoft Fabric? You can utilize the resources below:

 

Continue reading...
