How to Use Stored Procedure with Parameter in Copy Activity in Azure Data Factory

 Issue: How to Use Stored Procedure with Parameter in Copy Activity in Azure Data Factory.


In this article, we are going to learn how to use a stored procedure with a parameter in the Copy activity in Azure Data Factory. Let's start our demonstration.

Open Azure Data Factory, go to the Author tab, and click on the + sign to create a new pipeline, then find and drag the Lookup activity onto the canvas.


Go to the settings tab and then click on the + New button to create a new Source dataset.


Then select Azure SQL Database and then click on continue.



Name the dataset, then select the linked service if you have already created one, otherwise create a new linked service. Select None for Import schema and then click OK.



Then write the query that returns the rows we will loop over.
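The exact query depends on your own tables. As a hedged example, assuming a table named dbo.Region with a RegionName column (both names are made up for this demo), the Lookup query could be:

    -- Hypothetical query: return the list of region names the ForEach loop will iterate over
    SELECT DISTINCT RegionName
    FROM dbo.Region;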



Go back to the pipeline, then find and drag a ForEach activity and connect it to the Lookup activity.


Click on the ForEach activity, go to the Settings tab, and add the Lookup output expression in the Items field.


In the dynamic content editor, click on the Lookup activity output, append .value, and then click OK.
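Assuming the Lookup activity kept its default name Lookup1, the finished Items expression would look like this:

    @activity('Lookup1').output.value

Appending .value hands the ForEach activity the array of rows rather than the whole Lookup output object.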

Then click on the pencil sign on the ForEach loop activity to go inside it, where we will add a Copy activity.


Find and bring the copy data activity inside the foreach loop activity.


Click on the copy data activity, then go to the source tab, and then click on the + New button to create a new source dataset.


Select the Azure SQL database and then click on continue.


Name the source dataset, then select the linked service, select none for import schema and then click on ok.


Select the Stored procedure option and then pick your stored procedure from the dropdown.
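The stored procedure itself lives in the source Azure SQL Database. As a minimal sketch, reusing the hypothetical RegionName column from the Lookup query, a procedure with a single parameter might look like this (the procedure and table names are assumptions):

    -- Hypothetical stored procedure with one parameter
    CREATE PROCEDURE dbo.GetCustomersByRegion
        @RegionName NVARCHAR(100)
    AS
    BEGIN
        SELECT *
        FROM dbo.Customer
        WHERE RegionName = @RegionName;
    END;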


Click on Import parameter, and in the Value field add the dynamic content that comes from our ForEach loop activity.


Click on the ForEach iterator, append .RegionName, and then click OK.
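Because the Lookup query returned a RegionName column, the value handed to the stored procedure parameter ends up as the expression:

    @item().RegionName

Here item() refers to the current row of the ForEach iteration.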





Then go to the Sink tab and click on the + New button to create a new sink dataset where we will write the data.




Select Azure SQL database and click on continue.


Name the dataset, then select the linked service, select none for import schema and then click on ok.



Click on the Open button and then select the table to which we have to write the data.


Click on Debug, and the pipeline will read the data and write it to our selected destination.



Video Demo: How to Use Stored Procedure with Parameter in Copy Activity in Azure Data Factory


How to Load Multiple CSV Files to Multiple Tables According to File Name in Azure Data Factory

Issue: How to Load Multiple CSV Files to Multiple Tables According to File Name in Azure Data Factory

In this article we are going to learn how to load multiple .csv files to different tables according to the file name in Azure Data Factory. Each file will be loaded to a separate table according to its file name. If the table already exists, it will be loaded without dropping and recreating it. If a table does not exist, a new table will be created and the data will be loaded into it from the CSV file.

How to create a pipeline:

Go to Azure Data Factory, click on the Author tab, then click on the + button to create a new pipeline.



Then find and drag the Get Metadata activity.


Click on the Get Metadata activity, then go to the Dataset tab, and click on the + New button to create a new dataset.



Find and select Azure Blob Storage and then click on Continue.


Then select the file format which is .CSV and then click on continue.

Name the dataset, then select the linked service if you have already created one, otherwise create a new linked service and select it. Then select the file path from which your files will be read, select None for Import schema, and click OK.




In the Field list, add an argument and select Child items to get the list of files.
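For reference, with the Child items argument the Get Metadata activity returns output shaped roughly like this (the file names are only illustrative):

    {
        "childItems": [
            { "name": "Customer.csv", "type": "File" },
            { "name": "Orders.csv", "type": "File" }
        ]
    }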



Now find and drag the foreach loop activity, then connect it with the Get Metadata activity.


Click on the ForEach loop activity, go to the Settings tab, and in the Items field click on Add dynamic content.



Click on the Get Metadata activity output to add the expression, then after output append .childItems, and then click on Finish.
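Assuming the Get Metadata activity kept its default name Get Metadata1, the Items expression would be:

    @activity('Get Metadata1').output.childItems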


Click on the pencil sign and it will bring you inside the Foreach loop activity.



Inside the Foreach loop activity, find and bring the copy data activity, then go to the source tab and then click on the + New button to create a new dataset.


Select the Azure blob storage and then click on continue.


Then select the CSV file as file format and then click on continue.


Name the dataset, then select the linked service which we have selected earlier, then select file path, then select none for import schema, and then click on ok.




Inside the Source tab, click on the Open button, then create a dataset parameter and reference that parameter in the File name field of the file path.


Now, back on the Source tab, click on the Value field of the dataset parameter to add the dynamic content.
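As a sketch, assuming the dataset parameter was named FileName (a name chosen for this example), the File name field inside the dataset would reference @dataset().FileName, and the value passed from the Copy activity source would be the current file name from the ForEach loop:

    @item().name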


Then go to the sink tab and click on the + New button to create a new sink dataset.


Then select the Azure SQL Database and click on Continue.


Name the sink dataset, then select the linked service if it is already created, or create a new linked service. Select None for Import schema and then click OK.



Click on Open and then create a parameter for the table name, which will come from our ForEach loop.


Now go back to the Sink tab, click on the Value field of the table name parameter, click on Add dynamic content, select the dynamic content, and click on Finish.
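A common way to make the table name follow the file name (assuming the sink dataset parameter is named TableName and the table name field references @dataset().TableName) is to strip the .csv extension from the current file name:

    @replace(item().name, '.csv', '')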



Then select None, as we don't have any stored procedure, and select Auto create table as the table option.



Then go back to the pipeline and Debug.




Video Demo: How to Load Multiple CSV Files to Multiple Tables According to File Name in Azure Data Factory.


How to Use Azure Key Vault Secrets in Azure Data Factory for SQL Connection and Blob Connection- ADF Tutorial 2021

 Issue: How to Use Azure Key Vault Secrets in Azure Data Factory for SQL Connection and Blob Connection.


In this article, we are going to learn how to use Azure Key Vault secrets in Azure Data Factory for a SQL connection and a Blob connection. First of all, we will learn how to create an Azure Key Vault.

How to create an Azure Key Vault:

Open the Azure portal and go to Key vaults (you can find it using the search bar at the top of the Azure portal), then click on the + Create button to create a new key vault.


Select the Azure subscription, then select your resource group, provide a name for the key vault, select your region and pricing tier, then click on Review + create and then click on Create.


Once our key vault is created, go to the storage account, click on Access keys, and copy the connection string.



Now go to our Azure key vault, then click on secrets, and then click on + Generate/Import. 


Select the upload option, provide a name, paste the connection string that we copied from our storage account, and then click on Create.
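For reference, the storage account connection string stored in this secret has roughly the following shape (account name and key below are placeholders):

    DefaultEndpointsProtocol=https;AccountName=<your-storage-account>;AccountKey=<your-access-key>;EndpointSuffix=core.windows.net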


Next, we have to get the connection string for our Azure SQL database, so go to the resource group, then to the server, then to the database, and click on Connection strings.


Copy the ADO.NET connection string, paste it into a Notepad file, replace the password placeholder with your actual password, and then copy the full string again.
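After filling in the password, the secret value looks roughly like this (server, database, user, and password are placeholders):

    Server=tcp:<your-server>.database.windows.net,1433;Initial Catalog=<your-database>;User ID=<your-user>;Password=<your-password>;Encrypt=True;Connection Timeout=30;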


Next, go to our Azure key vault, click on Secrets, and then click on the + Generate/Import button.


Select the upload option, provide a name, paste the connection string that we prepared, and click on Create.


Next, go to the SQL server and then to your SQL database, provide the username and password to connect, and then create a new table.


Then click on Firewalls and virtual networks, enable Allow Azure services and resources to access this server, and click on Save.


Next, open Azure Data Factory Studio, go to the Author tab, click on the + button, and then click on New pipeline.



Find and drag the Copy data activity onto the working window, go to the Source tab, and click on the + New button to create a new source dataset.


Select Azure SQL Database then click on continue.


Provide the name, then click on Azure Key Vault; it will then ask for the Key Vault linked service, so click on + New.


Provide a name, select your Azure subscription, select the Azure key vault, and then click on Create.


Now go to the Azure key vault and add an access policy: select the key permissions, secret permissions, and certificate permissions, select the Azure Data Factory as the principal, and then click Add.


Once we add the access policy in the Azure key vault, go back to our linked service, select the secret name, test the connection, and then click on Create.
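Behind the UI, the resulting Azure SQL linked service references the Key Vault secret instead of an inline connection string. A simplified sketch of the JSON (the linked service and secret names are assumptions) looks like this:

    {
        "name": "AzureSqlDatabase_LS",
        "properties": {
            "type": "AzureSqlDatabase",
            "typeProperties": {
                "connectionString": {
                    "type": "AzureKeyVaultSecret",
                    "store": {
                        "referenceName": "AzureKeyVault_LS",
                        "type": "LinkedServiceReference"
                    },
                    "secretName": "<your-sql-secret-name>"
                }
            }
        }
    }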


Then select the table name and click on ok.


Now our source dataset is complete, and we need to create a sink dataset to which we can copy the data from our source. To create the sink dataset, go to the Sink tab and then click on the + New button.


Select Azure blob storage and click on continue.


Select the CSV file format, then click on Continue.


Then it will ask for the linked service. Click on + New, provide a name, select Azure Key Vault, select the Key Vault linked service which we created before, select the secret name, test the connection, and click on Create.


Once our linked service is created, select the container to which we will copy our data and then click OK.


In the sink dataset, click on Open and provide a name for our output file.


As our pipeline is ready, click on Debug, and your file or files will be copied.




Video Demo: How to Use Azure Key Vault Secrets in Azure Data Factory for SQL Connection and Blob Connection