How to Use Switch Activity in Azure Data Factory - Azure Data Factory Tutorial 2021

 Issue: How to Use Switch Activity in Azure Data Factory.


In this article, we are going to learn how to use the Switch activity in Azure Data Factory (Azure Data Factory Tutorial 2021). The Switch activity provides the same functionality that a switch statement provides in a programming language: it evaluates an expression and then runs the set of activities corresponding to the case that matches the result of that evaluation.

Let's start our demonstration. First of all, open Azure Data Factory Studio, click on Pipelines, and then click on New pipeline. Search for the Switch activity in the Activities bar and drag it into the working window. Click on the Switch activity, go to the General tab, and name the Switch activity whatever you like. Then click on the Activities tab, where we have to write the expression: click on Add dynamic content, then click on the pfile parameter, which adds the parameter reference to the expression window above, and then click OK, as shown in the picture below.

Fig-1: Add expressions to the switch activity.
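
For reference, the expression the Switch activity evaluates is simply a reference to the pipeline parameter; assuming the parameter created for this demo is named pfile, the dynamic content looks like this:

@pipeline().parameters.pfile

At run time, the Switch activity compares the value of this parameter against each case value.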


Now, in the Activities tab, we are going to start adding the case statements. Click on + Add Case and give the case a name; in my case, it is ''Customer''. Then go to the working window, open the Customer case, bring a Wait activity inside it, and name this activity too; in my case, it is ''Customer Wait Activity''. Then go back to the Activities tab and add another case and give it a name; in my case, it is ''Product''. Click on this case and bring a Wait activity inside the Product case as well, and name it; in my case, it is ''Product Wait Activity''. Finally, open the default case, bring a Wait activity inside it, and name it; in my case, it is ''On Default Delete file'', as shown in the picture below.


Fig-2: Add case statements in the switch activity.
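
Behind the designer, the Switch activity is stored as JSON in the pipeline definition. A rough sketch of what the activity configured above might look like is shown below (the activity names and case values follow this demo; the Wait durations are placeholders):

{
    "name": "Switch1",
    "type": "Switch",
    "typeProperties": {
        "on": { "value": "@pipeline().parameters.pfile", "type": "Expression" },
        "cases": [
            { "value": "Customer", "activities": [ { "name": "Customer Wait Activity", "type": "Wait", "typeProperties": { "waitTimeInSeconds": 1 } } ] },
            { "value": "Product", "activities": [ { "name": "Product Wait Activity", "type": "Wait", "typeProperties": { "waitTimeInSeconds": 1 } } ] }
        ],
        "defaultActivities": [ { "name": "On Default Delete file", "type": "Wait", "typeProperties": { "waitTimeInSeconds": 1 } } ]
    }
}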

Once we are done with our Switch activity, go to the top and click on Debug. Once you click on Debug, it will ask for the parameter value, which is the file you want to debug with; in my case, I want to debug the Customer file. Click OK and it will run the debug process. Once the debug is completed, you can see from the status which branch the activity has taken, as shown in the picture below.


Fig-3: Debug results of the wait activity

As you can see in the above picture, the Switch activity ran the ''Customer Wait Activity'' branch, because that is the case matching the value we provided for the parameter.


Video Demo: How to Use Switch Activity in Azure Data Factory

How to use Private Endpoint for Azure SQL Database from Azure Data Factory - ADF Tutorial 2021

 Issue: How to use Private Endpoint for Azure SQL Database from Azure Data Factory - ADF Tutorial 2021.


In this article, we are going to learn how to create a private endpoint for an Azure SQL database from Azure Data Factory. A private endpoint is a network interface that uses a private IP address from your virtual network. This network interface connects you privately and securely to a service powered by Azure Private Link. By enabling a private endpoint, you're bringing the service into your virtual network.

Let's start our demonstration. Open your Azure portal and then go to Azure Data Factory Studio. In ADF, go to Connections, click on Linked services, and then click on + New; this opens a window on the right side of the dashboard. Select Azure SQL Database and click Continue. On the next page, first provide the linked service name, then click on Connect via integration runtime: as we have to create a new integration runtime, click on + New, select Azure, and click Continue, then enable Virtual network configuration (preview) and click Create. Next, select the server name, select the database name, then click on Manage Private Endpoint and click on Create New. Give the endpoint a specific name, and for the account selection method select From Azure subscription, then select your server name and click Create. Once you do that, the private endpoint status will be shown as Provisioning. To approve this request, go to the Azure portal, open your SQL database server, and click on Private endpoint connections; in the connections you will see a request for accessing the database, so select that request and click Approve. Once you approve the request, go back to your linked service page and you will see that the private endpoint connection status is Approved. Now select the authentication type, username, and password, test the connection, and click Create, as shown in the picture below.


Fig-1: Create a new Linked service for Private Endpoint Connection.

Once our linked service is created, let's go to the Azure portal, open the Azure SQL server, select Firewalls and virtual networks, and set Deny public network access to ''Yes''.


Fig-2: Firewalls & Virtual Network settings.


Once you finish the Firewalls and virtual networks settings, your private endpoint setup is complete and you can connect to your Azure SQL database through the private endpoint connection.


Video Demo: How to use Private Endpoint for Azure SQL Database from Azure Data Factory



How to Manage the Size of the SSISDB (Catalog Database) SSIS Tutorial 2021

 Issue: How to Manage the Size of the SSISDB (Catalog Database) SSIS Tutorial.

In this article, we are going to learn how to manage the size of the SSISDB (catalog database). As you know, nowadays we deploy our packages to SSISDB rather than to the file system, so we have to take care of this database: when SSISDB logging is enabled, a large amount of log information is generated on each and every execution, which is why the database can grow huge. To keep the size under control, we follow a few steps. First of all, we check the status of some of the catalog properties with the query shown in the picture below.


Fig-1: Query to see the status of some properties.
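
If you prefer to copy the query rather than type it from the screenshot, a minimal sketch that returns the same information is shown below; it simply reads the built-in catalog.catalog_properties view in SSISDB:

SELECT property_name, property_value
FROM SSISDB.catalog.catalog_properties
WHERE property_name IN ('OPERATION_CLEANUP_ENABLED', 'RETENTION_WINDOW', 'VERSION_CLEANUP_ENABLED');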

As you can see in the above picture, the query shows that OPERATION_CLEANUP_ENABLED is set to True, RETENTION_WINDOW is set to 365, and VERSION_CLEANUP_ENABLED is set to True, which means the cleanup (maintenance) job is already enabled; now let's configure it. We can do this by using a query and also by using the graphical user interface, and I will explain both. First, using the graphical user interface: right-click on SSISDB under Integration Services Catalogs and go to Properties. Inside the properties, set Clean Logs Periodically under the Operations Log section to ''True'', then set the Retention Period to ''90'', which means we want to keep the log data for only 90 days (or choose a value as per your requirement). Then set Maximum Number of Versions per Project to ''5'', set Periodically Remove Old Versions under the Project Versions section to ''True'', and click OK, as shown in the picture below.


Fig-2: Maintenance job for SQL Server Integration Services Database.

The second method of configuring the maintenance settings is by query; you can set the same properties by running the query below against SSISDB.

EXEC catalog.configure_catalog VERSION_CLEANUP_ENABLED, TRUE

EXEC catalog.configure_catalog OPERATION_CLEANUP_ENABLED, TRUE

EXEC catalog.configure_catalog RETENTION_WINDOW, 90




Video Demo: How to Manage the Size of the SSISDB (Catalog Database) SSIS Tutorial 2021 



How to Prevent Creation of Empty Files through Azure Data Factory ADF Tutorial 2021

Issue: How to prevent the creation of empty files through Azure Data Factory.


In this article, we are going to learn how to prevent the creation of empty files through Azure Data Factory. Let's start our demonstration. Go to Azure Data Factory, click on Author, then click on Pipelines and select New pipeline. Name the pipeline and drag the Copy data activity from the Activities tab. Click on the Copy data activity and click on Source; here we have to create a new source dataset. To create the new source dataset, click on the + New button, select Azure SQL Database as the connection, and click Continue. This moves to another window where we have to create a new linked service, so click on the + New button, name the linked service, select the Azure subscription, authentication type, username and password, test the connection, and then click Create, as shown in the picture below.


Fig-1: Linked service for Source dataset.

Once our linked service is created, select the table from which we need to read the data, select None for the import schema, and then click OK.

Now that we are done with our source dataset, let's click on the Sink tab and create a sink dataset where we will write the data. Click on the + New button, select Azure Blob Storage, and click Continue, then select DelimitedText, which is the CSV format, and click Continue. The next window will ask for the linked service, so we have to create a new linked service that connects to the output folder where we want to write the data: click on the + New button, name the linked service, select the Azure subscription and storage account name, test the connection, and click Create, as shown in the picture below.


Fig-2: Linked service for Sink dataset.

Once our linked service is created, select the input folder as the sink folder where our data will be written, select First row as header, select None for the import schema, and click OK.

Once our sink dataset is created, just click on Debug. Once the debug process is completed, go to your input folder and your desired file will be there. Now go back to SSMS, delete the records from our table, and click Debug again; this time it will overwrite the file and make it empty. In this scenario, we have to add logic with a condition: if there is no data in the table, we should be notified and no file should be created.

Now go to Azure Data Factory, search for the Lookup activity in the Activities tab, and drag it in. Click on the Lookup activity, go to the Settings tab, and create a new source dataset: select Azure SQL Database as the connection, click Continue, and when it asks for the linked service, select the linked service we have already created, select the table the data is coming from, select None as the import schema, and click OK. Then check the ''First row only'' box. Next, go to the Activities tab, drag the If Condition activity, and place it between the Lookup activity and the Copy data activity. Click on the If Condition activity, go to Activities, and click on Add dynamic content; then select the contains function, and inside the contains function reference the Lookup activity's output and the first row, as shown in the picture below.


Fig-3: Create dynamic content in the if condition activity.
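
As a reference, a minimal sketch of the dynamic content described above, assuming the Lookup activity is named Lookup1 (adjust the name to match your pipeline): when ''First row only'' is checked and the query returns no rows, the Lookup output contains no firstRow property, so this expression evaluates to false and the True branch (the Copy data activity) is skipped.

@contains(activity('Lookup1').output, 'firstRow')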

Once our If Condition activity is configured, right-click on the Copy data activity and select Cut, then click on the True case of the If Condition activity, open it, and paste the Copy data activity there. Then go back to the main window and click on Debug. Once the debug process is completed, let's take a look at the output: as shown in the picture below, no file was created and no data was copied.



Fig-4: Debug completion report

Now let's go to SQL Server Management Studio and insert some records into our table. Once we have inserted the records, go back to Azure Data Factory and click on ''debug''. This time, once the debug process is completed, it will read the data from the source and write it to the input folder, because our table has records, as you can see in the picture below.


Fig-5: Debug completion report.


 

Video Demo: How to Prevent Creation of Empty File through Azure Data Factory


What are Linked Services and DataSets in Azure Data Factory - ADF Tutorial 2021

 Issue: What are Linked Services and DataSets in Azure Data Factory - ADF Tutorial 2021

In this article, we are going to learn what Linked Services and Datasets are in Azure Data Factory. Linked services are much like connection strings: they define the connection information needed for the service to connect to external resources. Datasets represent the structure of the data within the linked data stores, while the linked service defines the connection to the data source.

Let's start our demonstration. Open your Azure portal; first of all, we need to create an Azure Data Factory. In the Azure portal, use the services search at the top to go to Data factories. Once there, click on Create data factory, then select the subscription, resource group, region, name, and version, click on Review + Create, and then click Create, as shown in the picture below.


Fig-1: Create a new Azure Data Factory.



Once the data factory is created, open Azure Data Factory Studio. Once ADF Studio is open, let's create a linked service. We can create a linked service while building a pipeline, and we can also create one separately in the Manage tab; let's create ours from the Manage tab. Go to the Manage tab, and under Connections click on Linked services, then click on + New. Select Azure SQL Database as our connection and click Continue; this moves to the New linked service page. Give it a name you can identify your linked service by (in my case, ''lnkazuresqltechbrothersdb''), then select the Azure subscription, select the database, choose the authentication type, enter the username and password, test the connection, and click Create, as shown in the picture below.

Fig-2: Create a new Linked service.
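
For context, every linked service created in the UI is stored as a small JSON definition. A rough sketch of what an Azure SQL Database linked service like the one above might look like is shown below; the server, database, and credential values are placeholders, and in practice the connection string is stored encrypted or the password is kept in Azure Key Vault:

{
    "name": "lnkazuresqltechbrothersdb",
    "properties": {
        "type": "AzureSqlDatabase",
        "typeProperties": {
            "connectionString": "Server=tcp:<yourserver>.database.windows.net,1433;Database=<yourdatabase>;User ID=<username>;Password=<password>"
        }
    }
}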

Once we have created the above linked service, let's create a new database and then a new linked service pointing to it. Go to SQL Server Management Studio and create a new database; in my case, I have created a database named ''Sales'', along with a table. Then go to Azure Data Factory Studio and create another linked service: click on + New, select Azure SQL Database, and click Continue, then name the linked service (in my case, it is ''lnksalesdb''), select the Azure subscription, choose the authentication type, enter the username and password, test the connection, and click Create, as shown in the picture below.


Fig-3: Create a new linked service.


Now we have two linked services, ''lnkazuresqltechbrothersdb'' and ''lnksalesdb'', so we can read the data from one and write to the other. Next, go to your blob storage in the Azure portal, click on the Containers tab under Data storage, and create two new containers, one for input and one for output. Once the containers are created, go back to Azure Data Factory Studio and create a new linked service to connect to techbrotheritstorage, where our containers are located: click on + New, select Azure Blob Storage as the connection, and click Continue, then name the linked service (in my case, it is ''lnktechbrotherstorage''), select the Azure subscription and the storage account, test the connection, and click Create, as shown in the picture below.



Fig-4: Create a new linked service.

Now we have three linked services, and we need to create a pipeline and use them in our datasets. Go to Author in Azure Data Factory Studio, click on Pipelines, then click on New pipeline. Drag the Copy data activity into the working window, click on it, and then click on Source. When you click on Source, it asks for the source dataset, which defines where we read the data from. As we don't have any dataset yet, click the + New button to create one. It will ask for the data source: select Azure SQL Database, name the dataset, select the linked service we created before, then select the table we created before, choose to import the schema, and click OK.

As we have successfully created the dataset for reading the data, now click on Sink, where we have to create a new dataset for writing the data. Click on the + New button, select Azure SQL Database as the connection, and click Continue, then name the dataset, select the linked service we created before, choose the table name, and click OK, as shown in the picture below.


Fig-5: Create a new dataset.
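
Like linked services, datasets are stored as JSON definitions that reference a linked service. A rough sketch of what a dataset pointing at the Sales table through the ''lnksalesdb'' linked service could look like is shown below; the dataset name, schema, and table are illustrative placeholders:

{
    "name": "SalesTableDataset",
    "properties": {
        "type": "AzureSqlTable",
        "linkedServiceName": {
            "referenceName": "lnksalesdb",
            "type": "LinkedServiceReference"
        },
        "typeProperties": {
            "schema": "dbo",
            "table": "Sales"
        }
    }
}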


Once we have successfully created our datasets, go back to the main window and click on ''debug''. Once the debug process is completed, it will show the results, and you can then check the destination and find the copied data there.

Video Demo: What are Linked Services and DataSets in Azure Data Factory





How to Load JSON File to Azure SQL Database Table in Azure Data Factory ADF Tutorial 2021

 Issue: How to Load JSON File to Azure SQL Database Table in Azure Data Factory.



In this article, we are going to learn how to load a .json file into an Azure SQL database table in Azure Data Factory. Let's start our demonstration. First of all, we have to create a SQL database: open your Azure portal and click on SQL databases at the top of the dashboard, then click on the + Create button. It will open a new window, as shown in the picture below; provide the required information, which is the Azure subscription, resource group, and database name, then select the server if you have already created one. If not, create a new server by clicking Create new under the Server field; this opens a window on the side of the dashboard where you provide the required information, which is the server name, server admin login, password, and location, and then click OK.

  
Fig-1: Create a new SQL database.

Once the server is selected, click on Configure database, where you can find scalable compute and storage options. Select the CPUs, memory, and so on as per your requirement; it will show you the total cost. Click Apply, then click the Review + Create button, and then click Create. While the database is in the deployment process, click on the server name we just created, click on Firewalls and virtual networks under the Security tab, set ''Allow Azure services and resources to access this server'' to ''Yes'', and click Save, as shown in the picture below.


Fig-2: Firewalls and virtual networks settings of the Server.

Once our SQL database is created, let's go to SQL Server Management Studio. Provide the Azure SQL server name and the login and password when it asks you to sign in, connect to the recently created Azure SQL server, and create a new table, as shown in the picture below.



Fig-3: Create a table in SQL server management studio.  
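
The exact columns depend on the fields in your .json file; a minimal, hypothetical sketch of a destination table like the one created in the demo (the demo loads customer data, so the table name and columns below are assumptions to adapt) could be:

CREATE TABLE dbo.Customer
(
    CustomerId INT           NOT NULL,
    FirstName  NVARCHAR(100) NULL,
    LastName   NVARCHAR(100) NULL,
    Country    NVARCHAR(100) NULL
);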

Once the table is created, go to Azure Data Factory and create a new pipeline: go to Pipelines, click on New pipeline, and name the pipeline. Search for the Copy data activity, drag it into the pipeline, click on it, and go to Source, then create a new source dataset: click on + New, select Azure Blob Storage, select the file format (in my case, JSON), and click Continue, as shown in the picture below.


Fig-4: Create a new Source Dataset.

Then create the linked service: click on New, select the Azure subscription and storage account name, and click Create. Then select the file path, which is the input folder, select the import schema option (in my case, From connection/store), and click OK.




Fig-5: Create a new linked service in our pipeline.

Once our source dataset is created, click on the Sink tab and create a new sink dataset. Select Azure SQL Database as the connection, then create a new linked service: provide the name, select your Azure subscription, database name, and authentication type, and enter the username and password, as shown in the picture below. Click OK, then select the table, import the schema, and click OK.

Fig-6: Create a linked service for our sink dataset.

Once our sink dataset is created, go to Mapping and click on Import schemas; if you want to change any column name or mapping, you can do it here, otherwise just click on Debug. Once the debug process is completed, go to SQL Server Management Studio and run the query ''SELECT * FROM dbo.Customer'', and it will show the records, as shown in the picture below.




Video Demo: How to Load JSON File to Azure SQL Database Table in Azure Data Factory


 


How to get String between two Underscores in SSIS Expressions - SSIS Real Time Example

 Issue: How to get String between two Underscores in SSIS Expressions - SSIS Real Time Example


In this article, we are going to learn how to get the string between two underscores in SSIS expressions, a real-time SSIS example. Let's start our demonstration. We are going to extract the part of a file name that sits between two underscores; that leads us to another scenario: how to use these files in the same Foreach Loop and route them to different data flows. This is a very common situation, since we often get a mix of files and need to load them into different data flows. Open Visual Studio (in my case, Visual Studio 2017), go to the SSIS Toolbox on the left side, search for the Foreach Loop Container, and bring it into the working window; inside the Foreach Loop we are going to point to where our files are. Double-click on the Foreach Loop Container and go to Collection. Inside the collection, select the Foreach File Enumerator, browse for the folder and select your file folder, then in the Files box restrict it to .txt files only, and select Name and extension, as shown in the picture below.


Fig-1: Foreach Loop Editor configuration of collection.

Then go to Variable Mappings, click on the variable, and name the variable; in my case, it is ''FullFileName''. Then click OK, as shown in the picture below.


Fig-2: Foreach Loop Editor, add a variable.

Once we are done with our Foreach Loop configuration, go to the SSIS Toolbox, search for the Script Task, and bring it into the Foreach Loop window. Then bring a Data Flow Task under the Script Task, add a second and a third Data Flow Task, connect all the Data Flow Tasks to the Script Task, and rename the Data Flow Tasks after your file names, as shown in the picture below.

Fig-3: Bring the data flow task under the Script task.

The next step is to write our expressions. Go to the folder containing your files, copy one of the file names, and paste it into the Value column of the ''FullFileName'' variable. Then create another variable; I will name this variable ''FileNameOnly'' with a data type of String. Write the expression by clicking the ellipsis button next to the new variable. First of all, we need the SUBSTRING function to extract the data, passing it the file name and the position where we want the string to start; to find that position we use another function, FINDSTRING, which tells us where the underscores are in the file name, as shown in the picture below. Once we have written the expression, click OK.


Fig-4: write the expressions for our variables.
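
As a reference, a minimal sketch of the kind of expression described above, assuming the text you want sits between the first and second underscore of @[User::FullFileName]; adjust the occurrence numbers passed to FINDSTRING to match your own file naming pattern:

SUBSTRING( @[User::FullFileName],
           FINDSTRING(@[User::FullFileName], "_", 1) + 1,
           FINDSTRING(@[User::FullFileName], "_", 2) - FINDSTRING(@[User::FullFileName], "_", 1) - 1 )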
     

Once we are done with our expressions, come back to the working window and double-click on the first Data Flow Task (in my case, Product_File), click on Expressions, and select the expression. Select our variable FileNameOnly and add == ''Product_File'', then click OK as shown in the picture below. Do the same for the two other data flow tasks, Customer_File and Sales_File.

Fig-5: Add expressions into data flow task.
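
So, for the first data flow task, the comparison configured above is simply the following sketch; the literal on the right changes for each data flow task, and which property the expression is attached to follows the video:

@[User::FileNameOnly] == "Product_File"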

Once we are done with the expressions, right-click on the package name and execute the package; you can then watch the execution results. If there is any problem with the expressions or file names, it will show an error; if everything goes right, it will show the results as in the picture below.


Fig-6: Successfully completed execution process.




Video Demo: How to get String between two Underscores in SSIS Expressions






How to find out who has deleted Azure Data Factory or other Resources in Azure - Azure Tutorial 2021

 Issue: How to find out who has deleted Azure Data Factory or other Resources in Azure.

In this article, we are going to learn how to find out who has deleted an Azure Data Factory or other resources in Azure. Let's go to the Azure portal and take a look: open the Azure portal, click on the menu (list) icon on the left side, and from the drop-down select Monitor, as shown in the picture below.



Once you open Monitor, click on Activity log; this activity log provides a lot of information. Select the subscription whose details or logs you want to see. Then you have the Event severity option, which has four values: Critical, Error, Warning, and Informational; you can select all of them, a couple, or just one, depending on your requirement. Next you have the Timespan option, where you can select the time duration, such as the logs from the last 6 hours, the last 24 hours, or a whole month. Then you have the Add filter option, with which you can add more filters, i.e. Resource, Resource type, Operation, Event initiated by, or Event category. I have selected the resource group, and you can see how it looks in the picture below.



    
In the above picture, you can see the Delete Data Factory logs and the Create or Update logs. If you need more information about any log entry, just click on it; it will open a window on the right side showing a summary, and it will also show the raw JSON, which you can see in the picture below.


Furthermore, if you want to search the logs by keyword, for example to see all the logs about a database, you can type it into the search bar and it will bring up all the related logs, as shown in the picture below.




Video Demo: How to find out who has deleted Azure Data Factory or other Resources in Azure

How to Convert JSON File to CSV File in Azure Data Factory - ADF Tutorial 2021

 Issue: How to Convert JSON File to CSV File in Azure Data Factory - ADF Tutorial 2021

In this article, we are going to learn how to convert a JSON file to a CSV file in Azure Data Factory. Let's start the demonstration. We already have the .json file in our input container, and now we have to convert this .json file to a .csv file. First of all, we have to create a folder for our CSV file: go to the blob storage, click on Containers under the Data storage tab, then click on the + Container button. A window opens on the right side of the dashboard; name the container and click Create. In my case, I have made a container named output, as shown in the picture below.


Fig-1: Create a folder for CSV file as output folder.

Once we have created our destination folder, let's go to Azure Data Factory and make a new pipeline. To create a pipeline, click on Pipelines, then click on New pipeline and name the pipeline. Search for the Copy data activity, drag it into the working window, click on it, and go to Source, as shown in the picture below. Here in the source, click on + New for the source dataset, select Azure Blob Storage, and click Continue, then select JSON, which is our input format, and click Continue. Name the dataset and create a new linked service: click on New, select the Azure subscription and storage account name, test the connection, and click Create. Then select the input file path, select From connection/store as the import schema option, and click OK.

Fig-2: Create source data set in copy data activity.

Click on the Sink tab; inside the Sink tab we have to create a new sink dataset. Click on the + New button, select Azure Blob Storage, and select DelimitedText (CSV), which is our output file format, then click Continue. Name the dataset Output, select the linked service we created earlier, then select the output folder and name the output file, select First row as header, select ''None'' as the import schema, and then click OK, as shown in the picture below.


Fig-3: Create sink data set and select linked service.


The next step is mapping: go to the Mapping tab and click on Import schemas. Inside the schema mapping you can add or remove columns or change the data format if required, as shown in the picture below.


Fig-4: Configure mappings and import schemas. 


Once you are done with your selections, click on Debug and it will start the execution process. When debugging is completed, you will be notified and can see the CSV file in your output folder, as shown in the picture below.


Fig-5: Output CSV file, debug process completed


Video Demo: How to Convert JSON File to CSV File in Azure Data Factory - ADF Tutorial


How to use Copy Activity to Read Json File & Limitation of Copy Activity - single array is supported ADF Tutorial 2021

Issue: How to use Copy Activity to Read Json File & Limitation of Copy Activity - the single array is supported in Azure data factory.

In this article, we are going to learn how to use the Copy activity to read a JSON file, and a limitation of the Copy activity: only a single array is supported. Let's start our demonstration. Open Azure Data Factory Studio, click on Pipelines, and then select New pipeline on the left side of the dashboard. Bring the Copy activity into the working window, click on it, go to the Source tab, and click + New in the source dataset option. Search for Azure Blob Storage and select it, click Continue, then select JSON and click Continue, as shown in the picture below.


Fig-1: Create a new pipeline.

Then it will ask for the linked service: click + New, and in the window that opens select the Azure subscription and then the storage account name, then click Create, as shown in the picture below.



Fig-2: Create a new linked service.

In the dataset properties, select the input folder as the path, select From connection/store as the import schema option, and click OK, as shown in the picture below.



Fig-3: Create a new source dataset.

Once our source dataset is created, go to the Sink and create our CSV dataset: click on + New, select Azure Blob Storage, and click Continue, then select ''DelimitedText'', which is the CSV format, and click Continue. Select the linked service, provide the output path, select None as the import schema, and then click OK, as shown in the picture below.



Fig-4: Create new dataset & import schema.

Then click on Sink and provide the output file name, which is ''OutputJson.csv''. Go back to your pipeline, click on the Mapping tab, then click on Import schemas, and then click on Debug. Once you click on Debug, the execution process starts and takes some time to complete; after completion, you can see the resulting file in the output folder, at the path we selected for our output file.
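
To illustrate the limitation mentioned in the title: the Copy activity can only map one array per JSON document into rows. A hypothetical input like the sketch below (made-up sample data) works fine, because each object in the single array becomes one CSV row; but if those objects contained further nested arrays, the Copy activity could not unroll them, and you would need a Mapping Data Flow with a Flatten transformation, which is covered in the next article.

[
    { "CustomerId": 1, "Name": "Customer A", "Country": "USA" },
    { "CustomerId": 2, "Name": "Customer B", "Country": "Canada" }
]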

Video Demo: How to use Copy Activity to Read Json File & Limitation of Copy Activity




How to Read Json File with Multiple Arrays By using Flatten Activity ADF Tutorial 2021

 Issue: How to Read JSON File with Multiple Arrays By using Flatten Activity.

In this article, we are going to learn how to read a JSON file with multiple arrays by using the Flatten activity in Azure Data Factory. Let's start our demonstration. Open Azure Data Factory, click on Pipelines, and then select New pipeline on the left side of the dashboard. Name the pipeline and bring the Data Flow activity into the working window, double-click on it, and click on Data flow debug; it will ask you for the debug time to live (in my case, I am selecting 1 hour), then click OK.

Once we are OK with our debug settings, double-click on the data flow and click on Settings. In the Settings tab, click on New; this opens the data flow editor so we can work on the data flow. Click on ''Add Source'', then go to Source settings, where we have the Dataset option. Click the + New button; a new window opens in which we have to create a new dataset. Select Azure Blob Storage, click Continue, then select JSON as the format and click Continue.

Once you click Continue, it will ask for a linked service, so we have to create a new one: click + New, and in the window that opens, as shown in the picture below, select the Azure subscription and storage account name, click on Test connection, and finally click Create.


Fig-1: Create a new linked service.


Once our linked service is created, we have to select our input file in the Set properties window and then click OK. Here I want to share one of the errors I faced: once the data flow debug session is ready, click on the Refresh button in the Data preview tab to see the error I am talking about. When I clicked Refresh, it returned the error shown in the picture below.


 Fig-2: Got an error during the process.



Solution: 
To resolve this error, click on the Source options tab and go to the JSON settings; inside the JSON settings, select the Document form option ''Array of documents'', which is shown in the picture below.


Fig-3: Solution of Error.


Once you set Document form to ''Array of documents'', go to the Data preview tab and click Refresh; it will now show the data without any error, as shown in the picture below.


Fig-4: Successfully previewed the data without any error.


Now that our source is created successfully, click on the + button and bring in the Flatten activity. Click on the Flatten activity and it will show the Settings tab; inside the Settings tab, select the options as per your requirement, as shown in the picture below.

Fig-5: Create flatten activity.


Once we are done with our first Flatten activity, click on the + sign again and bring in another Flatten activity. In the second Flatten activity, select ''social media'' as Unroll by, and then add another column named social media link, as shown in the picture below.


Fig-6: Create another flatten activity.
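
To make the two Flatten steps concrete, the input for this demo is a JSON document with nested arrays, roughly shaped like the hypothetical sketch below (field names are illustrative): the first Flatten unrolls the outer customers array into one row per customer, and the second Flatten unrolls each customer's social media array so that every link becomes its own row in the output CSV.

{
    "customers": [
        {
            "name": "Customer A",
            "socialmedia": [
                { "platform": "Twitter",  "link": "https://twitter.com/customer-a" },
                { "platform": "LinkedIn", "link": "https://linkedin.com/company/customer-a" }
            ]
        }
    ]
}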


Once we are done with our second Flatten activity, click on the + sign and bring in the Sink. Inside the Sink we are going to create a new dataset: click the + New button, select Azure Blob Storage, click Continue, select DelimitedText (CSV) as the file format, and click Continue, then select the linked service, choose the output container, select First row as header, select None for the import schema, and click OK. Now go to the Settings tab, select ''Output to single file'' as the file name option, provide the file name, and then go to Optimize and select ''Single partition''. Once we are done with the sink, go back to the pipeline and click on Debug. As you can see in the picture below, our data flow task completed successfully without any error.



Fig-7: Data flow task completed successfully. 



Video Demo: How to Read Json File with Multiple Arrays By using Flatten Activity ADF Tutorial