How to get String between two Underscores in SSIS Expressions - SSIS Real Time Example

 Issue: How to get String between two Underscores in SSIS Expressions - SSIS Real Time Example


In this article, we are going to learn how to get a string between two underscores in SSIS expressions, with a real-time example. We will extract the part of a file name that sits between two underscores, which leads us to a second scenario: using those files in the same Foreach Loop and routing them to different data flows. This is a very common situation, since we often receive a mix of files that have to be loaded through different data flows. Open Visual Studio (in my case Visual Studio 2017), go to the SSIS Toolbox on the left side of the designer, search for the Foreach Loop Container, and drag it into the design window. Inside the Foreach Loop we point to the folder that holds our files: double-click the Foreach Loop Container and go to Collection, select the Foreach File Enumerator, browse to your file folder, enter *.txt in the Files box (we only need .txt files), and select Name and extension, as shown in the picture below.


Fig-1: Foreach Loop Editor configuration of collection.

Then go to Variable Mappings, add a new variable and name it (in my case ''FullFileName''), and then click OK, as shown in the picture below.


Fig-2: Foreach Loop Editor, add variable.

Once we are done with the Foreach Loop configuration, go to the SSIS Toolbox, search for the Script Task, and drag it inside the Foreach Loop container. Then add the first Data Flow Task under the Script Task, followed by the second and third Data Flow Tasks, connect all three Data Flow Tasks to the Script Task, and rename each Data Flow Task after your file names, as shown in the picture below.

Fig-3: Bring the data flow task under the Script task.

The next step is to write our expression. Go to the folder that contains your files, copy one of the file names, and paste it into the Value column of the FullFileName variable so we have something to test against. Then create another variable; I will name this variable ''FileNameOnly'' and its data type will be String. Open the expression editor by clicking the ellipsis button next to the new variable. First of all we need SUBSTRING, because we want to pull a piece of data out of the file name: we give it the file name and tell it where the piece starts and how long it is. To find those positions we use another function, FINDSTRING, so bring it into the expression; inside FINDSTRING we describe which occurrence of the underscore we want to read from, and so on, as shown in the picture below (a sketch of the expression follows the figure). Once the expression is written, click OK.


Fig-4: Write the expression for our variable.
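As a minimal sketch, assuming a hypothetical file name such as dbo_Customer_20210101.txt where we want the text between the first and second underscore (''Customer''), the expression looks roughly like this; adjust the occurrence numbers in FINDSTRING to match your own file naming:

    SUBSTRING( @[User::FullFileName],
               FINDSTRING(@[User::FullFileName], "_", 1) + 1,
               FINDSTRING(@[User::FullFileName], "_", 2) - FINDSTRING(@[User::FullFileName], "_", 1) - 1 )

Here FINDSTRING returns the positions of the first and second underscore, and SUBSTRING takes the characters that sit between them.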
     

Once we are done with the expression, come back to the design window and double-click the precedence constraint (the arrow) that connects the Script Task to our first Data Flow Task, in my case Product_File. Set the evaluation operation to Expression, enter an expression that compares our variable FileNameOnly with ''Product_File'', and then click OK, as shown in the picture below (a sketch of the expressions follows the figure). Do the same for our two other Data Flow Tasks, Customer_File and Sales_File.

Fig-5: Add expressions for the data flow tasks.
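As a sketch, the three expressions simply compare the extracted value with the expected name; the literals on the right-hand side are whatever your own file naming produces:

    @[User::FileNameOnly] == "Product_File"
    @[User::FileNameOnly] == "Customer_File"
    @[User::FileNameOnly] == "Sales_File"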

Once we are done with the expressions, right-click on the package name and execute the package, then watch the execution results. If there is any problem with the expressions or the file names it will show an error; if everything goes right, it will show the results as in the picture below.


Fig-6: Successfully completed execution process.




Video Demo: How to get String between two Underscores in SSIS Expressions






How to find out who has deleted Azure Data Factory or other Resources in Azure - Azure Tutorial 2021

 Issue: How to find out who has deleted Azure Data Factory or other Resources in Azure.

In this article, we are going to learn how to find out who has deleted an Azure Data Factory or any other resource in Azure. Let's go to the Azure portal and take a look: open the Azure portal, click on the menu icon on the left side, and select Monitor from the list that drops down, as shown in the picture below.



Once you open Monitor, click on Activity log. This activity log provides a lot of information. Select the subscription whose details or logs you want to see. Then you have the Event severity option, with four choices: Critical, Error, Warning, and Informational; you can select all of them, a couple, or just one, depending on your requirement. Next there is the Timespan option, where you select the time range, for example the logs from the last 6 hours, the last 24 hours, or a whole month. Finally, there is the Add filter option, which lets you add more filters, e.g. Resource, Resource type, Operation, Event initiated by, or Event category. I have filtered on the resource group; you can see how it looks in the picture below.



    
In the picture above, you can see the Delete Data Factory and Create or Update Data Factory log entries. If you need more information about any entry, just click on it; a pane opens on the right side showing a summary as well as the JSON of the event, which you can see in the picture below.


Furthermore, if you want to search the logs by keyword, for example to see all the logs related to a database, you can type the keyword in the search bar and it will bring up all the related log entries, as shown in the picture below.




Video Demo: How to find out who has deleted Azure Data Factory or other Resources in Azure

How to Convert JSON File to CSV File in Azure Data Factory - ADF Tutorial 2021

 Issue: How to Convert JSON File to CSV File in Azure Data Factory - ADF Tutorial 2021

In this article, we are going to learn how to convert a JSON file to a CSV file in Azure Data Factory. Let's start the demonstration. We have a .json file in our input container and we have to convert it to a .csv file. First of all, we have to create a container for the CSV output: go to the blob storage account, click on Containers under the Data storage tab, then click the + Container button; it opens a pane on the right side of the dashboard. Name the container and click Create. In my case I have created a container named output, as shown in the picture below.


Fig-1: Create a folder for CSV file as output folder.

Once the destination folder is created, let's go to Azure Data Factory and create a new pipeline. Click on Pipelines, then click on New pipeline, name the pipeline, search for the Copy data activity, and drag it into the design window. Click on the Copy data activity and go to the Source tab, as shown in the picture below. In the Source tab click on + New source dataset, select Azure Blob Storage, and click Continue, then select JSON, which is our input format, and click Continue. Name the dataset and create a new linked service: click New, select the Azure subscription and storage account name, test the connection, and click Create. Then select the input file path, choose From connection/store for the import schema option, and click OK.

Fig-2: Create source data set in copy data activity.

Click on the Sink tab; inside the Sink tab we have to create a new sink dataset. Click the + New button, select Azure Blob Storage, and choose DelimitedText (CSV), which is our output file format, then click Continue. Name the dataset Output, select the linked service we created earlier, select the output folder, name the output file, check First row as header, select ''None'' for import schema, and click OK, as shown in the picture below.


Fig-3: Create sink data set and select linked service.


The next step is mapping: go to the Mapping tab and click on Import schemas. In the mapping you can add or remove columns or change the data format if required, as shown in the picture below (a small example follows the figure).


Fig-4: Configure mappings and import schemas. 
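To make the conversion concrete, here is a hypothetical input and the CSV the pipeline would produce from it; your own file and column names will of course differ:

    [
      { "Id": 1, "Name": "Aamir", "Country": "USA" },
      { "Id": 2, "Name": "Raza", "Country": "Canada" }
    ]

    Id,Name,Country
    1,Aamir,USA
    2,Raza,Canada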


Once you are done with your selections, click on Debug and the execution will start. When debugging is completed you will be notified, and you can see the CSV file in your output folder, as shown in the picture below.


Fig-5: Output CSV file, debug process completed


Video Demo: How to Convert JSON File to CSV File in Azure Data Factory - ADF Tutorial


How to use Copy Activity to Read Json File & Limitation of Copy Activity - single array is supported ADF Tutorial 2021

Issue: How to use Copy Activity to read a JSON file & the limitation of Copy Activity - only a single array is supported in Azure Data Factory.

In this article, we are going to learn how to use the Copy activity to read a JSON file, and a limitation of the Copy activity: only a single array is supported. Let's start our demonstration. Open Azure Data Factory Studio, click on Pipelines on the left side of the dashboard, and select New pipeline. Drag the Copy activity into the design window, click on it, go to the Source tab, and click + New in the source dataset option; search for Azure Blob Storage and select it, click Continue, then select JSON and click Continue, as shown in the picture below.


Fig-1: Create a new pipeline.

Then it will ask for a linked service: click + New, and in the window that opens select the Azure subscription and the storage account name, then click Create, as shown in the picture below.



Fig-2: Create a new linked service.

In the dataset properties, select the input folder as the path, choose From connection/store for the import schema option, and click OK, as shown in the picture below.



Fig-3: Create a new source dataset.

Once our source dataset is created, go to the Sink tab and create the CSV dataset: click + New, select Azure Blob Storage, click Continue, select ''DelimitedText'', which is the CSV format, and click Continue. Then select the linked service, provide the output path, choose ''None'' for import schema, and click OK, as shown in the picture below.



Fig-4: Create new dataset & import schema.

Then click on the sink dataset and provide the output file name, which is ''OutputJson.csv''. Go back to your pipeline, click on the Mapping tab, click Import schemas, and then click Debug. Once you click Debug the execution starts and takes some time to complete; after completion you can see the resulting file in the output folder, the path we selected for the output file. The limitation mentioned in the title is that the Copy activity can flatten only a single array from a JSON file; if your file contains multiple or nested arrays, you need a mapping data flow with the flatten activity, which is covered in the next article (a hypothetical example of such a file follows).
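For illustration only, a file shaped like the one below has more than one array inside each document; the Copy activity can cross-apply just one of those arrays, so it cannot fully flatten this structure on its own:

    [
      {
        "name": "Aamir",
        "skills": [ "SSIS", "ADF" ],
        "orders": [ { "orderid": 1 }, { "orderid": 2 } ]
      }
    ]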

Video Demo: How to use Copy Activity to Read Json File & Limitation of Copy Activity




How to Read Json File with Multiple Arrays By using Flatten Activity ADF Tutorial 2021

 Issue: How to Read JSON File with Multiple Arrays By using Flatten Activity.

In this article, we are going to learn how to read a JSON file with multiple arrays by using the flatten activity in Azure Data Factory. Let's start our demonstration. Open Azure Data Factory, click on Pipelines on the left side of the dashboard, and select New pipeline. Name the pipeline and drag a Data Flow activity into the design window. Double-click the data flow, then turn on Data flow debug; it asks for the debug time to live (in my case I selected 1 hour), then click OK.

Once we are OK with the debug settings, double-click the data flow and go to the Settings tab, then click New; this opens the editor where we build the data flow. Click ''Add Source'' and go to the Source settings; under the Dataset option click the + New button, which opens a new window in which we create a new dataset: select Azure Blob Storage, click Continue, select JSON as the format, and click Continue.

Once you click Continue it will ask for a linked service, so we have to create a new one: click + New and it opens the window shown in the picture below; select the Azure subscription and storage account name, click Test connection, and finally click Create.


Fig-1: Create a new linked service.


Once our linked service is created, select the input file in the Set properties window and click OK. Here I want to share one of the errors I faced: once data flow debug is ready, click the Refresh button in the Data preview tab to see the error I am talking about. When I clicked Refresh it gave me the error shown in the picture below.


 Fig-2: Got an error during the process.



Solution: 
To resolve this error, click on the Source options and go to the JSON settings; inside the JSON settings, set Document form to ''Array of documents'', as shown in the picture below.


Fig-3: Solution to the error.


Once you set Document form to ''Array of documents'', go to the Data preview tab and click Refresh; it now shows the data without any error, as shown in the picture below.


Fig-4: Data preview succeeds without any error.


As our source is created successfully, now click the + button and add the flatten activity. Click on the flatten activity to open its Settings tab, and inside the Settings tab select the options according to your requirements, as shown in the picture below.

Fig-5: Create flatten activity.


Once we are done with our first flatten activity, click the + sign again and add another flatten activity. In the second flatten activity, select ''social media'' in the Unroll by option and then add another column named social media link, as shown in the picture below.


Fig-6: Create another flatten activity.


Once we are done with our second flatten activity, click the + sign and add the sink. Inside the sink we are going to create a new dataset: click + New, select Azure Blob Storage, click Continue, select CSV (DelimitedText) as the file format, and click Continue; then select the linked service, choose the output container, check First row as header, select ''None'' for import schema, and click OK. Now go to the Settings tab, select ''Output to single file'' as the file name option and provide the file name, then go to Optimize and select ''Single partition''. Once we are done with the sink, go back to the pipeline and click Debug. As you can see in the picture below, our data flow task completed successfully without any error.



Fig-7: Data flow task completed successfully. 



Video Demo: How to Read Json File with Multiple Arrays By using Flatten Activity ADF Tutorial 



How to Generate Sample Data online for SQL Server Table SQL Server Tutorial 2021

 Issue: How to Generate Sample Data online for SQL Server Table SQL Server Tutorial 2021.

In this article, we are going to learn how to generate sample data online for a SQL Server table. This is a very common scenario: people always ask for sample data for doing different kinds of practice. Just to let you know, I am not advertising the website I am going to show in the demo; there are other websites for generating sample data as well.

Open your browser and go to www.generatedata.com. It opens a page where you describe the sample data you need. In my case I am selecting the following columns: 1. ID; 2. Fname, with the Names data type; 3. Lname, with the Names data type; 4. Sale date, with the Date data type. The next option is the number of rows; you can generate a maximum of 100 rows. Then select In-Page generation and click Generate.


Fig-1: Sample data generation website page

Once the data is generated, we select all of it and copy it; as you can see in the picture below, we have the sample data in in-page format (a sketch of what the generated script looks like follows the figure).

Fig-2: Generated sample data in in-page format.
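For reference, the generated in-page output is a SQL script roughly like the following; the table name, columns, and values here are purely illustrative:

    CREATE TABLE SampleData (
        ID INT,
        Fname VARCHAR(50),
        Lname VARCHAR(50),
        SaleDate DATE
    );

    INSERT INTO SampleData (ID, Fname, Lname, SaleDate) VALUES (1, 'Aamir', 'Shahzad', '2021-03-15');
    INSERT INTO SampleData (ID, Fname, Lname, SaleDate) VALUES (2, 'John', 'Smith', '2021-07-02');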

Once the data is copied, open SQL Server Management Studio, go to the database into which you want to insert the sample data, open a new query window, paste the copied script, and execute it. As we can see in the picture below, the execution completed and 100 rows were inserted.


Fig-3: Inserted sample data in the already created database.


Video Demo: How to Generate Sample Data online for SQL Server Table SQL Server Tutorial 









How to Take backup and Restore Azure Data Factory ADF Tutorial 2021

 Issue: How to Take backup and Restore Azure Data Factory ADF Tutorial 2021

In this article, we are going to learn how to take a backup of your Azure Data Factory and restore it with a different name. Let's start the demonstration. Open Azure Data Factory Studio, click on the Manage tab, click on ARM template, and then click on Export ARM template, as shown in the picture below.

Fig-1: Take a backup of Azure Data Factory.

Once the export is done, it shows a completion notification; click on it and go to the path where the backup is located. You will find a zip file; to unzip it, right-click on it and click Extract All. Once the extraction is completed, open the extracted folder, where you can see the files shown in the picture below.

Fig-2: Extracted files of recent backup

As the backup process is complete, we now have to restore it. To restore, first of all we have to create a new, blank data factory: open the Azure portal, click on Data factories, then click the + New button. It opens a page in which you provide the required information, as shown in the picture below; then click Review + create and then click Create.

Fig-3: Create a new Azure Data Factory

Once our new data factory is created, open Azure Data Factory Studio, click on the Manage tab, click on ARM template under Source control, and then click on ''Import ARM template''. It opens the custom deployment page in the Azure portal, where we select ''Build your own template in the editor''. In the editor, click the Load file button and select the file named ''arm_template.json'' from the folder we extracted from the zip file earlier. Once you select the JSON file it is shown as in the picture below; you just have to enter your new ADF name in the highlighted field and then click Save.

Fig-4: JSON file for the backup.

Once you save the file, a page opens asking for some more information: you have to provide the subscription, the resource group, the factory name, the Input Folder connection string, and the Azure SQL Database1 connection string; then click Review + create and then click Create.





Once you click Create, the ARM template is deployed to the new data factory, restoring the backup there.



Video Demo: How to Take backup and Restore Azure Data Factory ADF Tutorial 2021

How to Export SSIS Package from SSISDB from SQL Server SSIS Tutorial 2021

 Issue: How to Export SSIS Package from SSISDB from SQL Server SSIS Tutorial 2021.


In this article, we are going to learn how to export an SSIS package from SSISDB in SQL Server and then add it to a Visual Studio project. Let's start the demo. First of all, open SSMS and connect to your SQL Server instance, then expand Integration Services Catalogs, expand SSISDB, expand Projects, and click on Packages under the project, as shown in the picture below.


Fig-1: Export SQL Server Integration Project as .ispac

Right-click on the project, in my case My Projects, and click Export. Once you click Export it asks for the path where you want to export the file; provide the name and path and click Save, and it creates an .ispac file. Now that the .ispac file is ready, go to the folder, right-click the file, and rename its extension from .ispac to .zip; then right-click the zip file and click Extract All. Once the extraction is completed it creates a new folder with all the files, as shown in the picture below.


Fig-2: Exported SQL Server Integration package file 

Right-click on the SSIS package file in the extracted folder and copy it, then open Visual Studio (in my case Visual Studio 2017), open your project, right-click on the SSIS Packages node, and paste the copied file there. Now you can easily use this package as you need.



Video Demo: How to Export SSIS Package from SSISDB from SQL Server SSIS Tutorial 2021

How to Pass Parameters to SQL query in Azure Data Factory ADF Tutorial 2021.

 Issue: How to Pass Parameters to SQL query in Azure Data Factory ADF Tutorial 2021.


In this article, we are going to learn how to pass parameters to a SQL query in Azure Data Factory. Let's start our demonstration. First of all, we have to create a pipeline: open the Azure Data Factory portal, go to Author, then go to Pipelines and click on New pipeline. Once the new pipeline opens, first of all we have to create two parameters; in my case I created them by clicking the + New button in the Parameters tab, as shown in the picture below.

 Fig-1: Create parameters for the pipeline in Azure Data Factory.


After creating the parameters, drag the Copy data activity into the design window. Inside the Copy activity go to the Source tab and click the + New button, select Azure SQL Database, click Continue, and then create a new linked service as shown in the picture below: name the linked service, select the Azure subscription, server name, database name, authentication type, user name, and password, then test the connection and click Create.



Fig-2: Create linked service for copy activity in azure data factory.


Once our linked service is created, come back to the Source tab. Here we have three options: 1. Table, 2. Query, 3. Stored procedure. We will go with Query; click on it and then click ''Add dynamic content [Alt+Shift+D]'', as shown in the picture below.



Fig-3: Select the pipeline source as a query.


Once we click on ''Add dynamic content [Alt+Shift+D]'', it opens a pane where we enter the query expression @concat('Select * from dbo.totalsale where Region=''',pipeline().parameters.Region,''''), which is shown in the picture below.

Fig-4: Query with our parameter in Azure Data Factory.


Once we have entered the query, click OK and then click Debug. Once the debug run is completed, go to the storage account, click on the container, open the output folder, click on the file, and then click Edit; it shows all the records related to our selected parameter, the Region, as shown in the picture below.


Fig-5: It shows the records as per our executed query. 


The picture above shows the records for our selected parameter. Now we are going to use our second parameter, the country; let's write the query so it uses both parameters: @concat('Select * from dbo.totalsale where Region=''',pipeline().parameters.Region,'''',' and Country=''',pipeline().parameters.country,''''). Click OK and then click Debug, as shown in the picture below.


Fig-6: Write another query with different parameters.
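At runtime the expression above simply builds the SQL text from the parameter values. Assuming, for illustration, Region = ''Asia'' and country = ''Pakistan'' (Pakistan is the value used in the run below; the Region value is just an example), the expression resolves to:

    Select * from dbo.totalsale where Region='Asia' and Country='Pakistan'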


Once the run completes successfully, go back to the storage account, click on the container, go to the folder, select the file, and click Edit. As you can see in the picture below, this time it returned only the data for Pakistan, which is the value of the second parameter.



Fig-7: It shows the records as per our executed query.




Video Demo: How to Pass Parameters to SQL query in Azure Data Factory ADF Tutorial 2021.

















How to Delete all files from Folder - Delete Activity in Azure Data Factory ADF Tutorial 2021

 Issue: How to Delete all files from Folder - Delete Activity in Azure Data Factory ADF Tutorial 2021

In this article, we are going to learn how to delete all files from a folder. We are going to perform a demonstration in which we delete some files from a container. Let's start our demo. Open the Azure portal and create a new blob storage account: click on Storage accounts and then click the + New button. It opens a page in which we fill in the required fields, which are 1. Subscription, 2. Resource group, 3. Storage account name, 4. Region; then click on Review + create and then click Create, as shown in the picture below.


Fig-1: Create a new azure blob storage

Once our blob storage is created, we are going to create some containers in it and then upload some files. Let's go to the resource we have created and click on Containers on the left side of the dashboard under the Data storage tab. Click + Container, give it a name, and then click Create; in my case I named my first container ''input''. Then click + Container again and create another container the same way. You can see in the picture below that our containers are created.

Fig-2: Create data containers for performing delete activity 

Once our containers are created, we upload files into them; as you can see in the picture below, we have uploaded some files in different formats, e.g. .txt, .csv, and .xlsx.


Fig-3: uploaded files in the input container.

As our files are uploaded, now we have to build the delete activity. Let's go to Azure Data Factory, click on Author, click the + sign, and then click Pipeline; it opens a window in which we give our pipeline a name. Then search for the Delete activity and drag it into the design window. Click on the Delete activity and define the source: go to the Dataset option in the Source tab and click the + New button. In the window that opens, search for Azure Blob Storage and select it, then click Continue; in the next window give your dataset a name and create the linked service. Once you click on New linked service, the window shown in the picture below opens; give the linked service a name and click Create, then select the folder path, in my case the input folder, and click OK.


           

Fig-4: Create a new linked service.

Once our dataset is created, I want to delete only the .txt files: enable the wildcard option and enter *.txt as the wildcard file name, which means only .txt files will be selected. Now go to the Logging settings, select the logging account linked service (in my case blobstoragelnk), and select the folder path for the logs of the deleted files. Then go back to the pipeline and click Debug; as you can see in the picture below, 3 .txt files have been deleted.

Fig-5: Shown summary of deleted files which were selected by *.txt

Now that we have performed the delete activity on the selected files, let's delete all the files in the folder. Click on the Delete activity and go to the Source tab; if you remember, we wrote *.txt to delete the .txt files. Replace *.txt with *.* and click Debug; as you can see in the picture below, all 6 files have been deleted.


Fig-6: Shown summary of deleted files which were selected by *.*



Video Demo: How to Delete all files from Folder - Delete Activity in Azure Data Factory ADF Tutorial 2021



How to upload files from On-Prem to Azure Data Lake Gen2 by using SSIS Package

Issue: How to upload files from On-Prem to Azure Data Lake Gen2 by using SSIS Package

In this article, we are going to learn how to upload files from on-premises to Azure Data Lake Storage Gen2 by using a SQL Server Integration Services (SSIS) package. First of all, we have to create the Azure Data Lake Storage Gen2 account. To create ADLS Gen2, open the Azure portal, search for Storage accounts in the search bar, and click on it; it opens a page where you fill in the required fields shown in the picture below: subscription, resource group, storage account name, region, performance, and redundancy. Then click on Advanced and select ''Enable hierarchical namespace'', which makes it an ADLS Gen2 type of storage; if you don't, a normal blob storage account will be created. Click Review + create and then click Create.



Meanwhile, as the Azure Data Lake Storage Gen2 account is being created, let's go to Visual Studio 2017 and create a SQL Server Integration Services package. To create the SSIS project, click on File, then New > Project, select Integration Services Project, name the project, and click OK, as shown in the picture below.


Once our SQL Server Integration Services project is created, right-click on the project, go to Properties, and change the target server version; in my case I am using Visual Studio 2017, so I set it to SQL Server 2017, and click OK. Now go to the SSIS Toolbox on the left side and drag in the ''Flexible File Task'' from under the Azure tab. Double-click it and a new window opens, which is shown in the picture below: give the source folder path and the source file name, select the destination connection type, and then create a connection by clicking on New Connection and selecting ''Data Lake Storage Gen2''; give the account name and account key and click OK. Once the connection is made, select the destination folder path, in my case ''Input'', provide the destination file name, click OK, and execute the task.


Once our task is executed, go to the Progress tab; it shows the file that was copied. Now, if you want to copy all the files, just go back to your Flexible File Task Editor, change the source file name to ''*'', and leave the destination file name blank, as shown in the picture below.


After the change in the Flexible File Task Editor, go back to the designer and execute the task again; then check the Progress tab, and it will have copied all the files that exist in that folder.




Video Demo: How to Upload Files from On-Prem to Azure Data Lake Gen2 by using SSIS Package - SSIS Tutorial 2021