Join two XML files using XSL Transformation

This blog post demonstrates how you can JOIN (yes, like in SQL) two XML files (datasets) into one using a common ID (the relationship).

shop.xml


<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type='text/xsl' href='main.xsl'?>
<shop>
  <product>
    <id>100</id>
    <title>hello</title>
  </product>
  <product>
    <id>101</id>
    <title>world</title>
  </product>
  <product>
    <id>102</id>
    <title>praveen</title>
  </product>
</shop>

price.xml

<?xml version="1.0" encoding="UTF-8"?>
<shop>
  <product>
    <id>100</id>
    <price>10.0</price>
  </product>
  <product>
    <id>101</id>
    <price>10.1</price>
  </product>
  <product>
    <id>102</id>
    <price>10.2</price>
  </product>
  <product>
    <id>103</id>
    <price>10.3</price>
  </product>
</shop>

main.xsl


<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="xml" version="1.0" encoding="UTF-8" indent="yes" />

  <!-- Path of the second dataset; can be overridden when invoking the transformation -->
  <xsl:param name="fileName" select="'price.xml'" />

  <!-- Load all product nodes from the second file -->
  <xsl:variable name="updateItems" select="document($fileName)/shop/product" />

  <!-- Identity transform: copy every node and attribute as-is... -->
  <xsl:template match="@*|node()">
    <xsl:copy>
      <xsl:apply-templates select="@*|node()" />
      <!-- ...and append the matching price (joined on id) from the second file -->
      <xsl:copy-of select="$updateItems[id=current()/id]/price" />
    </xsl:copy>
  </xsl:template>
</xsl:stylesheet>

The resulting XML, after the transformation, should look like this:


<?xml version="1.0" encoding="utf-8"?>
<?xml-stylesheet type='text/xsl' href='main.xsl'?>
<shop>
  <product>
    <id>100</id>
    <title>hello</title>
    <price>10.0</price>
  </product>
  <product>
    <id>101</id>
    <title>world</title>
    <price>10.1</price>
  </product>
  <product>
    <id>102</id>
    <title>praveen</title>
    <price>10.2</price>
  </product>
</shop>

Bonus: for .NET developers who want code to run the transformation:


// Requires: using System.Xml.Xsl;
// XslCompiledTransform is the recommended replacement for the obsolete XslTransform class.
XslCompiledTransform xslt = new XslCompiledTransform();
xslt.Load(@"D:\Websites\xmltest\main.xsl");
xslt.Transform(@"D:\Websites\xmltest\shop.xml", @"D:\Websites\xmltest\out.xml");
textBox1.Text = System.IO.File.ReadAllText(@"D:\Websites\xmltest\out.xml");

SSAS: Dimension Relationships in Cubes

“Dimension relationship” refers to the direct or indirect relationship between a dimension and the measure groups in a cube.

  • Regular – A standard relationship, where a key column in the dimension is joined directly to the fact table.
  • Reference – The key column in the dimension is joined indirectly to the fact table by referencing another dimension.
  • Fact / Degenerate – The dimension is constructed from attribute columns in the fact table rather than from attribute columns in dimension tables.
  • Many-to-Many – One dimension is associated with multiple facts.

Read more: https://docs.microsoft.com/en-us/sql/analysis-services/multidimensional-models-olap-logical-cube-objects/dimension-relationships?view=sql-server-2017

Note: My study notes

Common RAID levels explained

  • RAID 0 – Disk Striping

– Used for storing noncritical data that needs fast read/write performance.
– Does not use parity (parity is redundant information that lets lost data be detected and reconstructed).
– Does not have redundancy or fault tolerance, i.e., if a drive dies, the data is lost.

  • RAID 1 – Disk Mirroring

– Commonly used for installing the OS, database engines (e.g., SQL Server), etc.
– Data is written to two or more disks in parallel (mirrored)
– High performance
– High availability
– No data loss on disk failure

  • RAID 2
  • RAID 3
  • RAID 4
  • RAID 5 – Striping with Parity

Requires a minimum of 3 drives (commonly 3-16)
No data loss on a single disk failure
Read is faster, but write can be slower (parity must be computed)
A drive failure impacts throughput until the array is rebuilt

  • RAID 6 – Striping with Double Parity

Requires a minimum of 4 drives
The equivalent of two drives is used for storing parity data
Read is faster, but write can be slower than RAID 5
Can survive two simultaneous disk failures, so more resilient than RAID 5

  • RAID 7
  • RAID 10 / RAID 1+0 – Striped Set of Mirrors

“10” means combining RAID 1 and RAID 0, not “ten”
Combines disk mirroring and disk striping
Requires a minimum of 4 disks
Best choice for I/O-intensive applications
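
To see how these trade-offs play out in numbers, here is a small Python sketch (my own illustration, not part of the original notes) that estimates usable capacity for the common levels above, assuming equal-sized disks:

def usable_capacity(level, disks, disk_size_tb):
    """Approximate usable capacity (in TB) for common RAID levels, assuming equal-sized disks."""
    if level == 0:        # striping only: all capacity is usable, no redundancy
        return disks * disk_size_tb
    if level == 1:        # mirroring: every disk holds the same copy
        return disk_size_tb
    if level == 5:        # striping with single parity: one disk's worth goes to parity
        return (disks - 1) * disk_size_tb
    if level == 6:        # striping with double parity: two disks' worth go to parity
        return (disks - 2) * disk_size_tb
    if level == 10:       # striped set of mirrors: half the disks hold copies
        return (disks // 2) * disk_size_tb
    raise ValueError("RAID level not covered in these notes")

for level, disks in [(0, 4), (1, 2), (5, 4), (6, 4), (10, 4)]:
    print(f"RAID {level} with {disks} x 1 TB disks -> {usable_capacity(level, disks, 1)} TB usable")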

Note: Blog incomplete. Will be updated.
Note: My learning notes, source: Internet

Programming Puzzle #2 – Leet Converter

Write a program in a computer language of your choice to convert any given text to “leet format” in real time.

Leet (or “1337”) is a system of modified spellings used primarily on the Internet.

Input: “Translator”, Output: “Tr4nsl4t0r”
Input: “leet”, Output: “l33t”
Input: “Good Morning”, Output: “G00d M0rn1ng”

Evaluation criteria:

  1. Code Quality Standards
  2. OOAD/Object-Oriented Analysis & Design
  3. Application Logic
  4. Exception Handling
  5. Simplicity and Effectiveness of code

Time: 0-30 minutes max.
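
For reference, here is one minimal Python sketch of a possible solution; the substitution map is an assumption inferred from the sample outputs above:

LEET_MAP = {"a": "4", "e": "3", "i": "1", "o": "0"}

def to_leet(text):
    # Substitute mapped characters (case-insensitive); leave everything else unchanged
    return "".join(LEET_MAP.get(ch.lower(), ch) for ch in text)

print(to_leet("Translator"))    # Tr4nsl4t0r
print(to_leet("leet"))          # l33t
print(to_leet("Good Morning"))  # G00d M0rn1ng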

Programming Puzzle #1 – Find the critical path

Write a program in a language of your choice to find the critical path from a given set of tasks.

A critical path is determined by identifying the longest stretch of dependent activities and measuring the time required to complete them from start to finish.

image

Each circle (A-G) is a task with a specific duration (in hours).

Input:

Array of task names and duration given in the diagram.

Output:

1. Longest path (Critical path) is A+G+B+F+C+D (42Hrs)
2. Shortest path is A+B+C+D (26 Hrs)
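
Since the diagram itself is not reproduced here, the Python sketch below uses a placeholder task graph (durations and dependencies are assumptions, chosen so the totals match the sample output above); the recursion shows one simple way to find the longest chain of dependent tasks:

# Hypothetical task graph: duration (hours) per task, and each task's successor tasks
durations = {"A": 5, "B": 8, "C": 6, "D": 7, "E": 4, "F": 9, "G": 7}
successors = {"A": ["B", "G"], "G": ["B"], "B": ["C", "F"], "F": ["C"], "C": ["D"], "D": [], "E": ["D"]}

def longest_path(task):
    # Return (total_hours, path) for the longest chain starting at `task`
    best_hours, best_path = 0, []
    for nxt in successors.get(task, []):
        hours, path = longest_path(nxt)
        if hours > best_hours:
            best_hours, best_path = hours, path
    return durations[task] + best_hours, [task] + best_path

hours, path = longest_path("A")
print("Critical path is " + "+".join(path) + " (" + str(hours) + " Hrs)")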

Why Cosmos DB may not be apt for building Data Warehouse?

Well, the question is slightly wrong until the context is specified, because it is possible to build a modern data warehouse that includes Cosmos DB in the architecture. This is very relevant today because data is no longer straightforward content with human-readable entities and relations (structured); it is also unstructured and/or streaming. Also, the pace of data flow, and of business requirements, is becoming near real-time.

See a reference architecture below:

Image source: MS Docs

Here, in this blog, the context is the possibility of a traditional data warehouse, where you will be modelling the data, specifying relationships, etc. Let us look at the definition of a data warehouse given in the Oracle docs:

“A data warehouse is a relational database that is designed for query and analysis rather than for transaction processing.”

Now let us ask the right question: why may Cosmos DB not be apt for use as the data store in a data warehouse? It is not apt because Cosmos DB is a NoSQL database, where it is not easy to draw relationships between entities/tables/data. Check what an MSDN blog said about this:

“Cosmos DB is not a relational database. You cannot just take your relational database and expect it to run in Cosmos DB. You could move tables of data into Cosmos, but not the relational aspects of your existing data structures.”

As of today, this is the conclusion. But we cannot say what will happen to these concepts tomorrow, because Cosmos DB is becoming more powerful, and I am already in love with it.

You can read about common scenarios (use cases) where you, or other companies, can use Cosmos DB here.

Do you have different thoughts on this? Please comment.

Getting started with Azure Databricks

Introduction

What is Azure Databricks?

Azure Databricks is the Databricks platform offered as a managed service on Azure. This managed service allows data scientists, developers, and analysts to create, analyse and visualize data science projects in the cloud.

Databricks is a user-friendly analytics platform built on top of Apache Spark. It acts as a UI layer, a WYSIWYG dashboard where you can create clusters, manage notebooks, write code and analyse data without knowing the internals of the system. Apache Spark is a unified analytics engine for large-scale data processing, and it currently supports popular languages such as Python, Scala, SQL and R.

About the article

If you already know Databricks, then a tutorial is not necessary to get started, because Azure Databricks uses the same management portal as Databricks.

Though there are different strategies possible for creating and managing Databricks projects, I have followed the flow below in this article:

image

Screenshots and steps provided in this article are valid as of 20 Sept 2018. Technology advances quickly, and the Azure portal is updated just as fast, so please be aware of any changes to the portal flow when you try this out. I will try to keep this tutorial up to date.

Login to Azure Portal

You need at least a trial account to get started. Visit the Azure home page to get one – https://azure.microsoft.com/

Step 1: Create your first Databricks workspace

The first step in creating a Databricks project is to create a workspace.

The typical steps are to click “+ Create a resource” → “Analytics” → “Azure Databricks”.

image

In the workspace creation wizard, you will have to provide the following details:

A. Workspace name: Give a unique name (retry until you get a green tick mark on the right; a red X means someone has already taken your favourite name).

B. Subscription: Choose an appropriate subscription plan, or leave the default value if you do not know what this is about

C. Resource Group: Choose an existing resource group, or create a new one (provide a new name if you do not know what this box is about).

D. Location: This is the Azure region (data center). Select the location nearest to you from the dropdown, or keep the default.

E. Pricing Tier: This one is about cost, so be careful. I would go with a trial tier when doing this for learning purposes. You can read more about the pricing tiers here.

image

Click the “Create” button and wait until the workspace gets created. This will take a couple of minutes, and you will get a notification once it is completed.

image

Once the workspace is created, you can go to “All resources” and click your newly created workspace name in the list.

image

The resource dashboard will look like this:

image

Now it is time for some action. Click the “Launch Workspace” button, and you will be directed to a new browser page. You will be signed into the portal automatically.

Your Azure Databricks journey starts here.

image

From here, different strategies are possible for executing projects. Since a full-fledged project with meaningful data analysis is out of the scope of this article, we will try out a simple example: querying a dataset and plotting a bar chart.

Let us load a dataset and visualize using a notebook.

For this purpose, I have downloaded a dataset from the internet about literacy rates in India. You may download any freely available dataset, or create one of your own; we are not going to do any complex analysis in this example, so a simple dataset is enough. Note that the values in the dataset are not real. My CSV file looks like this, with the first row as the header row.

image

Create Cluster

For storing the data and doing the processing, we need some powerful machines. In Databricks these are called clusters; let us create one in this section.

On the dashboard, click “New Cluster”.

I am giving the cluster the name “MyFirstCluster”. If you are already comfortable with the Azure portal, you will know most of the input parameters mentioned on the page. If you are a beginner, I suggest leaving all the other settings as they are and clicking the “Create Cluster” button to proceed.

image

It will take some time to complete the cluster creation; for me it took about 5-10 minutes. You can see the status of the cluster creation in the next screen.

image

Once the cluster is created, the status will change from “Pending” to “Running”.

image

Once the cluster is created, we are ready to upload data and create notebooks. Let us upload the data first.

Upload data

Upload the already prepared/downloaded dataset to the newly created workspace.

Go back to the dashboard and click “Upload Data”.

image

In the next screen, give the dataset a name and upload the file. In my case I am using a CSV file with some 35 rows. Your dataset can be bigger, but note that depending on its size the upload and processing can take more time.

image

Once the upload is completed, you can create the notebook.

Create Notebook

A notebook, in this context, is an interactive web-based editor that allows data scientists, analysts and developers to write and collaborate on scripts and notes to analyse and visualize data.

You can create the notebook either by clicking “Create Table” on the dashboard screen, or as a continuation of the last step. When you click the “Create Table in Notebook” button in the above screen, the Databricks service will create a sample notebook for you with sufficient sample code, with Python as the default language.

image
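
The generated sample code typically boils down to something like the following Python sketch (the file path shown here is an assumption; your notebook will point to the file you uploaded):

# Location of the uploaded file (assumed path; adjust to your own upload)
file_location = "/FileStore/tables/literacy_rate.csv"

# Read the CSV with the first row as header and inferred column types
df = (spark.read.format("csv")
      .option("header", "true")
      .option("inferSchema", "true")
      .load(file_location))

# Render the data as an interactive table in the notebook
display(df)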

Make sure that you have a cluster attached to this notebook. If you see a “Detached” status at the top-left, choose a cluster by clicking on the “Detached” text. Without a cluster, you cannot run the scripts.

image

Now it is time to test the script. You can see the sample Python scripts in various cells on the page. Click the play button at the top-right of any cell:

image

You should see the script getting executed, and the result will be displayed below in the form of a table. If there are errors, you will get proper error messages that you can use to debug the script.

image

Now it is your time for experimenting and more learning.

As a bonus, let us see how to visualize the same data using a bar chart. Click on the bar chart icon. If you do not see a chart auto-generated, click “Plot Options” and play around with the parameters.

image

image

Click “Apply”, and now you can see the bar chart updated in the Notebook.

image
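
If you prefer to shape the data in code before charting, a small aggregation like the sketch below also works; the column names State and LiteracyRate are assumptions about the sample CSV, so adjust them to match yours:

from pyspark.sql import functions as F

# Average the (assumed) metric per (assumed) grouping column before plotting
summary = df.groupBy("State").agg(F.avg("LiteracyRate").alias("avg_literacy_rate"))

# display() renders a table with plot options (bar chart, keys, values, etc.)
display(summary)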

Happy Learning!

References:

  1. https://docs.microsoft.com/en-us/azure/azure-databricks/what-is-azure-databricks
  2. https://databricks.com/
  3. http://spark.apache.org/