Solutions are always tailored to specific problems; there is no one-size-fits-all approach. The right technique depends on factors such as the required level of customization, the available data sources, and system complexity. With that in mind, here are a few alternative approaches to consider when RAG is optional:
– Embedding-Based Search with Pretrained Models: This is a relatively easy approach to implement, but it doesn’t offer the same capabilities as RAG. It works well when simple retrieval is enough and there’s no need for complex reasoning (see the sketch after this list).
– Knowledge Graphs with AI Integration: Best for situations where structured reasoning and relationships are key. It requires manual effort and can be tricky to integrate, but it offers powerful semantic search capabilities and supports reasoning tasks.
– Fine-Tuned Language Models: Ideal for stable, well-defined datasets where real-time data isn’t crucial. Because the knowledge is baked into the model, responses are generated without a retrieval step. They perform well when the training data is comprehensive but may struggle with queries outside it.
– Hybrid Models: A mix of retrieval and in-context learning. They’re a bit more complex to implement, but they deliver high accuracy and flexibility because they combine different techniques. Use this when you need both accuracy and rich content.
– Multi-Modal Models: These models handle different types of data (e.g., images and text) and provide combined insights. For example, they can retrieve images from documents and analyze them. However, they require solid infrastructure, which can get expensive.
– Rule-Based Systems: These expert systems rely on predefined rules to generate responses. They’re great for regulated industries like finance & legal, as they offer transparency and auditability. However, they’re typically not scalable and may not handle unstructured data effectively.
– End-to-End Neural Networks (for Q&A): These models are trained specifically for question-answering tasks. They perform well for defined tasks like Q&A and give concise answers without the need for complex pipelines. But they require large, annotated datasets and may underperform if there isn’t enough related data.
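To make the first option concrete, below is a minimal sketch of the retrieval step behind embedding-based search. It assumes the documents and the query have already been converted to vectors by a pretrained embedding model (that step is omitted here); the names and tiny sample vectors are purely illustrative.

using System;
using System.Collections.Generic;
using System.Linq;

class EmbeddingSearchDemo
{
    // Cosine similarity between two equal-length vectors
    static double CosineSimilarity(double[] a, double[] b)
    {
        double dot = 0, normA = 0, normB = 0;
        for (int i = 0; i < a.Length; i++)
        {
            dot += a[i] * b[i];
            normA += a[i] * a[i];
            normB += b[i] * b[i];
        }
        return dot / (Math.Sqrt(normA) * Math.Sqrt(normB));
    }

    static void Main()
    {
        // Illustrative, pre-computed embeddings (a real model produces much longer vectors)
        var documents = new Dictionary<string, double[]>
        {
            ["refund policy"]    = new[] { 0.9, 0.1, 0.0 },
            ["shipping times"]   = new[] { 0.2, 0.8, 0.1 },
            ["account deletion"] = new[] { 0.1, 0.2, 0.9 }
        };
        double[] queryEmbedding = { 0.85, 0.15, 0.05 }; // e.g. "how do I get my money back?"

        // Rank documents by similarity to the query
        var ranked = documents
            .Select(d => new { d.Key, Score = CosineSimilarity(queryEmbedding, d.Value) })
            .OrderByDescending(x => x.Score);

        foreach (var hit in ranked)
            Console.WriteLine($"{hit.Key}: {hit.Score:F3}");
    }
}

In a real system the vectors would come from an embedding API or a local model, and they would typically sit in a vector index rather than an in-memory dictionary.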
Since this field is still evolving, it’s important to stay on the lookout for new or improved techniques that fit your specific requirements.
How to attach a file to a HubSpot contact through APIs, in Power Automate
In this article, the issues and solutions I am trying to address are:
- How to use HubSpot APIs
- How to upload a file to HubSpot
- How to create an association between a Contact and the uploaded file
- How to achieve all of these using Power Automate
At a high level, we have to make three API calls (one fewer if you already have the Contact ID): upload the file to the HubSpot Library and get its File ID, fetch the Contact ID using the contact’s email address, and create a note that attaches the file and associates it with the contact.
High level Power Automate flow
Below is the high-level flow I have created to demonstrate this:
Prerequisites
- Basic knowledge of HubSpot. We will be interacting with CRM Contacts and the Library
- Good knowledge of Power Automate
- A HubSpot account – Sandbox / Developer
- Create a Private App in HubSpot
- Get an Access Token
- Create a sample contact in HubSpot (for testing)
- Create a folder in the HubSpot Library (optional)
Detailed Flow
In this example, I am using an HTTP Request trigger, which starts when we POST a file and an email ID (I used the email to represent a customer/contact in HubSpot). Before calling the HubSpot APIs, I do some initial parsing of the POSTed content, as described below.
Set the stage
For those struggling to get a few items out of the POSTed content, find the details below for your reference.
Using Postman to trigger a Power Automate HTTP flow
A. Get File content
triggerOutputs()['body']['$multipart'][0]['body']
B. Get filename
Honestly, I am not sure if there is a straightforward method available. This is my way of parsing the filename out of the Content-Disposition header of the file part (the one that contains filename="..."):
// Input sample (headers of the file part):
// "headers": {
//   "Content-Disposition": "form-data; name=\"file\"; filename=\"<uploaded file name>\"",
//   "Content-Length": "..."
// }
// Power Automate expression
trim(concat(split(split(triggerOutputs()['body']['$multipart'][0]['headers']['Content-Disposition'], 'filename="')[1], '"')[0]))
C. Get Email
triggerOutputs()['body']['$multipart'][1]['body']['$content']
HubSpot API Calls
Since fetching the Contact ID and uploading the file (to get its File ID) are independent tasks, I am executing them in parallel.
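All of these calls must be authenticated with the private app access token from the prerequisites. In each HTTP action, that means adding a request header along the lines below (the placeholder is your own token):
Authorization: Bearer <your private app access token>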
A. Upload the file and get the File ID
I have used only the minimum number of parameters to avoid confusion. I am uploading the file to a folder in the Library; the upload endpoint (at the time of writing) is POST https://api.hubapi.com/files/v3/files, but please confirm against the HubSpot Files API documentation.
Below is the JSON body for the POST request. I have seen many people struggle to get this right.
{
  "$content-type": "multipart/form-data",
  "$multipart": [
    {
      "headers": {
        "Content-Disposition": "form-data; name=\"folderPath\""
      },
      "body": "/Sample Folder"
    },
    {
      "headers": {
        "Content-Disposition": "form-data; name=\"file\"; filename=\"@{variables('Filename')}\""
      },
      "body": @{variables('file')}
    },
    {
      "headers": {
        "Content-Disposition": "form-data; name=\"options\""
      },
      "body": {
        "access": "PRIVATE"
      }
    }
  ]
}
Upon successful upload, you will receive a JSON response containing the File ID.
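To pull that File ID out of the response in Power Automate, an expression along these lines can be used. The action name 'Upload File' is only an example (spaces become underscores when referencing an action), and you should verify against your actual response that the ID is exposed in an id field:
body('Upload_File')?['id']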
B. Get the Contact ID using the email address as a parameter
This is a straightforward task. Just call the API using GET. Example: https://api.hubapi.com/crm/v3/objects/contacts/john@mathew.com?idProperty=email
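The Contact ID can be read from this response in the same way, for example (again, the action name is hypothetical):
body('Get_Contact_by_Email')?['id']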
C. Create Contact-File association
Once you have the File ID and the Contact ID, you can POST a create-note API call to https://api.hubapi.com/crm/v3/objects/notes
Below is the JSON body you have to use:
{
  "properties": {
    "hs_timestamp": "2024-07-30T10:30:00.000Z",
    "hs_attachment_ids": "<File ID>"
  },
  "associations": [
    {
      "to": {
        "id": "<Contact ID>"
      },
      "types": [
        {
          "associationCategory": "HUBSPOT_DEFINED",
          "associationTypeId": 10
        }
      ]
    }
  ]
}
Note: associationTypeId is the magic number that tells the API which association to create; 10 is the HubSpot-defined type for associating a note (and its attachment) with a contact. Please check the documentation for other association types.
Below is the Power Automate action view:
Verify that the flow has worked
Go to HubSpot, select the contact, and you should be able to see the file attached.
Additionally, if you go to Library -> Files -> Sample Folder, you can see the same file there.
Certified: AI for Product Management
I’m happy to share that I’ve obtained a new certification: AI for Product Management from Pendo.io!
Verify: https://www.credly.com/badges/10e7acce-1f49-49f4-b348-33e3568f7c29/public_url
Dotnet 9 preview-1 JsonSerializerOptions features
Slide deck and video recording for the Cloud Security Session
Below is the slide deck used for the session “Securing the Skies: Navigating Cloud Security Challenges and Beyond” for FDPPI
Microsoft Learn Learning path completed – Develop Generative AI solutions with Azure OpenAI Service
Vlog: Create an Azure OpenAI instance
This is a screen recording demonstrating how to create a basic Azure OpenAI instance in the Azure Portal.
Google Cloud Generative AI course
Video: How to SFTP to Azure Blob Storage
Subscribe to my channel for more related videos https://www.youtube.com/LearnNow1
How to read Azure KeyVault secrets using Managed Identity in .NET Framework 4.8 C#
Using Managed Identity to access Azure resources is considered a best practice, as it reduces the overhead of keeping additional credentials (tokens/passwords) in config files. This article is about accessing Azure Key Vault using Managed Identity. I am using .NET Framework 4.8 for this tutorial.
Step 1 – Create KeyVault and secrets
First, go to the Azure Portal and create the secret values needed for testing. I will go with a "testkey" and a dummy value.
(I am assuming you know the basics of the Azure Portal and how to create an Azure resource such as a Key Vault.)
Also, please take note of the "Vault URI" shown in the "Overview" section. We will need it in the C# code.
Step 2 – Create Sample .NET App
Next, open Visual Studio (I have used 2022) and start a new project. I have used a .NET Framework 4.8 Console Application.
Step 3 – Install necessary NuGet packages
We require two major packages for this project. Install these:
1. Azure.Identity
2. Azure.Security.KeyVault.Secrets
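If you use the Package Manager Console, these can be installed with the following commands (the dotnet CLI equivalents work as well):
Install-Package Azure.Identity
Install-Package Azure.Security.KeyVault.Secrets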
Step 4 – Coding!
This is the sample code I have used. Make sure to replace the Key Vault URL with your own.
using System;
using Azure.Identity;
using Azure.Security.KeyVault.Secrets;

namespace kvtest
{
    internal class Program
    {
        static void Main(string[] args)
        {
            // Replace with your own Key Vault URI (from the "Overview" section)
            SecretClient secretClient = new SecretClient(
                new Uri("https://your-keyvault.vault.azure.net/"),
                new DefaultAzureCredential());

            // Fetch the secret named "testkey" and print its value
            var secret = secretClient.GetSecret("testkey");
            Console.WriteLine(secret.Value.Value);
            Console.ReadKey();
        }
    }
}
Notice the DefaultAzureCredential(), which does the trick for our Managed Identity without putting any plain credentials in the code.
Step 5 – Log in to Azure
If you run your app at this stage, you will end up with an error like the one below. This is because there is currently no authenticated connection between your machine and Azure. The application will work if you host it in Azure, in a resource such as an App Service, but you cannot yet run it on your developer laptop/machine if you want to debug.
To make your app debuggable on your machine, you have to let Visual Studio log in to Azure.
Go to Tools –> Options –> Azure Service Authentication
and log in to your account there.
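As an aside, DefaultAzureCredential can also pick up credentials from the Azure CLI, so signing in from a terminal works as well if you have the CLI installed:
az login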
Step 6 – Execute!
Now we are all set to build and run the app. Just hit F5!