The modern software architect is often bottlenecked not by a lack of knowledge, but by the sheer volume of context they must maintain. Scalability constraints, compliance guardrails, cloud service limits, and legacy entanglements create a cognitive burden that makes pure design difficult. Read my new article, "How AI is redefining the Application Architecture Design process": https://www.linkedin.com/pulse/how-ai-redefining-system-design-process-praveen-zohtc
-
Navigating the fast-evolving world of LLMs requires more than just innovation; it demands discipline. In my latest piece, "A Sanity Guide to Version Control for LLMs", I break down practical strategies to bring order, traceability, and confidence to AI development workflows. From managing model iterations to ensuring reproducibility, this guide offers a structured approach to keeping experimentation aligned with enterprise needs. Read the full article here: https://www.linkedin.com/pulse/sanity-guide-version-control-llms-praveen-nair-pmp-architect-irpgc/
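To make "reproducibility" concrete, here is a minimal sketch (not from the article itself) of one tactic in that spirit: pinning a model to an exact repository revision instead of a floating "latest". The model id and commit hash below are placeholders, and the Hugging Face transformers library is just one common way to do this.

    # Minimal sketch: pin a model to an exact revision so an experiment can be
    # reproduced later. MODEL_ID and REVISION are placeholders, not
    # recommendations from the article.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    MODEL_ID = "org/some-llm"       # hypothetical model id
    REVISION = "abc123def456"       # a specific repo commit, never "main"

    tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, revision=REVISION)
    model = AutoModelForCausalLM.from_pretrained(MODEL_ID, revision=REVISION)

    # Store the pinned coordinates with the experiment so the exact weights
    # can be pulled again when the run is revisited.
    run_metadata = {"model_id": MODEL_ID, "revision": REVISION}
    print(run_metadata)

Recording the revision next to prompts, evaluation data, and results is what turns an ad-hoc experiment into something a team can rerun months later.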
-
Read my latest article on agentic AI adoption in Platform Engineering on Medium: https://ninethsense1.medium.com/platform-engineering-2-0-the-rise-of-ai-native-devops-5b6c9510decc?sk=eadef43fd8cf604c21105ff055d9aead We spent years asking developers to care about infrastructure, in other words to follow the DevOps culture. AI-Native Platform Engineering now effectively says, "Actually, forget it. The AI handles the infrastructure now! You just focus on code."
-
This article is a write-up of my experience hosting Qwen 2.5, the 0.5B model, on a Raspberry Pi 2 using Llama.cpp. Qwen 2.5 0.5B is one of the smaller SLMs, with only 0.5B parameters, so a small development board like the Raspberry Pi can hold it. The RPi 2 Model B comes with a 900 MHz CPU and only 1 GB of memory. To be honest, setting up the project might take 1-2 hours, and prompt execution runs at only 1-2 tokens per second, so you need to be patient. Let us begin! Step 0: Pick up the Raspberry Pi 2 from the attic…
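For readers who prefer Python bindings over the llama.cpp CLI that the write-up uses, here is a hedged sketch of the same idea with llama-cpp-python; the GGUF file name is a placeholder for whichever quantized Qwen 2.5 0.5B build you download.

    # Hedged sketch, not the exact steps from the post (which uses the llama.cpp
    # CLI directly). Requires the llama-cpp-python package and a quantized
    # Qwen 2.5 0.5B GGUF file; the path below is a placeholder.
    from llama_cpp import Llama

    llm = Llama(
        model_path="qwen2.5-0.5b-instruct-q4_0.gguf",  # placeholder file name
        n_ctx=512,      # keep the context small; the Pi 2 has only 1 GB of RAM
        n_threads=4,    # the Pi 2 Model B has four cores at 900 MHz
    )

    # Expect roughly 1-2 tokens per second on this hardware, so keep prompts short.
    out = llm("Explain what a Raspberry Pi is in one sentence.", max_tokens=48)
    print(out["choices"][0]["text"])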
-
Over the weekend, I was trying to do a learning project using Microsoft's latest Fara-7B model, which is a "Computer Use Agent". Fara-7B is a small, efficient, open-weight AI model (7 billion parameters) designed to act as a Computer Use Agent (CUA). It performs tasks on a computer by visually understanding the screen (i.e., screenshots) and using mouse/keyboard actions (clicks, typing, scrolling) to automate web tasks such as booking travel, shopping, or filling out forms, offering speed, privacy (it runs locally on devices), and lower cost compared to larger models. Currently the model is still experimental, and…
-
Takeaways: Read the whitepaper here: https://www.linkedin.com/posts/ninethsense_the-deterministic-ai-agent-a-dual-brain-activity-7402527472975568896-xW1k?utm_source=share&utm_medium=member_desktop&rcm=ACoAAAEqPm8Bief48CxwsnTrzIyprD5rdLx_zjU
-
How should a technical leader respond when a customer asks to add AI to an existing application? The answer requires structure and clear thinking. 1. First, clarify the actual problem. Never assume that AI is the right solution. I would start by understanding the business objective. Many requests framed as AI needs turn out to be workflow issues, reporting gaps, or rule-based automation opportunities. Accurate problem definition prevents unnecessary complexity. 2. Evaluate data/app readiness. AI depends completely on data quality. Assess what data exists, how clean it is, and whether privacy or compliance concerns limit its use. If data foundations are…
-
The term stochastic parrot was introduced in a 2021 paper by Bender, Gebru, and colleagues (ref: Wikipedia). It highlights a fundamental limitation of large language models. These systems generate text by predicting the next token based on statistical patterns. They do not possess grounded understanding of the world. This can lead to convincing output that is incorrect, biased, or superficial. What the metaphor captures is simple: the probabilistic, statistically driven nature of these models. Parrot evokes an entity that mimics language without real understanding. The critique is not about style. It is about reliability. When a model draws from vast…
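As a small illustration of that "predicting the next token" point (not part of the original post), the sketch below prints the top next-token probabilities of a small causal LM; GPT-2 is used only because it is tiny and freely available.

    # Illustration only: inspect the next-token distribution of a small LM to
    # see the purely statistical ranking the stochastic-parrot metaphor refers to.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    inputs = tokenizer("The capital of France is", return_tensors="pt")
    with torch.no_grad():
        logits = model(**inputs).logits[0, -1]   # scores for the next token only
    probs = torch.softmax(logits, dim=-1)

    # The model has no grounded notion of "France"; it only ranks tokens by how
    # often they followed similar text in its training data.
    top = torch.topk(probs, k=5)
    for p, idx in zip(top.values, top.indices):
        print(f"{tokenizer.decode([int(idx)]):>12}  {p.item():.3f}")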
-
Large Language Models are probabilistic: they predict the next most likely word. When you ask them to "critique", you populate their context window with high-quality reasoning and negative constraints (e.g., what not to do). The final generation is then statistically more likely to follow that higher standard, because the logic is now part of the immediate conversation history. Try this: Draft: ask for your content as usual. "Write a cold email to a potential client about our new web design services." Critique: don't just ask for a better version; ask the AI to analyze its draft against specific criteria…
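Here is a minimal sketch of that draft-then-critique loop, using the OpenAI Python client as one example of a chat-style API; the model name is a placeholder, and any comparable client works the same way, since the point is simply to keep the draft and the critique criteria in the same conversation history.

    # Minimal sketch of the draft-then-critique pattern described above.
    # Assumes the openai package and an OPENAI_API_KEY in the environment;
    # the model name is a placeholder.
    from openai import OpenAI

    client = OpenAI()
    MODEL = "gpt-4o-mini"  # placeholder model name

    def ask(messages):
        resp = client.chat.completions.create(model=MODEL, messages=messages)
        return resp.choices[0].message.content

    # Step 1: draft as usual.
    history = [{"role": "user", "content":
                "Write a cold email to a potential client about our new web design services."}]
    draft = ask(history)
    history.append({"role": "assistant", "content": draft})

    # Step 2: critique against explicit criteria (including what not to do),
    # so the higher standard sits in the context window for the rewrite.
    history.append({"role": "user", "content":
                    "Critique your draft against these criteria: clear value proposition, "
                    "under 120 words, no buzzwords, exactly one call to action. "
                    "Then rewrite the email to satisfy all of them."})
    print(ask(history))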