Aakash Gupta
@skylord999.bsky.social
Building Think Evolve, an award-winning AI lab focused on computer vision, NLP, and GenAI.

We are passionate about the application of AI for Change.

www.thinkevolveconsulting.com
Our recently published paper on LLM Safety
The study shows that ~90% of available models (open LLMs and paid ones) degrade under attack.
This opens a Pandora's box of unanswered questions: how safe are enterprise-grade apps with LLM integrations?

www.linkedin.com/feed/update/...
Sharing our recent whitepaper in collaboration with MLCommons. As large language models become embedded into various applications and agents, there is a likelihood of them becoming a security risk.
www.linkedin.com
November 13, 2025 at 6:04 AM
You create a WhatsApp group, and it even gains a lot of engagement. But then you struggle with moderation: users simply don't want to follow the group rules.

We vibe-coded a WhatsApp agent that moderates messages across groups.
October 27, 2025 at 6:57 AM
I have created a microsite where you can do the same. The best part is that it's completely local, and the open-source package can run on a CPU instance. So there's no need to run complex workloads on a GPU cloud; a voice can be cloned and replicated on your laptop.

youtu.be/XTSp0Q-90bA
October 21, 2025 at 10:59 AM
🚀 New video: Fine-tuning a <150MB LLM on 5.8M+ medical Q&A samples.
Runs on mobile or laptop — no GPU required!
Watch here 👉
youtu.be/GOQRKzrM3gA
Gemini 270m Fine tuning with MIRIAD dataset
YouTube video by Think Evolve Consultancy
youtu.be
August 29, 2025 at 11:42 AM
LLMs are language models. The latest version, ChatGPT-5, appears to experience a hallucination issue when asked a simple query.

The solution is to encourage the model to think more critically and lay out the logical steps behind its response.
August 9, 2025 at 4:49 AM
A query about Sam Altman's investments gets stalled on Gemini. What could be the reason?
May 30, 2025 at 2:44 AM
I often feel socially anxious while speaking, so I keep notes—but glancing down can feel awkward. Inspired by pro teleprompters, I built one for video calls. "Smooth Teleprompter" is a free Chrome extension we made with Replit, using our playful “vibe coding” approach to dev.
May 19, 2025 at 10:26 AM
Rendering Blade Runner 2049 final scene

And in that final breath,
A machine finds something almost human.
Snow drifts through the poisoned sky, silent and slow,
a quiet witness to grace where none was expected.
It falls on steel and sorrow, soft as forgiveness...
April 24, 2025 at 10:15 AM
I decided to experiment with setting up WhatsApp Model Context Protocol (MCP) on a Windows system. Though Windows isn't ideally supported by the Model Context Protocol, I wanted to create a comprehensive guide to help others navigate this process. (1/3)

www.youtube.com/watch?v=-B5x...
WhatsApp MCP Windows Installation
YouTube video by Think Evolve Consultancy
www.youtube.com
April 22, 2025 at 10:56 AM
Agentic Systems exhibit autonomy, decision-making, and adaptability in achieving goals. They can analyze data, take actions, and refine their approach based on feedback, often functioning with minimal human intervention.
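The analyze-act-refine loop described above can be sketched in a few lines. This is a toy, framework-free illustration: the "environment" is a number-guessing task, and all names are made up for the example.

```python
# Minimal sketch of an agentic loop: observe feedback, act, refine the approach.
# The binary-search "agent" adapts its next action based on the environment's
# response, with no human intervention inside the loop.

def run_agent(target, low=0, high=100, max_steps=20):
    """Pursue a goal by acting, observing feedback, and adapting."""
    for step in range(1, max_steps + 1):
        guess = (low + high) // 2          # decide on an action
        if guess == target:                # goal reached
            return guess, step
        if guess < target:                 # refine strategy from feedback
            low = guess + 1
        else:
            high = guess - 1
    return None, max_steps                 # gave up within the step budget

value, steps = run_agent(target=42)
print(f"reached {value} in {steps} steps")
```

Real agentic systems replace the guess with an LLM call and the feedback with tool or environment output, but the control flow is the same.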

#DeepSeek #PersonalAssistant #AIforall

youtu.be/JaZvkpgnXck
DeepSeek R1 Prevent Output in Chinese
YouTube video by Think Evolve Consultancy
youtu.be
February 27, 2025 at 8:39 AM
The seemingly "simple" problem statements, their clarity masking decades of complexity. Unraveled, layer by layer, with each genuine interaction.
February 26, 2025 at 4:31 PM
The Importance of Fine-Tuning Large Language Models (LLMs)

Fine-tuning is a crucial step that unlocks the full potential of pre-trained models by adapting them to specific tasks and domains. Here’s why it matters, along with some practical examples:

(1/n)
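The core idea (adapt a pre-trained model to a specific task) can be shown with a toy sketch: keep a frozen "pretrained" feature extractor and train only a small task head on new labeled data. Everything here is synthetic and illustrative, not a real training recipe.

```python
import math

# Toy fine-tuning sketch: the pretrained part is frozen; only the head learns.

def pretrained_features(x):
    # Stand-in for a frozen pretrained model: its parameters never change.
    return (x, x * x)

# Trainable task head: logistic regression on top of the frozen features.
w, b = [0.0, 0.0], 0.0
lr = 0.5
data = [(i / 10.0, 1.0 if i >= 6 else 0.0) for i in range(11)]  # tiny labeled set

for _ in range(2000):                      # fine-tune the head only
    for x, y in data:
        f = pretrained_features(x)
        z = w[0] * f[0] + w[1] * f[1] + b
        p = 1.0 / (1.0 + math.exp(-z))     # sigmoid
        g = p - y                          # gradient of the log loss w.r.t. z
        w[0] -= lr * g * f[0]
        w[1] -= lr * g * f[1]
        b -= lr * g

def predict(x):
    f = pretrained_features(x)
    return 1.0 / (1.0 + math.exp(-(w[0] * f[0] + w[1] * f[1] + b)))

accuracy = sum((predict(x) > 0.5) == (y > 0.5) for x, y in data) / len(data)
```

With real LLMs the "head" is usually the full model or a small adapter, but the principle is identical: reuse pre-trained representations, update only what the task needs.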
February 24, 2025 at 3:56 AM
Built a thread simulation in Replit in 30 minutes, despite no physics or coding background. The results are basic but eerily impressive. From the last iteration to the first:

Iteration 3
February 24, 2025 at 12:15 AM
Pre-Training vs. Fine-Tuning: Key Differences

Understanding the distinction between pre-training and fine-tuning is essential for leveraging large language models (LLMs) effectively. Here's a breakdown:

(1/n)
February 22, 2025 at 8:42 PM
🚀 Fine-Tuning Multimodal LLMs or MLLMs: A Deep Dive into Efficiency 🌐

Adapting large models to specific multimodal tasks is more efficient with Parameter-Efficient Fine-Tuning (PEFT) techniques like LoRA, QLoRA & LLM-Adapters. Here's how these approaches are driving innovation: (1/n)
February 21, 2025 at 5:42 PM
🚀 Fine-Tuning Multimodal Large Language Models (MLLMs) Made Efficient!

Adapting MLLMs to specific tasks is now faster & lighter with PEFT techniques like LoRA, QLoRA, and LLM-Adapters. These methods reduce resource needs while maintaining accuracy. Let’s dive in! ⬇️ (1/n)
February 20, 2025 at 5:56 PM
🔍 Advanced Fine-Tuning Techniques

🚀 DyLoRA: Dynamically adjusts training priorities for smarter updates.
🔗 LoRA-FA: Freezes specific matrix components for efficiency.
🧠 EAS: Reduces attention costs without losing accuracy.
🎨 MemVP: Uses visual prompts for better image-text fusion.
February 19, 2025 at 5:56 PM
✅ LoRA: Uses matrix factorization to reduce parameters, making fine-tuning efficient without losing performance.
✅ LLM-Adapters: Adds modular, task-specific tuners, improving flexibility & precision.

Both methods help train large models with fewer resources, making AI more accessible! 🌟
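The matrix-factorization idea behind LoRA can be shown without any framework: instead of updating the full weight matrix W (d_out x d_in), train two small factors B (d_out x r) and A (r x d_in) and add their product as a low-rank update. The numbers below are illustrative only; real LoRA initializes B to zero so the update starts from nothing.

```python
# LoRA sketch: the effective weight is W_eff = W + B @ A, with W frozen.

d_out, d_in, r = 8, 8, 2

full_params = d_out * d_in            # parameters if we fine-tuned W directly
lora_params = d_out * r + r * d_in    # parameters LoRA actually trains

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

W = [[0.0] * d_in for _ in range(d_out)]   # frozen pretrained weights
B = [[0.1] * r for _ in range(d_out)]      # trainable (real LoRA inits this to 0)
A = [[0.1] * d_in for _ in range(r)]       # trainable
delta = matmul(B, A)                       # low-rank update, rank <= r
W_eff = [[W[i][j] + delta[i][j] for j in range(d_in)] for i in range(d_out)]

print(full_params, "full params vs", lora_params, "LoRA params")
```

Even at this toy size the trainable-parameter count halves (32 vs 64); at LLM scale, with r in the tens against hidden sizes in the thousands, the savings are orders of magnitude.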
February 18, 2025 at 5:56 PM
While working with new machine learning techniques, first develop an intuition. We've always worked with diverse teams, often from non-computer-science backgrounds, which can make the learning curve steep. (1/n)
February 17, 2025 at 7:31 PM
Contrastive learning in construction: visual inspection (detect defects in structures by comparing normal vs. faulty images), anomaly detection (flag unusual worker movements via CCTV), and equipment tracking (identify vehicle types from GPS/telemetry data). It boosts automation & safety! 🚧
February 17, 2025 at 5:56 PM
Contrastive learning is transforming AI by teaching models to differentiate similar and dissimilar data points. It minimizes "contrastive loss" by pulling similar data closer and pushing dissimilar ones apart.

🧠 Pre-training: Models align image-text pairs using separate encoders.
📝 Captioning: La
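The pull-close/push-apart idea can be sketched as a classic pairwise contrastive loss on toy 2-D embeddings: similar pairs are penalized by their squared distance, dissimilar pairs are penalized only when they fall inside a margin. Purely illustrative; real systems use learned encoders and batched losses like InfoNCE.

```python
import math

# Pairwise contrastive loss:
#   similar pair:    loss = d^2                      (pull together)
#   dissimilar pair: loss = max(0, margin - d)^2     (push apart up to margin)

def contrastive_loss(a, b, similar, margin=1.0):
    d = math.dist(a, b)
    return d * d if similar else max(0.0, margin - d) ** 2

close_pair = contrastive_loss((0.0, 0.0), (0.1, 0.0), similar=True)   # small loss
far_pair   = contrastive_loss((0.0, 0.0), (2.0, 0.0), similar=False)  # zero: beyond margin
near_neg   = contrastive_loss((0.0, 0.0), (0.5, 0.0), similar=False)  # penalized

print(close_pair, far_pair, near_neg)
```

Minimizing this over many pairs is exactly what pulls matching image-text embeddings together and pushes mismatched ones apart in contrastive pre-training.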
February 13, 2025 at 5:56 AM
What are vision transformers? This is a test.
February 12, 2025 at 5:57 PM
Apache Superset is an open-source, modern data exploration and visualization platform. It can replace or augment proprietary business intelligence tools for many teams.
February 8, 2025 at 10:26 AM
Our initial research suggests that this year could see the highest number of wildfires recorded in any season in India.

Two primary factors contribute to this:
1. Extreme summer temperatures;
2. Widespread flowering of bamboo groves across India.
January 14, 2025 at 7:05 AM
A gentle touch is what most require
January 9, 2025 at 8:19 AM