Showing posts with label Big Tech.

Thursday, 22 August 2024

Platform rules

 I have just had a major realisation.

If you want your posts to reach more people:

Things that don’t matter:

Content

Presentation

Time of Posting

Length of post

Use of hashtags

 

Things that do matter:

The pleasure of the platform algo.

 

The posts that went viral were suggested by the platform to readers.

The posts that were ignored were NOT suggested by the platform to readers.

 

That’s it. That’s the epiphany.

******* 

What this means in plain speak is that since we took our conversations and connections online, we have created a layer that can control us: the platform. 

If you are on the old Blogger platform, you will see a list of the blogs you follow and their posts, in reverse chronological order. That's it. There is no filtering, no selection, no recommended posts. I am grateful for that because Blogger is an old interface and no one at Google wants to touch it. (Thank God!) 

But there isn't a single other platform that brings the same honesty to the table. All other platforms - LinkedIn, Facebook, Twitter, etc. - create an opaque, dense layer of control that determines whose posts we will see and whose we will not. This algo also controls which ads we will be subjected to, and how many. 
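The difference can be sketched in a few lines of Python. This is a toy model, not any platform's actual algorithm - the authors, scores, and threshold are all made up for illustration:

```python
from datetime import datetime, timezone

posts = [
    {"author": "alice", "posted": datetime(2024, 8, 22, 9, 0, tzinfo=timezone.utc), "engagement": 3},
    {"author": "bob", "posted": datetime(2024, 8, 21, 18, 30, tzinfo=timezone.utc), "engagement": 250},
    {"author": "carol", "posted": datetime(2024, 8, 22, 7, 15, tzinfo=timezone.utc), "engagement": 40},
]

# Blogger-style feed: newest first, every followed blog shown.
chronological = sorted(posts, key=lambda p: p["posted"], reverse=True)

# Algorithmic feed: ranked by an opaque score, and anything below
# a cutoff is simply never shown to the reader.
ranked = sorted(posts, key=lambda p: p["engagement"], reverse=True)
visible = [p for p in ranked if p["engagement"] >= 10]

print([p["author"] for p in chronological])  # ['alice', 'carol', 'bob']
print([p["author"] for p in visible])        # ['bob', 'carol'] - alice is never surfaced
```

In the chronological feed, every post appears and the reader decides what matters. In the ranked feed, alice's post exists but no reader ever sees it - which is exactly the viral-vs-ignored split described above.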


Thursday, 23 November 2023

Yes, Sam Altman did breach trust. Worse, he made it hard for the world to collaborate.

https://www.msn.com/en-us/money/companies/openai-microsoft-hit-with-new-author-copyright-lawsuit-over-ai-training/ar-AA1kmo8Z

When Sam Altman was fired, some media houses reported that the Board felt that, with the OpenAI Open Day, Sam had taken the organisation in a commercial direction that did not fit the original goals of OpenAI. 


OpenAI was built as a non-profit, to create AI for ALL. All being the operative word here. Equitable access to AI capabilities is vital to an equitable world. Whether we like it or not, AI is the competitive advantage of the future. 

But the bigger issue is this: OpenAI's models were trained on possibly yottabytes of data from ordinary citizens and creators of content, on the assumption that the LLM would be used for AI for All. Just as Wikipedia was created by millions of individual contributors giving their time and knowledge for free, on the premise that it was a free, open-to-all public encyclopedia. 

The Microsoft investment was the first dent in the "for ALL". In one stroke, a single company stood to gain the MOST from the LLM - from the contribution of millions of individuals whose work was consumed by OpenAI to create DALL-E 2 and ChatGPT. 

The story is an action replay of the Wikipedia story. Google donated significantly to Wikipedia and, magically, Wiki results started appearing on top of Google search results. Searchers found the best information on top, and Wiki got a lot more hits (and donations, of course). It was a win-win for both - but not for the creators who gave hours to create Wiki. They were never compensated. 

It is the same for OpenAI. It used content from literally millions of creators to make Picasso-like paintings, write in the style of so-and-so author, and produce specific types of content - college essays, research papers, opinion pieces - based on the essays, papers, and opinions of people who did not, and never will, see a dime. 

The trend of a non-profit creating something big and universal, only to sell it to the highest bidder, is not just morally wrong. It is also a breach of trust. The creators who generously donated their time and knowledge, as they did prompt engineering and provided feedback to ChatGPT, were contributing to AI for ALL. They were not contributing to Bing's Image Creator or Bing Chat.

Secondly, and more importantly, as a creator, why would I trust the next "Good for All"? 

I know I wouldn't, personally speaking. 


And to me, there is something very wrong with both these things. 

Which is why I love this news: 

https://www.reuters.com/legal/openai-microsoft-hit-with-new-author-copyright-lawsuit-over-ai-training-2023-11-21/


But here is something I cannot understand: 

Why would a non-profit need to monetise?

Someone was funding OpenAI from 2015 to the time of the MS investment. Why was that model not sustainable? 

In unrelated news: 

Bard can now analyse YouTube videos and give you really intelligent answers. 

https://www.msn.com/en-in/money/topstories/google-bard-ai-can-now-watch-youtube-videos-and-answer-your-questions-here-s-how-to-use-new-feature/ar-AA1koDdn

But the millions of YouTube creators who are helping Google monopolise the search market even more will never see a dime of that multi-billion-dollar revenue. 

#InDefenceOfTheCommonMan


Thursday, 15 September 2022

How do we deal with Identity and Access Management for moonlighting employees?

Whether we like it or not, moonlighting is here to stay. The causes of the moonlighting effect are easy to understand: 

A. We now know that when the going gets tough, organisations can and do fire employees with no warning whatsoever. 

B. When the profits are good, the executives get the fattest bonus cheques, but when there are losses, employees get the pink slips, not the managers responsible for the P&L. 

Therefore, we arrive at the following axioms: 

A. Loyalty as a concept does not apply to the employer-employee relationship. It is a work-for-pay contract. 

B. An employee cannot rely on their employer for financial stability. They have to ensure it themselves. 

C. For a mid-level employee, the only resource they can deploy to earn a secondary income is their own skill. 


So, moonlighting is a legitimate response to conditions created by myopic employers. Because it makes common sense, it is here to stay. 


How do organisations prepare for moonlighting? 

Contrary to what we might think, moonlighting is not that dramatic a phenomenon. Our part-time employees and freelancers have always been doing this - offering their skills and expertise for a limited time per day and getting paid for it. 

So, on the HR side, we have finally been able to create a policy guideline that will allow organisations to offer moonlighting as a legit business and employment practice (happy to share that if you'd like). 

But, what do we do about Identity and Access Management? 

And this is where it gets really tricky. A moonlighting employee presents a potential security incident and data-leakage risk. 

How do we, as organisations, proactively create policies that will allow employees to use technology to remain productive, while managing the organisation's risk? 
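One proactive approach is conditional access: gate sensitive resources on device posture and the contracted working window, so a moonlighting employee's personal devices and off-hours sessions never touch them. A minimal sketch - the resource names, working hours, and the policy itself are all hypothetical, not a recommendation from any standard:

```python
from dataclasses import dataclass
from datetime import time

@dataclass
class AccessRequest:
    employee_id: str
    resource: str          # e.g. "source-repo", "payroll"
    device_managed: bool   # is this a corporate-managed device?
    request_time: time

# Hypothetical policy: sensitive resources only from managed devices,
# and only inside the contracted working window.
SENSITIVE = {"source-repo", "payroll"}
WORK_START, WORK_END = time(9, 0), time(18, 0)

def allow(req: AccessRequest) -> bool:
    if req.resource in SENSITIVE:
        if not req.device_managed:
            return False
        if not (WORK_START <= req.request_time <= WORK_END):
            return False
    return True

print(allow(AccessRequest("e42", "source-repo", True, time(11, 0))))   # True
print(allow(AccessRequest("e42", "source-repo", False, time(11, 0))))  # False: personal device
print(allow(AccessRequest("e42", "source-repo", True, time(22, 30))))  # False: off hours
```

The design choice here is that the policy never asks *what* the employee does off hours - it only ensures that the organisation's sensitive assets are unreachable outside the paid-for window, which keeps the moonlighting question out of IAM entirely.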

How are you doing this? 

Friday, 12 November 2021

Afternoon Thoughts

 

Infosec is to Fintech what HSE is to Oil and Gas.


#AfternoonThoughts 

 

Just like Oil and Gas depends on safety to keep its engines running, Fintech depends on info security. One incident, and everything comes tumbling down and grinds to a halt. A coverup is a short-term solution and perhaps the instinctive reaction, but as the oil and gas industry will tell you, it's a poor strategy and, what's worse, doesn't work. 

The only good thing to do is to approach infosec the way the Oil and Gas industry approaches HSE: have transparent standards, invest in a clear security policy, and ensure that every member of the team is educated and compliant. Report transparently and periodically. Most importantly, learn from EVERY mistake. Each one of those recovered mistakes is going to save you from a larger disaster - and make no mistake, there will be larger incidents. Oh, and don't forget the Incident Management System. 

 

Wednesday, 3 November 2021

Secrets of Pixabay

As most readers might know, pixabay.com, pexels.com, unsplash.com, etc. are websites where photographers share their work. This work is available for free commercial reuse without attribution. 

I am a regular contributor to pixabay.com. While all my images were selected for publication, I was rather sad to note that they had 0 views and 0 downloads. I put that down to the poor quality of my photography. 

2 weeks ago, I was looking for a free image of mehndi or henna. There was NOTHING available on any of the 3 free websites, so I had to use a Wikimedia image. 

However, last night, on a lark, I decided to search for all images related to a label that is not very frequent and applies to a few of my submitted images. Then I scrolled to the end and realised, much to my surprise, that though my images were tagged with those keywords, they did not come up in the search results for those keywords, even at the very end. 

This means that Pixabay keeps some published images away from users even if they are approved and published. 

Pixabay calls this the differentiation between featured (in search results) and just published. There is no count of the number of images that are published but not featured, but as a user, it appears strange that a published image should not be searchable. What's the point, then? 

How does one deal with this? 

I found that if you follow an individual contributor, that might help. Look for images and when you see an interesting one, follow that contributor. That way, you can see their images in your network. 
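The behaviour is consistent with a simple gate between "published" and "featured". A toy model of what the search might be doing - the `featured` flag and its mechanics are my guess, not Pixabay's documented behaviour:

```python
# Each published image carries an (assumed) "featured" flag; only
# featured images are returned by keyword search.
images = [
    {"id": 1, "tags": {"henna", "mehndi"}, "featured": True},
    {"id": 2, "tags": {"henna", "hands"}, "featured": False},  # published, but never surfaces
    {"id": 3, "tags": {"beach"}, "featured": True},
]

def search(keyword: str) -> list[int]:
    return [img["id"] for img in images
            if keyword in img["tags"] and img["featured"]]

print(search("henna"))  # [1] - image 2 is published yet invisible to searchers
```

If something like this is in play, following a contributor is the only path that bypasses the gate, because the follower feed shows everything they publish rather than only what search features.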



Thursday, 1 April 2021

The illusion of choice

 Of course you can decide that your MS Word Normal style should be Arial 12. But that doesn't mean a thing. MS has decided that Calibri 11 is the only Normal, and that is what you will get.


You can still define your Normal and keep clicking on it. You have that option.

This is called "The illusion of choice".

Any other illusion of choice that comes to mind?

Monday, 11 January 2021

Whatsapp says no change, so why is everyone leaving?

 Remember this post: The Second Global Colonisation | LinkedIn

(It's also available on this blog now) 

That time in history when the Resident moved from being Advisor to de facto ruler? 

We are at that moment in history now. The coloniser is saying, slowly and surely, that they rule. 

Let's take a look at a few data points: 

A. If you have a Samsung phone, on the latest OS, check the number of apps that come activated by default. You cannot even disable the vast majority of them. From your physical activity to your passwords, from your digital health to your voice controls, Samsung wants it all. 

B. Go to Apps > Google Play Store > Permissions 

You will notice that there is a permission there called Physical Activity. You cannot disable it, even if you want to. Some years ago, when a researcher did a man-in-the-middle attack to prove that Google knows when you are getting out of your car, there was much red-faced embarrassment. Now, there is no embarrassment. Google is telling you what it will do, and it will do it nonetheless. 

C. Whatsapp 

In the new privacy declaration, there is a clarity: we already share the data; now we don't have to be apologetic about it, and governments cannot be a pain to us any more by taking us to court for violating privacy. No longer will the EU, America, or Canada be able to summon the heads of Big Tech and impose fines for privacy violations. 


 

D. After the Capitol Hill riots, the account that was suspended was Donald Trump's, not the rioters'. Both Twitter and FB absolutely knew who the rioters were, but they suspended the account of Donald Trump. In Mughal-era terms, that was the Resident cutting off the emperor's access to his subjects. They killed off Trump's ability to reach millions of people directly. 

In no court of law is a non-doer guilty of an act, even if the act is committed in his name. In suspending his accounts, both FB and Twitter made a clear and bold political statement: We decide who gets to see your content; we decide whether to hold you responsible. There is no court, no judge, no jury, and no rule of law. We are the law. 

Conclusion

Big Tech is declaring that they are Big Brother and there is no concept of privacy. No one, not even elected governments, can stop them - now, or ever. 

The problem is not the policy itself. It is the confidence to say: you have no right to privacy, and we are not answerable to anyone. Even if the Govt of India tries to pass a citizen privacy law now, it can't touch the FB "group", because all citizens of India have given their 'voluntary consent' to such use of their data.

This is the new Silva mind control method. And Big Tech is proclaiming 2 things: 

1. We have your data and we can use it to decide what you see, how often you see it, and what you cannot see. If that leads to greater radicalisation and the resultant loneliness, so much the better. 

2. We are above the law. Even if our algos feed hate to an entire country, NO ONE can hold us responsible. 

If we think that political power is not on the agenda of a business house, we're deluding ourselves. Political power is straight vertical integration for a business.

Users moving away en masse is the population saying, "Not yet." In terms of data exchange, they were doing all this and more, as the Congressional hearing in 2020 proved, and the Singapore hearing before that, and the EU hearings. But at those times, a govt could summon them and ask questions about citizen privacy.

At this time, sure, you can leave. But imagine, at some point in the future, trying to leave Facebook and your personal photos accidentally making it to the Dark Web, because FB will publicly release any information that is not protected or expressly taken down by the user for more than 6 months. From there, how it got to the Dark Web is not Facebook's problem. It's in their Terms of Use. :)