Book update: I'm getting close to completing the first draft. (Yay!) A couple of stubborn chapters on trust (as a component in storytelling) remain. Needs some untangling of the multiple threads. Fun problem to work on though! By the way, we had a lovely call yesterday with some of the newsletter readers on the process I follow etc. We are having another call today (1-Feb) afternoon between 3-4 pm (for those who preferred Sat instead of Friday). In case you are interested, please drop me an email and I'll add you to the invite. And now, on to the newsletter. Welcome to the hundred and first edition of '3-2-1 by Story Rules'. A newsletter recommending good examples of storytelling across:
3 tweets
2 articles, and
1 long-form content piece

Let's dive in.

𝕏 3 Tweets of the week

If you haven't been living under a rock, you might have been inundated with news about DeepSeek, the revolutionary new Chinese Gen-AI tool, which has taken the world by storm and knocked off $1Tn in global market capitalisation (some of it recovered, I believe). This tweet, by senior economist Olivier Blanchard, gives a striking frame to the development: The largest positive total factor productivity (TFP) shock in world history. (Essentially saying that the steep drop in computing power needed will enable more tasks to be done with fewer resources.) In response to comments, he shares a more nuanced version of the statement in his reply tweet. How do you frame the news of a massive productivity improvement in AI? To assuage jittery Microsoft investors worried about pricing implications for their AI products, Satya Nadella framed it as a positive development - given the massive potential demand increase due to lower compute costs (and prices). Sajith Pai then frames the framing (!) used by Nadella using the concept of two types of stories:
It's an interesting tweet. Do click the link and read the text in the attached images. After two intense tweets, I thought we could enjoy this lovely picture and intriguing description :)

📄 2 Articles of the week

a. 'OpenAI Doesn’t Want AI Cheaters' by Matt Levine

DeepSeek has taken the world by storm. For explainers about the tech implications, you can read this post by Ben Thompson, this thread by Yishan (where he calls this the Google moment of AI), or this one by Andrew Ng on the implications. But I wanted to showcase Matt Levine's take on this development. He mentions that some folks in the US are concerned about how DeepSeek seems to have distilled (fancy word for copied) stuff from OpenAI: David Sacks, President Donald Trump’s artificial intelligence czar, said Tuesday there’s “substantial evidence” that DeepSeek leaned on the output of OpenAI’s models to help develop its own technology. In an interview with Fox News, Sacks described a technique called distillation whereby one AI model uses the outputs of another for training purposes to develop similar capabilities. Levine then shares some context of how OpenAI was hit with similar allegations - of having trained its models on publishers' and writers' content without permission: And so I have a lot of sympathy for publishers and writers who say: “Look, our words are on the internet. It seems that OpenAI and other artificial intelligence companies trained their large language models on a corpus of text that includes our words. Effectively what they are doing is remixing our content for their own commercial purposes. This has the potential to destroy our livelihood — if people go to an AI chatbot for information, instead of to a newspaper or a financial columnist — and has also made the AI people extremely wealthy. 
They should have to pay us!” The counter by OpenAI was that this is what even human intelligence does - takes several inputs from external sources and creates something new with those inputs: My writing style, similarly, is influenced (consciously and unconsciously) by other writing that I read. The way I write is that there is a network of neurons in my brain that takes inputs (words I read, etc.) and produces outputs (words I type). I do not pay royalties to all the people whose work I read, or ask them for permission to think about their ideas. That’s just the way discourse works. And so I have a lot of sympathy for AI companies who say: “Look, we have trained a sort of artificial brain to read all the words in the world, think about them, and then produce its own writing in response to questions. Our artificial brain has ideas that it expresses in language, and those ideas and that language are influenced by the words that it has read, but that’s true of all writing. If we were directly plagiarizing other people’s work, that would be bad, but we’re not. We are just influenced by their work, and you can’t sue us for that.” Back to DeepSeek. Levine can't contain his mirth when he comments on how OpenAI seems to be upset with DeepSeek for possibly having done something that *cough* it may not be entirely innocent of either: Ahahahahaha I can’t believe that they (DeepSeek) used OpenAI’s work to train their AI model! How rude! Bonus: Check out this video by the hilarious Danish Sait - on how Natural Intelligence can sometimes be better than AI!

b. 'The Origins of Wokeness' by Paul Graham

This piece by Paul Graham offers a fascinating theory on the origins of wokeness. Now, I know this is a controversial topic. But I would urge you to adopt a curious lens and read the piece for the point of view of a respected tech entrepreneur and thinker. 
Graham starts by giving some historical context to the idea of 'being a prig': There's a certain kind of person who's attracted to a shallow, exacting kind of moral purity, and who demonstrates his purity by attacking anyone who breaks the rules. Every society has these people. All that changes is the rules they enforce. In Victorian England it was Christian virtue. In Stalin's Russia it was orthodox Marxism-Leninism. For the woke, it's social justice. Graham defines wokeness and explains that he has an issue with the performative nature of it, not the ends of social justice: I've often been asked to define both wokeness and political correctness by people who think they're meaningless labels, so I will. They both have the same definition:
'An aggressively performative focus on social justice.'
In other words, it's people being prigs about social justice. And that's the real problem — the performativeness, not the social justice.
Graham asks: why did the woke movement begin in universities, and specifically in the humanities and social sciences (not in the hard sciences), and that too in the 1980s? His explanatory theory is fascinating: A successful theory of the origin of political correctness has to be able to explain why it didn't happen earlier. Why didn't it happen during the protest movements of the 1960s, for example? They were concerned with much the same issues.
The reason the student protests of the 1960s didn't lead to political correctness was precisely that — they were student movements. They didn't have any real power. The students may have been talking a lot about women's liberation and black power, but it was not what they were being taught in their classes. Not yet.
But in the early 1970s the student protestors of the 1960s began to finish their dissertations and get hired as professors. At first they were neither powerful nor numerous. But as more of their peers joined them and the previous generation of professors started to retire, they gradually became both.
The reason political correctness began in the humanities and social sciences was that these fields offered more scope for the injection of politics. A 1960s radical who got a job as a physics professor could still attend protests, but his political beliefs wouldn't affect his work. Whereas research in sociology and modern literature can be made as political as you like.
In the rest of the piece, he delves into other questions such as: Why the social justice topics and not others? Why did the movement have a higher share of women? Why did it become so strict on those it perceived as the oppressors? And what happened in the 2010s when the movement seemed unstoppable? Graham concludes that heresy is a strong tool and should only be used in the rarest of circumstances. And the burden of proof (that any speech is harmful) should be on those who want to ban something, not those making it: We should have a conscious bias against defining new forms of heresy. Whenever anyone tries to ban saying something that we'd previously been able to say, our initial assumption should be that they're wrong. Only our initial assumption of course. If they can prove we should stop saying it, then we should. But the burden of proof is on them. In liberal democracies, people trying to prevent something from being said will usually claim they're not merely engaging in censorship, but trying to prevent some form of "harm". And maybe they're right. But once again, the burden of proof is on them. It's not enough to claim harm; they have to prove it.

🎧 1 long-form listen of the week

a. 'Marc Andreessen: It’s Morning Again In America' on the Uncommon Knowledge podcast

This conversation is along similar lines to the Paul Graham piece - but the focus is more on the tech and business implications. In the episode, Peter Robinson (the interviewer) speaks with Marc Andreessen (co-founder of VC firm a16z) about what made Big Tech leave the Democratic party camp and move almost en masse to the Republican side. Robinson sets context for the listeners by sharing that Andreessen is now formally in the political sphere: Since the election last November, Mr. Andreessen has been spending only half his time here in Silicon Valley, spending the other half at Mar-a-Lago. Where he has been advising Donald Trump and his friends, Elon Musk and Vivek Ramaswamy. 
This is the core question asked by the episode: Peter Robinson: The presidential candidates Marc Andreessen has supported... Bill Clinton, Al Gore, John Kerry, Barack Obama, Hillary Clinton, and Donald Trump.
...
Peter Robinson: Okay, so from a very loyal Democrat to a MAGA Republican, how come? What happened?
What happened was that the US government went from pro-Big Tech to completely anti-tech (especially in the last four years). Andreessen traces the beginning of the trend to 2013 and echoes a point similar to Paul Graham's - that it was college kids, entering the workforce with a clear agenda, who turned these companies anti-tech: Basically, it was in 2013, is really when the employees started to activate. And so you started to get, basically, this employee activist movement. And that was a big thing. And of course, in retrospect, I know what happened, which is, you had a generation of radicalized college kids. So basically, if you back up further, it was basically some combination of 9/11, the Patriot Act and then the Iraq War and then the global financial crisis and then Occupy Wall Street.
Whatever happened during that sort of ten year period, radicalized the college kids before they moved into industry. But then they showed up and they were starting to populate these companies in 2012, 2013, 2014. And then they then activated a lot of their older contemporaries in those companies, the older cohort members who wanted to be cool and with it.
One of the implications was speech censorship, especially on social media platforms: And the very specific thing that happened with the social media companies is that, that was the beginning of the whole thing with hate speech and misinformation. So that was the whole thing, because up until then, the Internet is a wild west. We love it because it's a wild west. It makes all these things possible. It's fantastic, it's great, it's working. It's pro-democracy, it's pro-free speech. The Obama State Department had this giant push to expand free speech, right, all through the rest of the world, right? At the same time that they discovered it would be a really good idea to censor American speech right, on social media, right?

The discussion then moves on to the 'government expenditure reduction' agenda that Elon Musk has taken on with the Department of Government Efficiency (or DOGE). Robinson and Andreessen give a brief history of previous attempts in this sphere. Robinson in particular is sceptical of the initiative, because of past history - even on the Republican side (emphasis mine):
Robinson: When Ronald Reagan ran in 1979, he called for the abolition of the Department of Education, which, by the way, only got up and running in 1979.
So it's not as if this thing was some storied inheritance from the gloried past in American history. It was a brand new federal bureaucracy. And when Ronald Reagan took office, Ed Meese, who is the person who told me this story, went up to Capitol Hill and encountered one Republican senator after another who said, you can't do that.
In just a year, they had already figured out how to use that cabinet department to give benefits to their constituents. And now they wanted to protect it, not eliminate it. All of which is to say, what do you boys think you can actually get done at DOGE? High spirits, huge intelligence is about to smack into practical politics.
The difference now? One crucial element is the Elon factor: Peter Robinson: So what you boys are up to now has been attempted, although I must say, not with quite this level of bravado.
Marc Andreessen: And not with Elon.
Peter Robinson: And not with Elon.
Andreessen shares some shocking facts about government employees and how some of them have struck agreements to come to work just one day a month: Marc Andreessen: ...just sort of fun fact which is, what's the occupancy rate of federal buildings in Washington DC right now by people working in the office?
Peter Robinson: Something like 25%, isn't it?
Marc Andreessen: It's like 25% on average.
Peter Robinson: Lower than San Francisco.
Marc Andreessen: Yes, it's basically that the Washington DC Federal Building Complex is basically a ghost town. The security agencies are still full time, the other agencies are not. In the extreme cases, you have certain agencies that are all the way down to a day a month. And this is true: some of the agencies' employees are unionized at the federal level, and there were collective bargaining agreements struck in some of these agencies where they literally, during COVID, got the right to never come back to work. And one of the ones that I'm aware of, an agency I know well, they literally come back to work a day a month. And so what the employees do is they come back a day a month, but they pair the days. And so they come back for two days every two months.
Meanwhile, he worries that China is getting far ahead of the US in many critical industries: And then look, the China thing, the problem is compounding....basically, there's three industries that sort of follow phones that are kicking in right now.
So, one is drones. And it's sort of in a bizarre turn of events, the Chinese basically own the global drone market for all, basically, the consumer drones, all the cheap drones. Which by the way, numerically then are the drones that all the militaries also use in overwhelming numbers. And something over 90% of all drones used by the US military are made in China.... So the drone thing is not just a company, it's an entire ecosystem. It's all of the componentry.
(Two, electric and self-driving cars) So they now have their version of what the Germans used to have, which is sort of, the thousands of mid market companies that make all the parts that go into a car. But the German ecosystem is still making them for old internal combustion cars, the Chinese ecosystem is making them for electric cars and self driving cars. And of course, that means the new Chinese cars that are coming out are really good and they have a giant advantage on cost. And they are starting to bring to market cars that are equivalent in quality to western cars at a third or a fourth of the price. So that's coming.
(Three, robots) And then the big one that follows phones, drones, and cars, logically, is robots.
An example, you've seen these videos of the Boston Dynamics (a US Company) robot dog. (And...) they're not a very aggressive company going to market, those are like $50,000 products. You can go buy one.
China has an equivalent product that, by the way, looks extremely similar and behaves extremely similar, and has, among its capabilities, it can climb stairs, it can do back flips. It can stand on its hind legs. It can climb and descend inclines. You can put wheels on it. It can shoot at 30 miles an hour. It can lock the wheels and climb stairs with the wheels on. By the way, it also is hooked into a large language model, so it talks to you in a very nice, a very plummy voice. It will teach you quantum physics. Full voice control. Price point, $1,500, right?
Andreessen and others believe that a Republican administration under Trump will give a better chance for the US to respond to these issues. Whatever happens, it'll make for a great story. That's all from this week's edition. Ravi PS: If you found this thought-provoking or useful, please consider forwarding it to a friend or colleague. And if you got this email as a forward, you can get your own copy here. Access this email on a browser or share this email on WhatsApp, LinkedIn, or Twitter. You can access the archive of previous newsletter posts here. You are getting this email as a part of the 3-2-1 by Story Rules Newsletter. To get your own copy, sign up here. |
A Storytelling Coach. More details here: https://www.linkedin.com/in/ravishankar-iyer/