Notes ScratchPad...
Is passive investing quite possibly the second coming of portfolio insurance, with a larger-than-1987 Black Monday on the horizon?; & MOAR... A scratchpad...
I really do not understand the numbers behind OpenAI's pitch to investors; & MOAR... A scratchpad...
Financial Economics: As I understand Michael’s big point here, it is that the rise of passive investors and investments has two big effects. The first—call it the David Einhorn effect—is that: (1) the rise of passive means increasingly smaller amounts of relative cash doing anything like real “securities analysis” of any form; (2) the Grossman-Stiglitz rewards will increase for patient investors willing to do fundamental analysis on corporate prospects; (3) the Grossman-Stiglitz rewards will increase for less-patient investors with an edge in figuring out what average opinion will expect average opinion to be in the near future; (4) so those left doing active will have an easier time exploiting noise traders, especially because passive investment’s reliance on dumb beta means that their investments track and amplify noise trading. The second—the Michael Green effect—is that at the level of the market passive is not passive: passive is buy mechanically when outside money flows in and sell mechanically when money flows out: (1) to the extent that this is truly secular and is never going to reverse itself, this means that price-earnings ratios are on a secular rise; (2) this makes equity returns anomalously high, for a while; (3) but eventually that anomaly will end when the passive share reaches homeostasis; (4) to the extent that this is in any way trend-chasing: (5) watch out!; (6) passive is then the second coming of LOR portfolio insurance; (7) and it ends in tears, with a “large left tail return event” followed by a system reset: (8) HENCE THE PAST HISTORY OF AGGREGATE INDEX VOLATILITY THAT GENERATES THE TRULY ASTONISHING SHARPE RATIOS OF PATIENT DIVERSIFIED AMERICAN EQUITIES IS POSITIVELY MISLEADING AS A GUIDE TO FUTURE RESULTS; and (9) because BlackRock and Vanguard and company have such active lobbying arms, regulators cannot deal with this in time. The first is right. What do I think of the second? I think: hmmmmm…:
Michael Green: Control Theory—or "Proof of a System Is What It Does" (POSIWID): ‘As long as flows into passive continue, US equities via index funds will offer anomalously high returns driven by flows, not fundamentals. We’ve never been here before and I can’t offer a guarantee of the future, and certainly the knowledge that the “man behind the curtain” is a fraud can lead to overly-cautious investment frameworks as your neighbor “gets rich.” I’ve spoken publicly on this tension and continue to re-emphasize it. The mechanics of the system speak to a violent unwind. For me, that emphasizes preserving wealth rather than chasing it… <https://www.yesigiveafig.com/p/control-theory-or-proof-of-a-system>
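The flow mechanics Green describes—price responding mechanically to fund flows, and flows in turn chasing recent returns—can be sketched as a toy model. This is an illustrative sketch only, not Green's actual model; every parameter here (flow sizes, price-impact coefficient, chase strength) is invented for demonstration:

```python
# Toy sketch of the "passive is not passive" mechanic: price responds
# mechanically to net fund flows, and flows may chase recent returns.
# All parameters are invented for illustration; this is not Green's model.

def simulate(periods, base_flow, chase, impact):
    """Price path when passive funds buy on inflows and sell on outflows.

    base_flow: steady exogenous inflow (e.g., 401(k) contributions)
    chase:     how strongly flows follow last period's return (trend-chasing)
    impact:    price impact per unit of net flow
    """
    price, prices = 100.0, [100.0]
    last_ret = 0.0
    for _ in range(periods):
        flow = base_flow + chase * last_ret    # inflows amplified by momentum
        new_price = price * (1 + impact * flow)
        last_ret = new_price / price - 1
        price = new_price
        prices.append(price)
    return prices

steady = simulate(40, base_flow=1.0, chase=0.0, impact=0.01)
chasing = simulate(40, base_flow=1.0, chase=50.0, impact=0.01)
print(f"steady inflows, no chasing:   final price {steady[-1]:.1f}")
print(f"same inflows + trend-chasing: final price {chasing[-1]:.1f}")
```

With these made-up numbers the trend-chasing path levitates well above the pure-inflow path—returns driven by flows, not fundamentals—and the same feedback loop runs in reverse if `base_flow` ever turns negative, which is the "violent unwind" worry.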
Journamalism: After 2016 and Trump’s election victory, Dean Baquet gave a mealy-mouthed claim that nobody “had their arms wrapped around the mood of the country that allowed for the election of Donald Trump… thought that Donald Trump was going to be elected President. Anybody who says they did, I don’t buy it. If I had to do that over again, oh, my God… But I would have covered the country a lot differently in the months leading up to the election of Donald Trump”. What is the New York Times’s excuse today?:
Kevin Kruse: Historians: [Trump is] a fascist. Political scientists: He’s a fascist. His own aides: He’s a fascist. The NYT: He shows a wistful longing for a bygone era of global politics…
MAMLMs: I confess I really do not understand the pitch OpenAI is making to investors. It is: (1) “We are going to spend huge amounts of money training our future ChatBots to become hyperintelligent, and they will be so much smarter and more useful than anyone else’s that vast numbers of money trucks will back up to our loading dock”. The right pitch would seem to me to be, instead: (2) “We are the central place for ChatBots; that gives us a huge edge in building out a consumer-software company that will be for ChatBots what Google was for ten-blue-links search”. But pitch (2) is not the pitch that they are making.
And the scale of spending on training they are envisioning undertaking! And that is not a one-off creation of a superior technological capability. Instead, it is a steady-state money outflow leak on the road to AGI. Expectations of hyperprofitability seem to me to require a SuperPangloss view of the likely use of ChatBots and of the ease with which competitors—cough, Apple; cough, Google; cough, Facebook; cough, SalesForce; cough, Microsoft—can and will build much cheaper alternatives with good-enough verbal felicity and much better hooks into both users’ conceptual maps of the world and the reliable ground-truth data to which users want natural-language interface access. To my nose, OpenAI’s prospects for becoming a hyperprofitable moat-protected internet money machine for investors (as opposed to a source of use-value and income for stakeholders, and of technological advance) smell like JDS Uniphase’s prospects back in 1999:
M.G. Siegler: OpenAI Must Scale a Massive Money Mountain: ‘Cory Weinberg of The Information has far more granular detail on how OpenAI is projecting their next several years to play out from a business perspective. And they're... pretty wild! We already knew the big goal of reaching $100B in revenue by 2029 (and my walk up to that number—with the help of ChatGPT, naturally—ended up being pretty close). But ChatGPT itself would go from a projected $3B business this year to a nearly $40B business in 2028. That would make it roughly the same size as the Mac business is today for Apple. Bigger than all of Qualcomm, Starbucks, and Uber. In 2029, they're projecting $55B. That would put ChatGPT ahead of where Oracle, Nike, and Intel are today. Just ChatGPT….
On the flip side, the projected losses to get there will be massive: “Before it gets to that point, losses could rise as high as $14 billion in 2026, nearly triple this year’s expected loss, according to an analysis of data contained in OpenAI financial documents viewed by The Information. This estimate doesn’t include stock compensation, which is one of OpenAI’s biggest expenses, although not one it pays in cash.” That $14B loss is largely because model training costs are projected to get close to $10B that year.
And the compute costs grow from there.
The good news, I guess, is that those costs should be tied to revenue growth… <https://spyglass.org/openai-magic-money-mountain/>
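For a sense of how aggressive those projections are, it is worth computing the growth rates they imply. A back-of-envelope check, using only the figures quoted above:

```python
# Back-of-envelope check on the projections quoted above (figures from
# The Information via Spyglass): ChatGPT revenue ~$3B this year to ~$40B
# in 2028 and $55B in 2029, against losses as high as $14B in 2026.

def cagr(start, end, years):
    """Compound annual growth rate implied by going from start to end."""
    return (end / start) ** (1 / years) - 1

print(f"implied CAGR, $3B -> $40B over 4 years: {cagr(3, 40, 4):.0%}")  # ~91%/yr
print(f"implied growth, $40B -> $55B in 1 year: {cagr(40, 55, 1):.0%}")  # ~38%
```

Sustaining roughly 91% compound annual revenue growth for four straight years, at multi-billion-dollar scale, while absorbing $14B annual losses, is the mountain the headline refers to.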
Cyber Grifters: One of the (very, very few) things wrong with Ben Thompson Thought is that he leans too far in presuming that it is, on balance, good when wannabe software moguls grab your eyeballs and rent them out to people who will try to hack your brain to get you to do things you may well later regret. In his view, any friction in this process is an offensive theft from the productive and the worthy—and Ben tends to drift too far in the direction of thinking that it is Apple Computer and its ATT and other privacy initiatives that is the real villain here. That is, I think, a very screwy way of looking at the world.
Now we find that the New York Times’s Kevin Roose thinks the same, and has turned the dial on it up to 11.
John Gruber objects:
John Gruber: Consider the Plight of the VC-Backed Privacy Burglars: ‘Kevin Roose wrote a column for The New York Times last week under the headline “Did Apple Just Kill Social Apps?” <nytimes.com/2024/10/02/technology/apple…>, about which Jason Snell quipped <sixcolors.com/link/2024/10/did-godzilla…>, “It’s rare that a story is worse than its provocative headline, but this one manages it.” The gist of it is Roose positing that Apple’s new fine-grained controls over contact-sharing in iOS 18 are somehow controversial… [because] burgeoning social networks have, over the last 15 years, used that all-or-nothing access to users’ contacts to great effect building out their social graphs….
Nick Heer wrote a splendid response to Roose’s piece at Pixel Envy — “I Do Not Care About Impediments to a Creepy Growth Hacking Technique” <pxlnv.com/blog/growth-hack> …. This… sums up my first thought: “The surprise is not that Apple is allowing more granular contacts access, it is that it has taken this long for the company to do so. Developers big and small have abused this feature to a shocking degree…”.
My other thought is that new restrictions are inevitably resented by those who were abusing the newly-restricted resource…. The question to ask is, “Is this what users want and expect?” Sometimes it really is that simple. I’m not sure it’s ever worth asking “Is this what growth-hacking VC-backed social-media app makers want?”… <daringfireball.net/2024/10/consider_the…>
Journamalism: More polling bullshit from the Wall Street Journal’s news pages. This time from Ben Pershing, who knows better:
Ben Pershing: ‘A new WSJ poll shows that the fight for the swing states is essentially tied, though Donald Trump has an edge on top issues. The survey finds Kamala Harris with slim leads in Arizona, Michigan, Wisconsin and Georgia; Trump has a narrow edge in Nevada, North Carolina and Pennsylvania. But no lead is greater than 2 percentage points <wsjpoliticspolicy.createsend1.com/t/d-l…>, except for Trump’s 5-point advantage in Nevada, which like the others is within the poll’s margin of error… <wsjpoliticspolicy.createsend1.com/t/d-e…>
Look at all seven of the “swing states”—the ones that would, given relative strength, get Harris from 226 to 319 electoral votes with a uniform swing of 2.7 percentage points in her direction. Nevada and Michigan are (probably) the most Democratic; North Carolina and Arizona are (probably) the most Republican; Wisconsin, Pennsylvania, and Georgia are in the middle (probably). Everything Pershing says about the relative position of those seven states is meaningless statistical noise.
And stop saying “the survey finds Kamala Harris with slim leads…” It doesn’t find anything. The survey makes a very noisy guess.
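The "within the margin of error" point can be made concrete with the standard arithmetic. Note that the sample size of 600 per state below is an assumption for illustration—the excerpt does not give the WSJ's actual sample sizes—and that the margin of error on a *lead* is roughly twice the headline margin on a single candidate's share:

```python
import math

# Rough 95%-confidence margin-of-error arithmetic for a two-candidate poll.
# n = 600 respondents per state is an ASSUMED sample size for illustration;
# the quoted WSJ writeup does not state the actual per-state sample sizes.

def moe_share(p, n, z=1.96):
    """95% margin of error on a single candidate's vote share."""
    return z * math.sqrt(p * (1 - p) / n)

def moe_lead(p, n, z=1.96):
    """MoE on the *lead* between two candidates. Since the lead is
    p - (1 - p) = 2p - 1, its margin is about twice the share's margin."""
    return 2 * moe_share(p, n, z)

n = 600
print(f"MoE on one candidate's share: +/- {moe_share(0.5, n):.1%}")  # ~4.0%
print(f"MoE on the lead:              +/- {moe_lead(0.5, n):.1%}")   # ~8.0%
```

On these assumed numbers, even Trump's "5-point advantage in Nevada" sits comfortably inside the noise band on the lead—which is exactly why ranking the seven states by these polls is meaningless.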
Journamalism: I am definitely with Kai Bird and company on this one. And—surprise, surprise—New York Times reporter William J. Broad seems incompetent. If you joined the Communist Party, you got a party membership card and you paid party dues. It was a lot like joining the American Automobile Association. If I don’t have an AAA card and have not paid my AAA dues, I am not a member. They will not tow my car.
Broad should have stepped away from his keyboard and then never filed his story after he came to this:
William J. Broad: An Old Clash Heats Up Over Oppenheimer’s Red Ties: ‘Dr. Sakmyster… added that the renunciation of cards and regular dues raised basic questions for him…. “It’s a difficulty,” Dr. Sakmyster said, in determining what defined a Communist back then…. Dr. Griffiths… memoir…. “Nobody carried a party card. If payment of dues was the only test of membership, I could not testify that Oppenheimer was a member.” Overall, proponents of the middle path see Oppenheimer as simultaneously being and not being a real Communist... <nytimes.com/2024/10/08/science/oppenhei…>
We have words to de-muddle this “muddle” and to make distinctions. The words are: spy, communist, fellow traveler, agent of influence. Oppenheimer was a fellow traveler. Yet Broad is unwilling to use these words that we have.
Just your weekly reminder that the current generation of self-styled "AIs" (large-scale correlation models) are based on colossally-scaled theft of the work and intellectual property of others with zero remuneration or even acknowledgement.