I signed up to write an 800-word review of Dan Davies’s brand-new "The Unaccountability Machine". The problem is that what I now have is more than 5000 words. And the publication it was for has...
Once more I must say "this" is why I come here. Thanks again.
It is just too funny that you write "How is it a little book? Damned if I know. Had I set out to write anything like book, I could not have done so in less than four times the length." while your 800-word review runs to 5,000 words. Checks out!
Interestingly, the Amazon page shows no page count in the details, and the contents page has no page numbers either.
It has the weight and dimensions though, which honestly don't sound all that small!
Item weight: 1.05 kg
Dimensions: 16.2 x 3.4 x 23.8 cm
So using Brad's "Slouching..." as a comparison, that suggests the book is about 390 pages. Not "small", and still rather chunky, depending on the typeface used.
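Roughly, the back-of-envelope goes like this (a minimal sketch; the "Slouching" figures below are my assumed placeholders, not checked values):

```python
# Scale a known book's page count by the weight ratio, assuming broadly
# similar paper stock and trim size.
slouching_pages = 605        # assumed placeholder
slouching_weight_kg = 1.63   # assumed placeholder
davies_weight_kg = 1.05      # from the Amazon listing quoted above

estimate = slouching_pages * davies_weight_kg / slouching_weight_kg
print(round(estimate))  # ~390 pages
```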
Before being rescued, I'd personally like to experience some "Neoliberalism." On paper the idea of using markets (and quasi-market mechanisms like Pigou taxes and subsidies) to achieve socially beneficial ends looks pretty good.
What a shame that before it could even be tried, the rhetoric was hijacked by those who wanted to achieve socially harmful ends (income transfer to the wealthy) and didn't mind misusing markets to do so, while "Progressives" decided the problem was "using markets."
Aren’t you complaining about a cybernetic problem?
I'm sensing a tension here between your praise for Davies, and by extension Farrell, and the recent DeLong/Smith debate with Farrell concerning the notion of power in Acemoglu and Johnson. Surely what you called the economists' non-standard use of the term "power" is part and parcel of Davies's concerns about the economists' perspective.
I think Davies saw investor relations, where CEOs apparently weren't receiving good internal advice, or were following bad advice from equity analysts. From what I've seen, CEOs get a variety of perspectives and don't know whom to believe, particularly if they never learned the industry (see everyone from GE). But that doesn't matter so much, because a) CEOs were chosen for confidence and salesmanship rather than independent thinking; b) they will always copy their industry colleagues for safety; and c) given the choice between internal operational advice and advice on how to raise the stock price, they will go with the equity-price recommendation nine times out of ten. This creates chaos for the executives down the line, so that only subservient sycophants remain.
Maybe we start with fixing incentives. For example, a business can buy back stock, or it can have executive stock incentive compensation - but it can't have both. Also, executive incentives should be not just in stock but in a representative slice of enterprise value, including slices of debt that cannot be sold for ten years. And for God's sake, severely limit political campaign contributions so that politicians spend most of their time governing rather than asking for money or grandstanding to stimulate donations.
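To make that enterprise-value slice concrete, a toy calculation (all figures invented): the award mirrors the firm's actual capital structure instead of being pure equity, so an executive who levers up the balance sheet also holds the downside.

```python
# Toy "representative slice of enterprise value" award (numbers invented).
equity_value = 60_000_000
debt_value = 40_000_000
enterprise_value = equity_value + debt_value

award = 1_000_000  # total incentive award, locked up for ten years

# Split the award in proportion to the capital structure.
equity_portion = award * equity_value / enterprise_value  # $600k in stock
debt_portion = award * debt_value / enterprise_value      # $400k in debt
print(f"stock: ${equity_portion:,.0f}, debt: ${debt_portion:,.0f}")
```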
Lastly, we need to balance the power of the state, the corporation, and the individual. Those who want to invest all power in just one of these will end up with communism, fascism, or anarchy. Since both states and corporations have the power of legions of individuals, can't be jailed, and have an indefinite life, we must make sure their power is shared, balanced, and limited. How we best do that is damned hard, and defies blaming labels.
Your last para is the theme and recommendation of Martin Wolf's "The Crisis of Democratic Capitalism".
The principles seem pretty bland.
“problems is a matter not for economists who focus on eliminating market failures, but of rather supervising them in a way that ensures that the internal flow of information between deciders and decided-upon is kept in balance so that they become and remain viable systems that are useful to humanity.”
That is to say, apply the same economic concepts to internal flows as external flows. Sounds good to me. Has no one ever thought of this before?
“Our current world is beset by accountability sinks.”
What are the incentives that create these?
“profound societal impacts of the 2008 financial crisis.”
Woah! This does not disqualify Davies; lots of people make this mistake, but it is a strike against him. The _2008 financial crisis_ did not have "profound societal impacts." The Fed's and ECB's inadequate, almost perverse, _response_ to a negative demand shock had "profound societal impacts." True, that the shareholders of the institutions that caused the crisis were sheltered in an “accountability sink” was a very bad thing, but THAT did not have societal impacts.
Now if Davies thinks Black Wednesday was damaging to the British economy, he is both wrong and inconsistent. Every British household should have a little icon of St. George of Soros for having slain UK participation in the proto-Euro. Thanks to Soros, the market got it right over the misguided attempt of the Exchequer to override the potentially self-correcting “private sector firms’ and households’ investment and spending decisions.” Strike two.
Apparently Davies has never run across the idea of Neoliberalism [Milton Friedman, for all his virtues, was not a Neoliberal], which very explicitly is NOT a “system which is set up to maximise a single objective.”
Strike three.
OK, maybe we are playing hockey not baseball and these are just three missed goals, so, onward.
“I hope this book will spur the thinking that we actually need to do.”
Hope is a Christian virtue, but Davies's book would certainly spur better thinking if it were not – on the evidence so far adduced – arguing against a straw man.
See Adrian Tchaikovsky’s novel “Service Model” for a fictional representation of these arguments taken to their logical conclusion, one that also neatly illustrates the breakdown and subversion of nested systems to unhappy ends.
I have Stafford Beer's "The Heart of Enterprise". As with his other works, his idea is to recursively nest a five-component control system. The control system relies on Ashby's Law of Requisite Variety. One component does connect to the environment, and at the top level it surveys the "world".
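As a sketch in code (my own illustrative rendering of the recursion, not Beer's notation):

```python
# Beer's five-part viable system, nested recursively: each System-1
# operational unit can itself be a whole viable system one level down.
from dataclasses import dataclass, field
from typing import List

@dataclass
class ViableSystem:
    name: str
    units: List["ViableSystem"] = field(default_factory=list)  # System 1: operations

    def coordinate(self) -> None:
        """System 2: damp oscillations between the System-1 units."""

    def audit_and_allocate(self) -> None:
        """System 3: inside-and-now control; bargains resources with units."""
        for unit in self.units:
            unit.coordinate()

    def scan(self, environment: str) -> str:
        """System 4: outside-and-future; the component facing the environment."""
        return f"{self.name} surveying {environment}"

    def set_policy(self) -> None:
        """System 5: identity; balances System 3 against System 4."""

# Recursion: each division is itself viable, with its own Systems 1-5.
firm = ViableSystem("firm", units=[
    ViableSystem("division A", units=[ViableSystem("plant A1")]),
    ViableSystem("division B"),
])
print(firm.scan("the world"))
```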
I see this reflected in Stephen Baxter's Cli-Fi novel "Flood", with protagonist Nathan Lammockson, who is always looking for the next opportunity to profit as the Earth becomes an ocean world.
It would be much better if we could remake the world so that our activities aligned more with the needs of the biosphere rather than our own needs – or, more accurately, the needs of those who want to be atop their desired social pyramid, a feature of our primate ancestry and, arguably, our selfish genes.
Cybernetics became sidelined, and machine intelligence quickly changed direction, culminating in where we are today, with humans increasingly outside the loop. This blog post [ https://situational-awareness.ai/ ] takes Ray Kurzweil's simplistic trendline prediction for AGI and extends it; if it is even half correct, we may have superintelligence within a decade or two. Yikes. The Alignment Problem is about trying to ensure these AIs do what we humans want and do not become unstoppable "paperclip maximizers" - i.e., what our Anglosphere-inspired corporations are already doing in the name of "profit maximization or maximized market value". Good luck with that. Human alignment is not "biosphere alignment", and unless cybernetic management can build that in, which I doubt, management cybernetics is a fool's errand.
Cybernetician Norbert Wiener also wrote "The Human Use of Human Beings: Cybernetics and Society", where he showed awareness of some of the issues machines raise. But as we have seen ever since the environment was first brought to our attention, our societies have favored growth above all else, so the destruction of the biosphere continues. Substituting machines, systems, and different opaque management regimes won't change anything. Human nature isn't going to change fundamentally; history demonstrates that, from small tribes to our global technological civilization. If anything, the further divorced we are from "nature", the less we consider its importance. We may be able to build in more "accountability", but what needs to change is the purposes and means of what we do. So far what I see is an emphasis on "playing the angles" to achieve personal goals of wealth and power, and to hell with the consequences for everyone and everything else. I don't recall Beer considering those issues, and I am unclear whether this blog entry does either.
What an incredible lapse in editorial judgement by that benighted publication. You discreetly do not name it. Had you, and were I a subscriber, I would immediately cancel. Have you read Stiglitz's "The Road to Freedom"? I recently finished it. It lays out the paradox of freedom in an economy faced with scarcity: increasing one individual's freedom may reduce others' freedom. It might benefit from the DeLong treatment in a review.
I think this gives some insight into why cybernetics was a dead end. There is almost always an unspoken teleological assumption in cybernetics, the idea that there is some kind of metric to be optimized. The problem with most real systems is that there are multiple entities with multiple goals. The real estate crash earlier this century may have been a disaster for many, but a lot of people made a lot of money out of it, and others did extremely well picking over the carcasses.
They tell anyone involved in a startup to set up a spreadsheet of the various players and the options they hold. In theory, everyone wants the company to do really well, but different parties would benefit more than others from different buyout, shutdown or IPO conditions. A cybernetic framework would simply seek some optimal overall corporate performance, but anyone looking at such a spreadsheet would quickly realize that there are lots of optimums and that one must play one's hand wisely.
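A toy version of that spreadsheet (every number invented) shows why: with a standard non-participating 1x preference, each holder's payoff ranking changes with the exit value, so there is no single "optimal corporate performance" to steer toward.

```python
# Invented cap table: common holders plus one preferred investor with a
# $4M investment and a 1x non-participating liquidation preference.
common = {"founders": 6_000_000, "employees": 1_000_000}   # shares
investor_shares = 3_000_000
invested = 4_000_000
total_shares = sum(common.values()) + investor_shares

def payoffs(exit_value):
    # Investor takes the better of the preference or converting to common.
    as_common = exit_value * investor_shares / total_shares
    preference = min(invested, exit_value)
    investor = max(preference, as_common)
    remainder = exit_value - investor
    common_total = sum(common.values())
    out = {h: remainder * s / common_total for h, s in common.items()}
    out["investor"] = investor
    return out

for exit_value in (3_000_000, 10_000_000, 50_000_000):
    print(f"${exit_value:,}:",
          {h: f"${v:,.0f}" for h, v in payoffs(exit_value).items()})
```

At a $3M fire sale the investor recovers most of the money while the common holders get nothing; at $50M everyone prefers conversion and pro-rata splits. Different holders peak under different scenarios.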
I would disagree with your characterization of cybernetics as optimizing on a single "best outcome". I also have Ross Ashby's "Design for a Brain", which is about using cybernetics to ensure control of many different features of a brain. The key idea is to prevent unstable conditions - the very instability that can come from optimizing on any single "optimum peak". I believe you can trace cybernetic ideas as late as Rodney Brooks's subsumption architecture for his robot insects, like Attila, and they may still survive in the Roomba vacuum cleaner, albeit in software rather than hardware.
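What Ashby called "ultrastability" is the opposite of peak-seeking: keep the essential variables inside safe bounds, and when they escape, make a random step-change to the system's own parameters and try again. A toy sketch (dynamics and numbers invented):

```python
# Ashby-style ultrastability: no objective is maximized; parameters are
# randomly rewired until the essential variable stays in bounds.
import random

random.seed(1)
x, gain = 1.0, 1.4   # |gain| >= 1 makes x diverge: an unstable start
step_changes = 0

for _ in range(1000):
    x = gain * x                 # toy one-dimensional dynamics
    if abs(x) > 10.0:            # essential variable out of bounds
        gain = random.uniform(-1.5, 1.5)  # random re-parameterization
        x = 1.0                  # restart the trial
        step_changes += 1

print(f"settled on gain={gain:.2f} after {step_changes} step-changes")
```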
That's interesting. In college I took a systems theory course that was all about managing complex systems and controlling instability, but cybernetics was never mentioned. Perhaps it is one of those seminal fields like artificial intelligence where whenever something is found useful, it gets its own name and livery.
Do you still have the reading list? Was there no Norbert Wiener or Herb Simon on it?
I have Donella Meadows' "Thinking in Systems". I agree that cybernetics, AI, and systems theory all seem to have gone into their own silos, yet there are clear commonalities. I had relatively recently read Wiener's "Cybernetics" and realized that a number of his discoveries have apparently been "rediscovered" by AI practitioners. I suspect that the divergences happen because of the domains people study. In biology, if someone publishes a groundbreaking technique in Science/Nature, it can create a new direction of research in that narrow field, whilst another related field continues with a different technology, and the fields diverge.
Cybernetics seems to have almost died, with the field reading like a special interest group in old technology. Computer "science" is still growing, and I suspect will encompass systems theory as a subfield of algorithms (if it hasn't already).
Isn't it a truism that researchers often don't know what the neighboring lab is working on, even when there could be fertile collaboration between them – even though one reads articles about successful collaborations that bear fruit?
The collection of references for this post is just fire.
Wasn’t it Huxley who said, “the mind is a reducing valve”?
10/10. No notes.
Dan Is the Man!!!
He taught me good ideas do not have to be sold with lies.
I gave him "Blinding the Behemoth".
I enjoyed D-squared Digest and used "dilbert dogbert" as my nom de plume.