Algorithmic Optimization of Production

From P2P Foundation

History

Max Grunberg:

"The first optimisation algorithm for production planning was formulated by the mathematician Leonid Kantorovich (1960) in 1939 in the context of Soviet industrialisation, when he was approached by a Plywood Trust in search for assistance to maximise their output in order to hit their plan target. Given information about the productivity of eight peeling machines for five different types of wood and an assortment plan specifying product proportions, the optimisation problem consisted in identifying the best possible use for each machine. What has been solved prior by machinists and managers, was now calculated by Kantorovich with a novel computational optimisation technique. Central to his method of ‘resolving multipliers’ was the calculation of opportunity costs, which he later named ‘objectively determined valuations’ that indicate which combination of the expenditure of materials and labour to apply in order to achieve an optimal output. As Roy Gardner (1990: 643) points out: ‘Kantorovich’s fundamental economic insight’ was that an ‘optimal plan is inseparable from its prices’. From the early beginnings at the Plywood Trust, Kantorovich grasped the potential this technique had for the entire Soviet planning apparatus because his objectively determined valuations function similarly to money prices in allowing economic actors on the ground to compare the opportunity costs or economic consequences of different production methods in regards to the global optimality of the system. Thus, in theory, enabling a form of in natura planning by making possible precisely the cost-benefit analysis or inquiry into substitution relations that Ludwig von Mises had claimed at the advent of the socialist calculation debate to be impossible in the absence of a market.

Yet under the then-prevailing political circumstances Kantorovich was initially hesitant to circulate his ideas. This was because, following the outbreak of the Second World War and against the backdrop of the Great Purge, he feared his work might be used outside the country (Kantorovich, 1987: 256) and would therefore run the risk of implicating him in ‘counterrevolutionary tendencies’, owing to the proximity of his proposal to the bourgeois concept of opportunity cost, paired with the general aversion of Stalinist planners at the time to applying mathematics to the field of economics. Nevertheless, convinced of its practical value for the Soviet economy, he soon began to cautiously lobby for the wider application of his ideas in optimising planning processes on a larger scale. After being turned down by Gosplan, the major state planning commission, during the Second World War he halted his campaign for the time being, in part for his own safety (many had been deported or executed for less) but also to place his mathematical abilities at the disposal of the motherland’s war effort.

Meanwhile, on the other side of what soon became the Iron Curtain and without any knowledge of Kantorovich’s previous work, George Dantzig, a likewise gifted mathematician, was inspired by the military logistical problems of the Second World War to develop a similar optimisation method in 1947 under the umbrella of the Army Air Forces Operations Research, which would become the foundational method of what is today known in the West as linear programming. After William Orchard-Hays wrote the first commercial-grade software for solving linear programs in 1954 while working at the RAND Corporation, Dantzig’s ‘simplex method’ became an essential part of capitalist production by offering solutions to routing and scheduling problems in production planning. Over the following decades, the adoption of linear programming schemes and other forms of mathematical optimisation expanded into every corner of the commercial world.

Ironically, at the height of Joseph McCarthy’s red scare, certain individuals involved in developing mathematical solutions for economic problems, often coming directly out of the American Military-Industrial-Academic Complex, dared to express thoughts similar to Kantorovich’s about expanding the application of algorithms to coordinate entire economies.

For instance, in the introduction to the proceedings of the first conference on linear programming, held at the Cowles Commission for Research in Economics in 1949, the organiser Tjalling Koopmans weighed in on the socialist calculation debate by stating:


- The problem of efficient production then becomes one of finding the proper rules for combining these building blocks. […] To Mises’ arguments regarding the unmanageability of the computation problems of centralized allocation, the authors oppose the new possibilities opened by modern electronic computing equipment. (Koopmans, 1951: 6–7)


Not only Koopmans but indeed a number of socialist scientists at research institutions in the West seriously engaged with the idea of engineering the economy at the time. To some extent, the Pentagon even treated such a computerised approach to economic allocation as a threat and as a potentially viable alternative to a capitalist market economy guided by price signals.


In a classified CIA report on Leonid Kantorovich’s book The Best Use of Economic Resources (1965), which he originally published in Russian in 1959, following Stalin’s death and the commencement of ‘Destalinisation’, to popularise his ideas, the far-sighted reviewer concluded that:

- this book is exceptionally significant because of the contribution it can make to Soviet economic theory and to planning practice in so far as it is accepted. If Kantorovich’s concepts and methodologies were fully implemented there would have to be some shift in the locus of economic power – an unlikely development. The Soviet vested interests, be they party officials, managers, or planners, are not likely to relinquish control over the multitudinous economic decisions made at the intermediate layers of the Soviet economy. (CIA, 1960: 5)


For the American security apparatus, mathematical programming was regarded as an essential part of a perceived Soviet embrace of cybernetic theory and a wider scheme to develop a ‘Unified Information Net’ (Conway and Siegelman, 2005: 318), for which the agency opened ‘a special branch to study the Soviet cybernetics menace’. In 1962, President Kennedy’s top aide warned that the ‘all-out Soviet commitment to cybernetics’ would give the Soviets ‘a tremendous advantage’ and that ‘by 1970 the USSR may have a radically new production technology, involving total enterprises or complexes of industries, managed by closed-loop, feedback control employing self-teaching computers’. If American neglect of cybernetics continued, he concluded, ‘we are finished’ (as quoted by Gerovitch, 2008: 336). However, any such concerns were unjustified: an information network for computerising the planning process and replacing the predominantly analogue and largely uninformed material balance planning with a unified framework built on mathematical optimisation techniques never materialised in the Soviet Union. It remained instead only a futuristic vision, or a threatening backdrop to the Cold War."

(https://journals.sagepub.com/doi/full/10.1177/10245294231218868)


The Post-Soviet Debates about Algorithmic Optimization of Production Planning

Max Grunberg:

"While many planometricians like János Kornai turned their back on optimal planning after the implosion of the Soviet Union, the torch was subsequently picked up by the Western Marxists Paul Cockshott and Allin Cottrell in their influential book Towards a New Socialism published in 1993, where they developed a cybernetic planning model that supplants classical linear programming with a more efficient ‘harmony algorithm’, which laid the groundwork for the discursive resurgence of using algorithms for the economic coordination of material flows. In its simplicity, this rather mechanistic framing of the economy envisions the socialist allocation problem to be solved within a unified model consisting of a set of gigantic input-output matrices in disaggregated terms providing information about production methods in relation to a target vector, that is the plan target for final consumer goods. What is eventually optimised in the model is plan fulfilment, or in other words, the harmony of supply and demand. This is achieved mathematically through a singular objective function (i.e. the harmony algorithm), subject to a set of constraints, that minimises the deviation of the final production output from the initial plan target for final consumer goods. As an outcome of this optimisation process, one receives a detailed allocation plan for the entire economy.11 Their decision for a single objective function is built on the assumption that contemporary information technology would allow the process to run in disaggregated terms, which constitutes a crucial difference in comparison to the situation with poor aggregate statistics Soviet planners faced. In theory, this would allow for detailed allocation decisions to no longer be undertaken by bureaucrats endowed with particular interests, but by a seemingly objective algorithm. In this way, the motivation to use such algorithms is not only to increase allocative efficiency, but despite its centralist characteristics also to wrest the political power formerly held by Soviet bureaucrats and transcend the arbitrariness of their subjective decisions to a more objective process guided by formal logic.

More recently, other proposals have been put forward applying the simplex method (Dapprich, 2022), machine learning (Samothrakis, 2021) or an interior point algorithm (Härdin, 2021b) for the optimisation. Regarding their technical feasibility, such algorithmic calculations rest upon a twofold challenge. In the literature this is usually understood as the two parts of Friedrich Hayek’s ‘knowledge problem’, which can be subdivided into the issue of generating and that of processing economic data. While usually attributed to Hayek, the latter concern was already articulated by his colleague Lionel Robbins in 1934, who acknowledged that the socialist planning problem could be framed as a system of equations, but believed that in practice

- this solution is quite unworkable. It would necessitate the drawing up of millions of equations on the basis of millions of statistical tables based on many more millions of individual computations. By the time the equations were solved, the information on which they were based would have become obsolete and they would need to be calculated anew. (Robbins, 1934: 151)
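As an editorial illustration of what such a ‘system of equations’ looks like in its simplest form, the standard input-output notation later associated with Leontief can serve as a stand-in (this is not Robbins’s own formulation):

$$x = A\,x + d \quad\Longleftrightarrow\quad (I - A)\,x = d$$

where $x$ is the vector of gross outputs, $A$ the matrix of input coefficients ($a_{ij}$ units of good $i$ required per unit of good $j$) and $d$ the vector of final demands. With one equation per product, a fully disaggregated economy yields the ‘millions of equations’ Robbins had in mind, and every change in the coefficients or in demand requires the system to be solved anew.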

Meanwhile, regarding classical linear programming schemes, this argument still holds true today, due to their rather high computational complexity. In computer science, the run time of algorithms is differentiated into complexity classes according to big O notation, the most important classes being constant O(1), linear O(n), log-linear O(n log n), polynomial O(n^k) and exponential O(e^n). Given the same input size n, the run time of these classes differs substantially. Algorithms in exponential time are feasible only for the tiniest data sets, but even those with polynomial run time can bring contemporary supercomputers to their limits if the input is large enough. For a long time, it was not known whether linear programming techniques such as the simplex method had exponential complexity. Today we know that for most problems the simplex algorithm’s complexity is within polynomial time, at roughly n^3. It is certain that during the 70s and 80s not enough computing power could have been mobilised in the Soviet Union to handle an input of millions of variables, and even with the computational means of the present, an algorithm of that complexity would be practically infeasible for calculating a plan for a national economy in disaggregated terms.
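A rough back-of-the-envelope calculation shows why the complexity class matters at this scale. The sketch below counts the operations implied by each class for an input of one million variables and converts them into running time; the figure of 10^12 simple operations per second is an assumed round number for a fast modern machine, used only for illustration.

```python
# Back-of-the-envelope comparison of complexity classes for n = 1,000,000
# variables. The 1e12 operations-per-second figure is an assumption chosen
# for illustration, not a benchmark of any particular machine.
import math

n = 1_000_000
ops_per_second = 1e12

classes = {
    "O(n)":       n,
    "O(n log n)": n * math.log2(n),
    "O(n^3)":     n ** 3,            # roughly the simplex method's typical cost
}

for name, ops in classes.items():
    seconds = ops / ops_per_second
    print(f"{name:11s} ~{ops:.1e} operations ~ {seconds:.1e} seconds")

# O(e^n) is omitted: already for n in the hundreds the operation count
# exceeds any conceivable computing budget.
```

Even under this generous assumption, an n^3 algorithm over a million variables runs for about a million seconds, or roughly eleven days, and since the cost grows cubically, a genuinely disaggregated economy with hundreds of millions of variables would push the running time to geological scales.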

However, the recent proposals build on the insight that in dynamic economies there will be no need for an optimal solution, and approximations will likely be sufficient for the task. By sacrificing a degree of optimality for computational feasibility, algorithms like Cockshott’s harmony algorithm, which is of order O(n log n) (Cockshott, 2019b: 314), and the interior point method used by Härdin (2021b), which allows input sizes of up to several billion variables to be computed in a matter of hours on a single machine, render the computational part of the knowledge problem null. Computation time is further reduced by the fact that the plan does not have to be recalculated in its entirety, since operations only have to be performed on variables that have changed between iterations, and by the fact that such a matrix would be very sparse: the overwhelming majority of entries in a disaggregated input-output table are zero, because only a fraction of all existing products will ever enter as an input into the production of another good. Distributed computing would allow these calculations to be made not in a single computation centre but across a decentralised computer network.
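The sparsity point can be made concrete with a small sketch. Below, a hypothetical disaggregated input-output matrix with 100,000 products and an assumed average of ten direct inputs per product is stored in compressed sparse row form; all sizes and coefficients are invented, and serve only to show how far sparse storage shrinks the problem compared with a dense matrix.

```python
# Sketch: storage cost of a hypothetical sparse input-output matrix versus a
# dense one. All sizes and coefficients are invented for illustration.
import numpy as np
from scipy.sparse import csr_matrix

n_products = 100_000        # hypothetical number of product types
inputs_per_product = 10     # assumed average number of direct inputs per product

rng = np.random.default_rng(0)
rows = np.repeat(np.arange(n_products), inputs_per_product)
cols = rng.integers(0, n_products, size=rows.size)   # which goods serve as inputs
vals = rng.random(rows.size).astype(np.float32)      # input coefficients

A = csr_matrix((vals, (rows, cols)), shape=(n_products, n_products))

dense_bytes = n_products ** 2 * 4                    # a dense float32 matrix
sparse_bytes = A.data.nbytes + A.indices.nbytes + A.indptr.nbytes

print(f"nonzero entries: {A.nnz:,}")
print(f"dense storage:   ~{dense_bytes / 1e9:.0f} GB")
print(f"sparse storage:  ~{sparse_bytes / 1e6:.1f} MB")
```

The same logic carries over to the arithmetic: sparse matrix-vector products touch only the stored nonzero entries, which is what makes iterative, incremental replanning over such a table feasible.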

Thus, the technical problem of processing the economic data appears to be tractable, although the authors of a recent Austrian critique informed by complexity economics and Hayek’s theory of mind claim otherwise, arguing that the ‘complete control of any complex system is impossible, due to the problem of self-reference’ (Moreno-Casas et al., 2022: 574–575). However, such a framing of computational complexity goes far beyond the technical issue of number crunching, which seems to be – and here even the Austrians might agree – not the critical limitation of such an economic framework. What is at stake is not whether our technical systems can handle the data load, but how accurate economic data is generated and fed into such a model, as well as how the static map is then laid back over the dynamic territory."

(https://journals.sagepub.com/doi/full/10.1177/10245294231218868)