Tim Abrahams


Architecture and Design



Computers in Theory and Practice

Architectural Review April 2013

In 1967 the Centre for Land Use and Built Form Studies (LUBFS) was founded in the Architecture department of Cambridge University by Lionel March, who had arrived in Cambridge to study mathematics with a recommendation from the computer scientist and codebreaker Alan Turing. Under the encouragement of Leslie Martin, March moved to a joint degree in mathematics and architecture; a combination almost unprecedented then, and sadly still rare now. 

The significance of this moment is only now being understood. In his book Semi-Automatic: Motivating Architecture After Modernism, Sean Keller has suggested that this moment was part of a wider process to assert the significance and usefulness of architecture in a changing technological world.

This process, which was part of a wider move in British education, included the assimilation of architecture schools into universities in the United Kingdom and the introduction of 'A' Level requirements for the study of the subject. Architecture beefed itself up by engaging with mathematics. As the postwar welfare state was built, Keller suggests that Martin in particular saw the need to prove that architecture was as rigorous in its technical requirements as engineering and planning. Under Martin’s encouragement, March explored how advances in mathematics such as computing and artificial intelligence could inform the work of the architect. He pursued a range of study areas that overlapped with the interests of the New Brutalists: using graph theory to test new arrangements of rooms in a house; modelling pedestrian flows; generating new, unexpected adjacencies in domestic layouts; and making maps of population distribution in urban areas, drawn through probability from small samples.

In an introduction to an issue of Architectural Design from May 1975 dedicated to LUBFS work, March wrote: ‘We do not consider computer graphics to be very important: although the facilities are at our command should we need them. Making computers do what architects do and think today is like using a steel frame to support a mock gothic building. Fundamentally computers will not change design methods but theory will.’ Although the issue is packed full of formulae and diagrams transcribed from computer analysis (rather than plans and sections), March is keen to emphasise the role of the architect while playing down the importance of aesthetics. It is not the look of computer-designed buildings that is important, he insists. Paramount is the manner in which the computer’s aptitude for analysing spatial adjacencies and flows of traffic between them informs architecture.

The modest premise of the Archaeology of the Digital exhibition at the Canadian Centre for Architecture is to analyse the way in which computers were incorporated into the design process of four projects of a later gestation; but the wider ambition is to show how this process changed both computing and design for ever. A primary impetus for the show was the practical need to preserve digitally native architecture for a wider global audience; to extract the information created in long-forgotten software packages on hard drives; to store it; and finally to think about how to display it. However, this very technical process of excavation also occurs at a moment when there is a wider cultural need to accept the reality of digital design: to understand it as a cultural reality rather than as an empty promise of utopia which can easily be turned into a straw man by detractors.

According to Greg Lynn, the curator, ‘Now is the time when we can stop having discussions about digital technology that start with “in the future”… What this show is about is saying, “In the past, digital technology did this”.’ Of course, a great deal happened in the evolution of architecture and computing between March’s time and the point at which architects actually used computers regularly for design work (the CCA’s period of study). In March’s time, it was not seriously considered that computers would ever be in the hands of the average architect. They were expensive machines built for use by the military and big science, and architectural researchers were lucky to get time with them. The early British contribution to the theory of computing in architecture is outside the remit of Archaeology of the Digital. It was not until the mid 1980s, at the very earliest, that we began to see architects work with computers, and that is the first reason why the CCA has lighted upon the apparently disparate work of Peter Eisenman, Frank Gehry, Shoei Yoh and Chuck Hoberman.

By this time, the theoretical implications of computing first prompted by March and Martin − but spreading quickly to the United States in terms of planning and building form − had in fact quietly made an impact on the way architecture was drawn and conceived. The rise in use of the typological diagram from the mid-’70s onwards, I would argue, is evidence of this change of perspective. The work of Peter Eisenman, included in the exhibition, is another example. Eisenman, of course, studied at Cambridge before the establishment of LUBFS, but under the directorship of Martin. Later in life, he argued (in AD in September 1992) that the digital had prompted a rethinking of perspective. Brunelleschi’s invention of one-point perspective corresponds to a time, he wrote, ‘when there was a paradigm shift from the theological and theocentric to the anthropomorphic and anthropocentric views of the world’. Quoting Deleuze, he argues that the totalising nature of this latter view should be questioned by ‘the fold’. This Deleuzian fold allows the architect to escape the totalising nature of his planimetric view. Although Eisenman was quiet on the moral and social reasons why architects should adopt this perspective, the article is an expression of how the digital has informed the way we view the world.

It is Lynn’s proposition that these four pioneers of computing in architecture went to the computer with specific needs and in doing so created four very clear strands in the evolution of architecture. In Eisenman’s case, it was the need to take a device − namely the structure of DNA − and create a diagram for a building from it. The simple idea that Deconstructivism was an extrapolation in architecture of Cubism is not the full story. Eisenman was interested in applying an idea of language by creating a project in which the building’s scientist-users could read its DNA.

Working at Ohio State University he approached the computer department to produce iterations of the DNA sequence to create a diagrammatic plan of the building. The abstracted diagrams were sent by FedEx (in the year that FedEx launched) from Ohio State to New York, annotated by Eisenman and redrawn in his office, and then returned to Ohio State for further iterations to be generated. Computing aided Eisenman’s search for a deep structure. His use of the computer as a technique is a strategy analogous to the process by which DNA sequencing occurs − a process that was initially chemical but by the mid-1980s had become semi-automated.

The importance of the CCA’s archaeological approach is revealed in this project. Eisenman insists that his approach was intuitive, pragmatic. In an interview with Greg Lynn, he said ‘I was teaching at Ohio State, I had this job to do at Ohio State. Chris Yessios was teaching computers. I didn’t know what that was. And I am not sure he knew what it was, either, by the way. And I said, “Hey, Chris, you could really help us. We don’t have a way to model the things that we want to do. And if you are interested in 3-dimensional modelling, which is what you say you’re interested in, develop something that we can use.”’

From this dialogue, Yessios would go on to develop FormZ, which is now used not only in 3D computer modelling for architecture but also in Hollywood for animation. 

It also changed the nature of architectural representation in terms of typology. In hindsight Eisenman sees his early use of computers, and in particular the competition entry for the Biozentrum (the Biology Centre for the JW Goethe University, Frankfurt, which was designed in 1987) as part of the evolution of a core principle of his architecture. It was a means for him to develop a particular kind of representation.

‘[Colin Rowe] wasn’t interested in the diagram. He was into the parti, and the parti, of course, is compositional. I was interested in something other than the parti, which is the diagram, but the morphological diagram as opposed to the typological. Of course, Rem [Koolhaas] is interested in the typological diagram. I’m interested in the morphological one; the one that generates form. And I think this is where Colin and I split over parti versus diagram,’ he says, referring to the period when the Biozentrum was developed. Through dialogue with the computer a key organisational principle which can be repeated and adapted emerged.

Gehry’s use of computers is famously more extreme. Here is the copyright holder of proprietary software used throughout architecture, yet he describes his one direct experience of designing with a computer − the initial stages of the Lewis House − as ‘excruciating’ and like ‘being trapped in a vice of pain’. Yet the Lewis House’s long evolution from 1985 to 1995 began with physical models, which were then digitised. The digital files were then worked on, and 3D prints, deposition models and solid laminate models were made from them. Basing himself on some of those models, Gehry took a soft felt material and hardened it with wax; he then scanned the results and tried to rebuild them. As Lynn describes it, ‘it’s a very rich exchange between physical model-building, drawing and digital modelling’. 

It is Lynn’s contention that the four designers included in the exhibition go to computers to find a solution to a problem. Gehry, self-effacing in a similar way to Eisenman, insists that he went to it to solve construction problems. Having drawn a previous project using descriptive geometry, Gehry was disappointed to find inconsistencies in the final build. He approached his employee Jim Glymph to solve the problem. Glymph turned to the software CATIA, which was being used in the aircraft industry, and Gehry’s team worked in-house to give it a graphic interface that could be used in architecture.

CATIA proved its worth on the Barcelona Fish, outperforming the AutoCAD being used by the executive architect. Gehry compares his work to that of artists, such as his friend Richard Serra, who were seeking a way of making sculptural work at an urban scale, resisting the urge for expressive work to be reduced, as he puts it, to being the ‘poop in the plaza’. Gehry suggests that the software merely gives him the opportunity to sketch and model as he would wish, then scale up. And yet as the Barcelona Fish makes clear, there is a continuum between the structural advantages of CATIA, which could be easily used with steel production, and the lustrous finish that could be achieved, again in metal. It is nigh on impossible to say what comes first: the lustre in the graphic interface of the digital surface or the metal surfaces that would invariably clad the steel substructure. Lynn describes seeing the Lewis House for the first time and seeing it as ‘native computer’.

If the historical association between computing and architecture (which began in Cambridge and later moved to the US) was an attempt to assert the scientific nature of architecture, it is ironic, then, that Gehry operating in a counter-direction − drawing architecture to art − should become one of the most successful users of computers in architecture. He describes how Serra looked at a typological model in his office for a tower block in Hanover and asked how it was going to be made, subsequently using the technology and staff that Gehry was using to create similar torqued pieces. Today Gehry sees the artists gaining the upper hand. ‘What’s happened is architecture has backed off the sculptural edifice from some kind of embarrassment,’ he says. However, he insists: ‘there’s a need for it, and so cities are hiring the artists to do it now’, and points to the ArcelorMittal Orbit as a case in point. (The unapologetic way in which Gehry sees Anish Kapoor addressing a ‘need’ − and what that ‘need’ actually is on a political level − partially explains the critical outpouring against the project.)

In this magazine in February 2012, Patrik Schumacher stated that architecture ‘should be sharply demarcated against other competencies like art, science/engineering and politics’. It was the last in the list that earned him the opprobrium of other commentators, showing that we live in an age when architectural debate is almost exclusively focused on working out the discipline’s relationship to politics; to such an extent that we are blind to its relationship to anything else. Of course, it is only in the UK that both traditionalists and left-wing critics started reaching for the bricks when they heard the word Parametricism (not to throw them of course but just to reassure themselves of their rectilinear form).

Lynn believes that Eisenman is the inventor of the parametric approach to the computer − much, it must be said, to the bewilderment of Eisenman. However, it is the Japanese architect Shoei Yoh whom Europeans would more readily recognise in this role. Lynn’s point about Eisenman is that he uses, repeats and adapts an architectural idea, which is different from the structural repetition and adaptation you would associate with the work of European practitioners like Kas Oosterhuis. Architects like him have explored computer technology’s capacity to establish relationships between similar members in a computer model. The model can then be adapted and, following the determined relationships, individual members can be specified and sent to manufacture as unique parts, each with its designated place in the assembly. 
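The principle described above − shared relationships driving a family of unique parts − can be sketched in a few lines of code. This is a minimal illustration of the general parametric idea, not a reconstruction of any tool Yoh or Oosterhuis actually used; the truss geometry and the names are invented for the example.

```python
# A minimal sketch of parametric member generation: every steel tube in a
# truss is derived from shared relationships, so adapting one driving
# parameter regenerates the whole schedule of unique parts.

import math
from dataclasses import dataclass

@dataclass
class Member:
    """One unique tube in a three-dimensional truss."""
    index: int
    length: float    # metres
    diameter: float  # metres

def generate_truss(span: float, bays: int, min_dia: float = 0.05) -> list[Member]:
    """Derive every member from the defining relationships rather than
    drawing each one by hand: depth, and so member length and tube size,
    varies along the span."""
    members = []
    for i in range(bays):
        x = (i + 0.5) / bays                       # position along the span, 0..1
        depth = 1.0 + 2.0 * math.sin(math.pi * x)  # truss deepens towards mid-span
        length = math.hypot(span / bays, depth)    # resulting diagonal length
        diameter = min_dia * (1 + depth)           # thicker tube where deeper
        members.append(Member(i, round(length, 3), round(diameter, 3)))
    return members

# Adapting the model is a single parameter change; every unique part follows,
# ready to be scheduled for manufacture with its place in the assembly.
schedule = generate_truss(span=30.0, bays=12)
```

The point the code makes is the one Yoh articulated: because each part is computed, not drawn, a roof of all-different members costs little more to document than a repetitive one.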

Yoh championed this approach while working in relative isolation in Japan, developing it with a German manufacturer, although he was later invited to Columbia when Bernard Tschumi was developing his paperless studio projects. Yoh began with wooden structures but moved into steel for his roofs at Odawara (1991) and Galaxy Toyama (1992), creating three-dimensional trusses of varying depths with different sizes of steel tube. Tellingly, he studied economics before architecture. His argument would be familiar to anyone studying at the Architectural Association five years ago − a rethinking of Fordist mass production through the introduction of the computer, which ‘helps making all different kinds of shapes, length, angles, so the production cost does not exceed our budget’. Except Yoh was developing this in rural Japan in the ’80s.

Yoh’s early use of computer technology means the work he produced is akin to the punch-hole schematics used by Lionel March. His are some of the first computer models that exist not simply to create form but to test a project to be built. In the computer test model, the manner in which points of stress were identified developed its own aesthetic. It is an adaptation of Frei Otto’s experiments with soap bubbles and the way they pointed to key organisational relationships in structure (although Otto was also making a point about urban planning). The representation of these stresses in computer modelling produced the curved forms with which digital architecture was most closely associated, certainly at the turn of the century.

Chuck Hoberman was interested too in using software that could generate small parts with tiny variations; yet Hoberman’s importance derives from his need to make software which could design moving parts and which could therefore represent an animated structure. In the 1980s, Hoberman was using AutoLISP, a scripting language embedded in AutoCAD, to look at how moving parts would interact with one another. At that time, aerospace software like CATIA would have cost $50,000 for a licensed platform. Hoberman was using AutoCAD to invent a way of looking at moving parts without anyone showing him how to. Far more than with the other architects, still images of his work do not reflect the way in which he designed. 

Hoberman’s story shows the proximity of architecture, through computing, to the grand technological narratives of modern American technology. His architectural pieces emerged in part from working with NASA on their ‘deployable structures programme’. ‘Rather than constructing a structure in space, you would unfold a structure in space,’ says Hoberman. He worked in the manner of a Silicon Valley start-up with his partner Bill Record, who had a company called Zengineering refurbishing digital machining centres. Record would program in G-code, the programming language used in automation, to make these parts. Hoberman created front-end software so that his designs could flow seamlessly into the machine.

From this backwoods coding emerged the dynamic software that Hollywood took to, and it created a huge driver for the development of the software used today. Hoberman’s attitude captures the ambivalence of the profession: ‘on the one hand, I love that capacity to create visions and see them do all kinds of imaginative things and it revolutionises our perception of what’s possible. On the other hand, I was on the side, saying: “Yes, but I do special effects in real life: they have to work”.’ Architecture − and Hoberman’s designs particularly − represents an important staging point in the history of technology throughout the world. Hoberman’s models and drawings stand at a very clear juncture between big science and big art.

In his books, The Net Delusion and To Save Everything, Click Here, Evgeny Morozov has launched an attack on what he calls ‘solutionism’ − the idea that given the right code, technology can solve all of mankind’s problems. Taking aim at the rhetoric of Eric Schmidt, former CEO of Google, in particular, he argues that this drive to make government ‘open’ actually conceals an emphasis on limited technocratic issues rather than wider democratic ones. For their part, the tech giants point out the very real improvements they can make to people’s lives, in accessing information. The debate notably does not take place around the historical achievements or shortcomings of information technology, but around potentials. Can smartphones encourage people to vote? Should they? These speculative, rather arbitrary questions characterise the current debate. 

Part of the impetus for the CCA’s work in computing is to ground this argument in regard to architecture. It is worth bearing in mind that two of the projects in the current CCA exhibition played key roles in the formation of software tools that architects use today. To understand the role that architects with divergent needs and requirements played in the formation of the tools we now use, it is vital to understand that they went to computing with ideas, not yet fully formed, about what they would be able to achieve with them. As Lynn remarks: ‘what’s interesting about these four is that they knew what they wanted before they started, and they went to people with an intent and used the software as best they could to achieve their intent’. The drawings they produced offer extraordinary, unexpected answers.