Designing Design

Image by George Faber

I was recently contacted by University of Cincinnati DAAP thesis student George Faber to get my thoughts on parametric modeling and the impact of how we design on what we design. George's thesis is titled "Designing Design" (a term borrowed from Scott Marble's Digital Workflows in Architecture). Here's the abstract:

By capitalizing on the inherent relationships that can be created between objects, parametric modeling aims to reduce the amount of time editing design elements. These tools enable designers to define the constraints around which an object is created rather than designing the object itself. The result of this thinking leads to the creation of a parametric model that embeds design logics into the construction of the digital model. Through the typology of an indoor rock climbing gym, the focus of this thesis is on imparting an understanding and sensibility to designers of how to use parametric models in a productive way. Various parametric models were used as exercises in form finding, surface panelization and rationalization, and methods to produce fabrication information for assembly purposes. This thesis is about the digital workflows necessary for parametric modeling, which in turn argues for the need for parametric thinking.

Below you'll find George asking me to respond to a series of questions he's pulled from texts that are informing his thesis. We here at Designalyze look forward to checking out George's finalized document this summer!

 

George: To begin, what particular parametric software do you use (Grasshopper, Maya, Generative Components, CATIA, Processing/C++)?

Brian: On a daily basis at Woods Bagot I work in Grasshopper and Dynamo. If we're counting scripting and programming as parametric tools (I assume we'll get more into defining what parametric means later) then I'd include Python. In my teaching at Pratt GAUD I use Grasshopper and Maya.

 

G: I know there are many definitions of what a parametric model is or what it does; some would even define it by how it looks. My argument, however, is that parametric modeling is about designing the constraints around which an object is created rather than the object itself. Would you agree or differ? (Woodbury, Robert. Elements of Parametric Design. Oxon: Routledge, 2010.)

B: I kind of see where you're going, but I might argue that there's no real separation between designing constraints and designing an object. Constraints, or more precisely constrainable parameters, are inherent to a designed geometry - it couldn't exist without them. What parametric tools allow us to do is to make the relationship between parameters and geometry explicit in order to form model conditions. These tools also allow us to push data downstream to other aspects of a design process, such as documentation or fabrication. For more on this space, definitely check out Daniel Davis's piece "Not Everything is Captured by the Fitness Function." So we might conclude that parametric modeling consists of intuitive and fluid creative acts in the medium of associations. But if I were you I'd just stick to what you said.
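To make that explicit relationship concrete, here's a trivial Python sketch (all names invented for illustration): the width and height were always inherent to the rectangle; a parametric model simply exposes them so that changing one value regenerates the geometry downstream.

```python
# Trivial sketch: the parameters (width, height) were always inherent to the
# rectangle; a parametric model just makes the relationship explicit so that
# changing a parameter regenerates the geometry that depends on it.
def rectangle(width, height):
    return [(0, 0), (width, 0), (width, height), (0, height)]

print(rectangle(4, 2))
print(rectangle(4, 3))  # change one parameter, the geometry updates downstream
```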

 

G: To what extent do you implore parametric models? Is it a tool for form finding? Evaluating iterations? Producing fabrication information? During what phase of a project is a parametric model most beneficial? (Marble, Scott. Digital Workflows in Architecture. Basel, Switzerland: Birkhauser, 2012.)

B: Implore is a funny (but apt) word here as it implies a kind of desperation behind our desire to get the software to do something useful or meaningful. One way I use parametric models is for data-driven design, such as driving shading devices (and subsequent documentation) with environmental data or driving structural member sizing (and subsequent fabrication) with stress and deflection information. Another way I use the tools is to collect and visualize data from a model, whether it's producing specialized BIM schedules or providing a set of false-color analysis diagrams indicating solar radiance levels or panel planarity deviation in a facade system.

The phase question is a bit trickier. Early design is where the most impact can be made, but sufficiently constraining the model requires a number of decisions to have already been made. It also varies from project to project. The model is useful in different ways at different times and that just has to be negotiated as intelligently as possible. One way to do this is to modularize the model - you don't always need to be running every parametric routine at every moment in the design process - the more you can separate your process into discrete steps that can be referenced or called at will and are equally easy to remove or de-activate, the better off you'll be. If you're just continually adding to one giant plate of Grasshopper spaghetti then you're going to lose control of the model.
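To make the modularization point a bit more concrete, here is a minimal Python sketch (all step and function names are hypothetical, not from any particular project) of a pipeline where each parametric routine is a discrete step that can be activated, skipped, or removed without touching the rest:

```python
# Minimal sketch: each parametric routine is a discrete, named step that can be
# toggled on or off rather than living in one giant definition.
# All names here are hypothetical placeholders.

def panelize_facade(model):
    print("panelizing facade...")
    return model

def size_shading_devices(model):
    print("sizing shading devices from environmental data...")
    return model

def export_fabrication_data(model):
    print("writing fabrication schedules...")
    return model

# The "pipeline" is just an ordered list of (step, enabled) pairs.
PIPELINE = [
    (panelize_facade, True),
    (size_shading_devices, True),
    (export_fabrication_data, False),  # de-activated until the documentation phase
]

def run(model):
    for step, enabled in PIPELINE:
        if enabled:
            model = step(model)
    return model

if __name__ == "__main__":
    run(model={})
```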

 

G: Advocates of parametric modeling claim that the success of such models lies in the ability to define the correct relationships to allow for flexibility.  However, in a process as ambiguous as design where the solutions are often unknown, how does one define the correct constraints to create a successful model? (Davis, Daniel. Modelled on Software Engineering: Flexible Parametric Models in the Practice of Architecture. Melbourne, Victoria: School of Architecture and Design RMIT University, 2013.)

B: Experience, perhaps. If I work on two towers, I'm not necessarily going to know the best way to seed the appropriate constraints at the start of the next project. If I've worked on 100, I'm probably going to have a pretty solid idea of how to get started and what pitfalls to avoid. This is done through knowledge capture and templating - I can look through finished projects for repeated routines, encapsulate them as bespoke, sector-specific tools, and deploy them on the next project as part of a workflow-specific template file. Another way to look at this is that parametric routines are no different from any other design craft, medium, or technique. They start as basic sketches and need to be prototyped before developing into a working model, just like you might sketch a wall section before modeling it, or prototype parts for fabrication before developing a product. Shane Burger discusses this in his talk "Sketch, Prototype, Experience":

The prototyping process increases [the model's] ability to handle complex geometries and analytical workflows. Conversely, computation starts to preserve and increase its relevance to the lifecycle of the design process through that level of integration.

But in the end every design is different (surprisingly, astoundingly different each and every time) and there's no perfect parametric theory of the universe, so we modify our assumptions accordingly and move on with our lives.

 

G: Parametrically controlled models work well for evaluating quick design iterations with little or no time or effort spent manually rebuilding. However, in my own experience I sometimes find my models grow so tangled that they can no longer accommodate even the most trivial changes and I have no choice other than to start over. Herein lies the paradox of parametric modeling: what I have created to accommodate change (and save time) is ultimately broken by change (and costs more time). Comments? (Davis, Daniel. Modelled on Software Engineering: Flexible Parametric Models in the Practice of Architecture. Melbourne, Victoria: School of Architecture and Design RMIT University, 2013.)

B: Ian Keough, the guy who started Dynamo, has an interesting take on this: parametric tools give you more options for where you want to place a model's intelligence.

Through its integration with Revit, we like to say that Dynamo enables you to choose where you want to put your intelligence. For example, you might have an Adaptive Component family in Revit that has incredibly complex internal relationships that you’ve constructed and refined over many months or years. This family has a large amount of embedded intelligence. But, it has limited situational intelligence. That is, you place it next to another version of itself in a project and the two instances can’t talk to each other, and they can’t respond in any variable way to other drivers in the project. This is where you can add an additional layer of intelligence with Dynamo, using Dynamo to get parameters from one to set parameters on the other, or to set parameters on the instances based on some other value in the project. By comparison to GC or Grasshopper, you’d have to build all of this functionality in the graph, which is totally possible, albeit a bit unwieldy.

I think a common mistake is to place all of the model intelligence into a single "unwieldy" parametric definition/graph/script/whatever as opposed to thinking of it as a hierarchy or ecosystem of tactics defining an overall data-driven strategy. The aforementioned modularization as well as relying on associated apps for their relative strengths and efficiencies (for example: defining and editing a NURBS surface through Rhino modeling rather than building a cumbersome parametric definition) are means to this end.
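As a rough illustration of that extra layer of intelligence, here is a hedged sketch of what such a Dynamo Python node might look like, assuming two instances wired into the node and a shared numeric parameter named "Offset" (the parameter name and the halving relationship are my assumptions, not anything from the interview):

```python
# Rough sketch of the "situational intelligence" layer described above: a Dynamo
# Python Script node that reads a parameter from one Revit instance and uses it
# to set a parameter on another. The parameter name ("Offset") and the two
# wired-in elements are hypothetical. Runs inside Dynamo, not standalone Python.
import clr
clr.AddReference('RevitServices')
from RevitServices.Persistence import DocumentManager
from RevitServices.Transactions import TransactionManager

doc = DocumentManager.Instance.CurrentDBDocument
source = UnwrapElement(IN[0])   # first instance, wired into the node
target = UnwrapElement(IN[1])   # its neighbor

# Read a value from the source instance.
offset = source.LookupParameter('Offset').AsDouble()

# Write a derived value onto the target instance inside a transaction.
TransactionManager.Instance.EnsureInTransaction(doc)
target.LookupParameter('Offset').Set(offset * 0.5)
TransactionManager.Instance.TransactionTaskDone()

OUT = offset
```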

 

G: Rick Smith, who worked as a consultant for Frank Gehry on projects such as the Guggenheim Bilbao and the Walt Disney Concert Hall, frames the paradox of parametric modeling simply: “A designer might say I want to move and twist this wall, but you did not foresee that move and there is no parameter to accommodate the change. It then unravels your parametric model. Many times you will have to start all over again”. Smith is pointing out how parametric models used in practice are often blindsided by the very thing they purposely accommodate: change. How is this avoided in your work? (Davis, Daniel. Modelled on Software Engineering: Flexible Parametric Models in the Practice of Architecture. Melbourne, Victoria: School of Architecture and Design RMIT University, 2013.)

B: Well, I avoid it personally by not working on Frank Gehry projects haha. Another answer: I can't always avoid it, but I accept that as part of the work. Yet another answer: again, not every parameter has to be numerically encapsulated in a given definition/graph/script. I tend to think of parameters as explicitly defined (sliders, variables, spreadsheets, etc.) versus intuitively defined (sculpting polygons, editing NURBS control points, applying deformers, etc.). I deploy these parameter types relative to particular tasks. For instance, twisting or moving a wall might be best done as a "manual" modeling maneuver. If this is the case, the parameter is the referenced geometry itself rather than the numerical values and vector transformations that would explicitly define the operation. This is why Grasshopper users benefit from good NURBS modeling craft in Rhino, or why Dynamo users benefit from practicing good BIM data management in Revit.
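A small GhPython-style sketch of that split, assuming the input srf is a surface referenced from hand-modeled Rhino geometry (the "intuitive" parameter) and u_count/v_count are explicit numeric inputs; all input names are hypothetical:

```python
# Sketch of mixing "intuitive" and "explicit" parameters in a GhPython component.
# srf is referenced from geometry modeled and edited by hand in Rhino (assumed to
# be typed as Surface); only the panel counts (u_count, v_count) are explicit
# numeric parameters. Input names are hypothetical; wire them up as component inputs.
import Rhino.Geometry as rg

def grid_points(srf, u_count, v_count):
    """Sample a referenced surface into a grid of points for panelization."""
    u_dom = srf.Domain(0)
    v_dom = srf.Domain(1)
    pts = []
    for i in range(u_count + 1):
        for j in range(v_count + 1):
            u = u_dom.ParameterAt(i / float(u_count))
            v = v_dom.ParameterAt(j / float(v_count))
            pts.append(srf.PointAt(u, v))
    return pts

# a is the component's default output in GhPython.
a = grid_points(srf, u_count, v_count)
```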

 

G: Many authors (Robert Woodbury, Rick Smith, Jane & Mark Burry, etc.) have argued that to implement parametric modeling in architectural design, a new breed of architects will need to be trained. They claim that this new generation will need to be part designer, part computer scientist, and part mathematician. If it is true that parametric models will continue to become more prominent in the field of architecture, what necessary steps will need to be taken in schools and professional offices alike? (Woodbury, Robert. Elements of Parametric Design. Oxon: Routledge, 2010.)

B: Maybe. Maybe we just need better software that requires less specialization. I'm not convinced that we need to, or even realistically could, consistently churn out students with fully hybridized knowledge. I think some attempts at this, particularly within a single degree program, have given us young professionals who are half as good at two things as they could have been at one. Interdisciplinarity is important, but I favor collaboration to this end as it more accurately reflects the way practitioners work. What is critical for academia to establish, and for practice to foster, is a more computational and data-driven worldview. This way design teams can discuss potential solutions in parametric terms without the expectation of software proficiency in every individual designer. The point of an education is to establish critical views of, be conversant in, and recognize the wider ramifications of technological means and methods rather than to train students in software. Anyone can learn software.

 

G: Say, for example, you have set up a parametric model that offers complete flexibility over every element in your design. Now comes the time to make a decision: how is one solution better than another? With the ability to adjust parameters ad nauseam, when does it ever end? Should we look for the perfect solution, or, given that there are so many fit solutions, should the task become setting up a system that leads to the good solution more efficiently? (Lynn, Greg. Animate Form. New York: Princeton Architectural, 1999.)

B: I like this question because it has nothing to do with software or technology. Designers have always been faced with ambiguity over how to define success relative to a particular design solution. (In fact, my particular distaste for this sort of ambiguity is partially responsible for my specialization in design computation.) In computational terms, we can evaluate designs based on fitness parameters and pick the "best" one, although this brings up some of the problems inherent to optimization routines. For example, I might optimize for the fewest planar panels within a typically manufactured size for my facade system. The "best" solution might give me x number of panels at or under this size. However, if I look at my third or fourth best solution, I might discover an option that gives me significantly fewer panels provided I specially manufacture one or two over-sized panels, which ends up being significantly cheaper and is therefore actually a "better" solution than the "best" solution. The truth is that there are multiple successful solutions - one makes sense for manufacturing efficiency, one makes sense for the developer's pockets, another makes sense for a designer's style. There are powerful tools (modeFrontier comes to mind) that allow for the weighting of a variety of scenarios relative to a single model, but approaching this from a purely algorithmic viewpoint is a bit reductive. Consider this excerpt from Daniel Davis:

One of the problems you have with optimization is that not everything is captured by the fitness function. You see this a lot in the work in the 1960s. At that time there was a lot of interest in optimizing floor plates and lots of that work failed. They were trying to work out the optimal walking distance between rooms. The algorithm failed because it couldn’t encapsulate the entire design space. They could find the optimum layout for walking but that wasn’t necessarily important in terms of architecture, or it wasn’t the only important factor in successful architecture.

I don't mean to be entirely dismissive; the tools are advantageous in that they allow us to design multiple outcomes at once and to be fairly agile in changing values and relationships. But, by and large, decisions are still made by humans compromising in conference rooms.
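To put toy numbers on the panel example above (all quantities and costs invented purely for illustration), here is a quick Python sketch of why the option that wins on the fitness function isn't automatically the cheapest:

```python
# Toy numbers, purely illustrative: compare two panelization options by cost
# rather than by the single fitness metric (fewest panels, all within standard size).
STANDARD_PANEL_COST = 500.0    # hypothetical unit cost
OVERSIZED_PANEL_COST = 4000.0  # hypothetical custom fabrication cost

options = {
    "best by fitness (all panels within standard size)": {"standard": 1200, "oversized": 0},
    "third-best by fitness (fewer panels, two oversized)": {"standard": 1050, "oversized": 2},
}

for name, opt in options.items():
    cost = (opt["standard"] * STANDARD_PANEL_COST +
            opt["oversized"] * OVERSIZED_PANEL_COST)
    print("{}: {} standard + {} oversized panels -> ${:,.0f}".format(
        name, opt["standard"], opt["oversized"], cost))
```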

 

This conversation took place on April 5th, 2015 over email.

This post was revised on May 17th, 2015 to include citations for the texts from which the questions were derived. Our apologies for this initial oversight.


By Brian Ringley
