Programming, conflicting perspectives

April 1, 2012

Different people have different ideas about what constitutes programming. The Assembly coder will scoff at the C coder for using high-level function calls, the C coder will belittle the C++ coder for using classes, the C++ coder will deride the C# coder for not managing her own memory, and the C# coder will mock the VBScript coder simply because she can. In my experience, a lot of programmers seem to consider those who use older or more primitive languages as ascetic sadomasochists who waste time and effort doing things that could be automated, while at the same time ridiculing those who use newer or more high-level languages for toying around with ‘fake’ programming.

My own definition of programming would include only a single constraint: execution. Anybody involved in any job that ultimately creates instructions that are executed by a computer, machine or even biological entity can be said to be programming. This has some interesting, and perhaps counter-intuitive, consequences. For example, the writing of pseudo-code does not count as programming, because no machine will execute the instructions. Until, of course, someone writes a compiler for pseudo-code, at which point it will suddenly become programming.
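This distinction is easier to see with a concrete toy. The sketch below (Python, with made-up instruction names, purely for illustration) is a ten-line interpreter: the moment it exists, the ‘pseudo-code’ it accepts is executed by a machine, and writing that pseudo-code becomes programming.

```python
# A toy interpreter for a made-up three-instruction "pseudo-code".
# The instruction names (SET, ADD, PRINT) are invented for this example.
def run(program):
    memory = {}
    for line in program.splitlines():
        line = line.strip()
        if not line:
            continue
        op, *args = line.split()
        if op == "SET":      # SET x 5  -> x = 5
            memory[args[0]] = int(args[1])
        elif op == "ADD":    # ADD x y  -> x = x + y
            memory[args[0]] += memory[args[1]]
        elif op == "PRINT":  # PRINT x  -> display x
            print(memory[args[0]])
    return memory

mem = run("""
SET a 2
SET b 3
ADD a b
""")
# mem["a"] is now 5: the notation gets executed, so writing it is programming
```

The notation itself did not change; only the existence of something that executes it did.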

Every now and again I come across someone who feels that Grasshopper is not a programming language. The main reason seems to be that one does not write textual source code when using Grasshopper; rather, instructions are combined in a visual fashion, using primarily the mouse rather than the keyboard. The user ayg011 on the Grasshopper forum recently took this position with a vengeance:

“Out of experience I can tell you that the problem you have would be solved via your own skill and intelligence in a matter of seconds, rather then clumsily relying on somebody else to write some code ‘packaged’ as a component, and ultimately restricting your capacity to explore novel design solutions in an intelligent way.


Its merely a toy for students and hobbyists. If you are just starting out, invest your time more wisely and learn something which can be used over the entire life cycle of project. It’s so versatile that you essentially have unlimited potential and all operations remain fully in your control, or put in another way; you’re not restricted by the idiosyncrasies of another programmer who’s ‘components’ might be totally unsuitable for a novel idea you may want to explore therefore killling it.” [link]

His point is somewhat novel in that he doesn’t so much focus on the textual vs. visual divide, but instead on the fact that all operations in Grasshopper come in a pre-packaged form: as ready-made components. ayg011 seems to feel this limits the designer:

“True originality comes from the freedom of expression; GC allows this, where as preconfigured ‘components’ in GH have an intrinsic limitation.” [link]

To me, this signals a profound misunderstanding of what programming is. Unless you’re Charles Babbage, it is difficult to make the claim that you are truly programming from the ground up and not being limited in any way by what other programmers have done before you. I would defend the position that there is no difference —none whatsoever— between this algorithm:

public Vector3d VectorFunc(Vector3d A, Vector3d B)
{
  return A + B;
}

And this one:

[image: a Grasshopper component performing the same vector addition]
Whatever language you are programming in, you will always be initially limited by the SDK that other programmers have put together for you. If you have access to fine-grained operations (vector addition, for example) you can always extend the SDK with your own functions as you see fit. Just as you can always write a novel story using a 26-character alphabet, you can always write a novel algorithm using a limited number of basic functions. This is what’s known as universality in computational theory.
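The same argument can be made concrete in a few lines. In this sketch (Python rather than C#, and all the names are mine), only two ‘components’ are given, vector addition and scalar multiplication, yet a new operation the ‘SDK’ never shipped, linear interpolation, is composed entirely from them:

```python
# Two pre-packaged primitives; pretend these are the entire "SDK":
def add(a, b):
    return tuple(x + y for x, y in zip(a, b))

def scale(a, s):
    return tuple(x * s for x in a)

# A new operation composed purely from the primitives above,
# just as a novel story is composed from a fixed alphabet:
def lerp(a, b, t):
    """Linear interpolation between vectors a and b at parameter t."""
    return add(scale(a, 1.0 - t), scale(b, t))

midpoint = lerp((0.0, 0.0, 0.0), (2.0, 4.0, 6.0), 0.5)
# midpoint == (1.0, 2.0, 3.0)
```

Whether the primitives are C# operators or Grasshopper components makes no difference to this composition; that interchangeability is exactly the universality being described.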

ayg011 has a further objection, one with which I find myself far more aligned:

“My main concern is that the close-ended style of GH is going to trigger a generation of designers to plan their creativity around components they know exist in the software (so a tower becomes this identikit style of architecture that ‘twists’, or has a hexagonal parametric skin etc), rather then seeing it as a tool to realise whatever their imagination can think of (the beauty and strength of GC)” [link]

This is indeed a fundamental problem that faces us all. The easier it becomes to achieve something, the higher the likelihood that someone will achieve it without fully understanding it, or without having an actual need to do it. Both are deplorable outcomes. It strikes me as a poor solution, though, to complicate the software just to ensure that only those with ‘true grit’ can make headway.

This is a problem caused by easy-to-use software, but it is a problem that must be solved by better education. As an employee of a CAD firm, my job is to make it easier and easier for my users to construct the geometry they want. As a teacher at a university or college, it is your job to make sure that new tools are used sagely.

8 Responses to “Programming, conflicting perspectives”

  1. Hey David
    I always see the debate about programming in a slightly abstract way: the process is less about the means and more about the creation of your own tools, and your freedom to do so.

    Robert Aish, Autodesk:
    “Never be limited by the available tools, think beyond the tool, tools should challenge the designer. The designer should challenge the tools. Become your own tool maker.”

    I totally agree with the need for better education, but also better understanding. In my university, Grasshopper has recently started to be used by the masses: students who are uninterested in its potential to create use found definitions off the forums, believing that they are ‘scripting’ because of the interface and parameter control. What is worse, most tutors have no idea of the difference between students who designed and created and those who downloaded. At the moment it is fairly easy, from my point of view, to spot those abusing and misusing definitions, but I worry that in the future, with a growing definition library online, it will be harder to spot.

    I sometimes question whether I should ‘snitch’ on students who blatantly pass off downloaded scripts as their own while unknowing tutors bestow praise. In some cases I believe it’s a lack of education: students do not realise that downloading an entire definition for, say, your facade is the same as clicking a predefined operation in another program. Others, though, knowingly deceive tutors with claims of definition authorship.

    I suppose it’s the dark side of the open-source community. Universities need to hire programmers to call ‘bullshit’ on students who are cheating their way to a better degree.


  2. Well, this is quite a fascinating topic; I’m actually trying to do a PhD partly based on it (if things work out).

    There has been a lot of actual study in terms of what digital tools provide to the designer (geometric freedom, optimization algorithms, environmental analysis, etc.), but little study on the tool-designer relationship.

    I think the best way to describe it is as an “auto-catalytic” loop of innovation; designers are informed by and inform the tools they are using, and obviously the tools are informed by and inform the designer (and the designer’s output). The interesting part is when the tool designer and the designer become one and the same person – and this, at least in architecture, is the direction I would like to believe things are evolving (but will they actually reach there?).

    As for abuse and mis-use – that is inevitable. How do you describe the “sagely” part in David’s last sentence, except through heuristic terms? Obviously, academia or the actors judging the “output” need to be sufficiently educated themselves to evaluate it properly, and weed out superficial applications of computational tools. This will keep the system “healthy” and continuously challenged, keeping it from stagnation.

    My biggest worry (and it’s not really that big, actually) is that this auto-catalytic loop of innovation between tool and designer can stagnate or break in the “forthcoming” digital realm. For example, look at the notational paradigm introduced by Alberti – it still lasts today, though it is slowly being replaced and challenged by digital tools and techniques (mass customization, digital fabrication, etc.). It proved to be a sufficiently flexible language for architecture to allow for the emergence of everything you can imagine, from classical architecture to deconstructivism and more, though it started out just as a way of faithfully reproducing drawings with medieval techniques. I think I’ll stop rambling.

    Whether the “grasshopperian” programming paradigm will evolve in such a way to allow for such flexibility and richness – in other words, emergence – and become a true “domain specific language” for architecture and design – we’ll just have to wait (participate!) and see.

  3. heumanndesigntech Says:

    Thanks for the post David; this is a fascinating topic that is relevant to every designer using computation today. I also greatly appreciate Dimitrie’s insights about the tool-designer relationship.

    Never before in the history of design have the tools we use for conception and notation evolved so quickly. New algorithms and techniques emerge constantly, many of which have very distinct visual characteristics. When new techniques become accessible to a wide base, they are bound to manifest as trends in design projects. Voronoi is the most notorious example, about which Dimitrie has written, but there are plenty of others. (I made this image: http://bit.ly/yFZyiv to point out some examples).

    I think this also happens in software packages that are not so much about programming, like Photoshop. Whenever a new filter or effect is added to Photoshop, it has a period during which designers test it out, play with it, examine its potentials, and deploy it in real projects. The staying power of these automated effects is usually inversely proportional to the degree to which they are visually distinct or recognizable in the finished project. There was a while when every last button on the internet was shiny and blue with a fat drop shadow made with a default Photoshop layer style, and thankfully those days are over. On the other hand, no one is going to stop using Gaussian blur or photographic sharpening algorithms; they are useful algorithms that can be deployed in service of a huge variety of aims.

    It seems to me that the imagined lazy designer who just presses buttons/runs filters/connects pre-made components without being intentional or critical of the results is something of a straw man. Sure, this may happen a bit in school environments, but usually it’s more a product of inexperience and the learning process than of laziness. This is really no different from the architecture student who borrows a BIG form, or a Zaha Hadid surface treatment, or a Miesian window detail; she is learning to manipulate the materials of architecture, and rather than reinventing the wheel, she is wise to rely on somewhat “automatic” decision processes by borrowing from other designers.

    • Daniel Davis Says:

      Your drawing, Andrew, reminds me of a quote from William Mitchell: “Architects tend to draw what they can build and build what they can draw.” Mitchell was talking about physically building, but today it would apply to building in Grasshopper.

      I agree the ‘lazy designer’ is a straw man. I know how to program, but I often use Grasshopper because (for some tasks) creating a definition is significantly faster than writing code. Architecture should not be judged on how complex your definition is, or whether you scripted your components yourself, or whether you used Grasshopper instead of C++. Architecture should be evaluated as architecture.

  4. Great post, David – and not only is there no basis in computer science for ayg011’s argument that nodes in which codes/algorithms are bundled “[restrict] your capacity to explore novel design solutions in an intelligent way”; it makes absolutely no sense with regard to language generally, or music…

    Heuristics are generally dashed, if you follow this train of thought. Are all those eponymous formulas and physical laws mathematicians and “natural scientists”/physicists have used as basis for their own work for ages somehow worth less? And are algorithms – on which researchers, scientists and even programming artists rely, “clumsily” or not – suspect, or crutches? (Not to mention the large APIs/libraries bundling vital I/O, software-hardware interface and media functions used by artists/designers working with e.g. Processing and Open Frameworks – are the best in the business lazy and uninspired, then, if they use these tools in their work?)

  5. A few days ago, a teacher in my school didn’t understand why a guy was trying to code a complex geometry algorithm. He said: when you become an architect you are going to contract a mathematician to do that work; please don’t spend your time on that…

    This teacher is one of those who introduced fabbing culture at our school, but he doesn’t understand why it is important to build your own tools.

    We need a better coding culture at schools (not in quantity but in quality), and younger teachers with a real understanding of the matter.

  6. It is on the tutors and graders to be able to tell superficial models from godlike models, but any Grasshopper master can easily see the difference. An easy way to test a student is to ask him how the process was done; if he stumbles, then obviously he did not do his homework.

  7. ayg011 Says:

    It’s taken me a while to respond but hopefully I can still add a worthwhile comment to this post. I do agree with a few of your points, particularly the varying levels of programming skill needed for different styles / methods of programming languages.

    However, there are one or two points I disagree with, and where I feel you fail to fully address some of the criticisms I have made. The example you have used to argue that there is no difference between GH ‘programming’ and any other language is poor, as it conveniently ignores all of the key concepts programming offers which GH is unable to handle when using only the pre-configured ‘components’.

    There is a limit, particularly when it comes to conditional statements, loops, and efficient data structures (the list structure in GH known as ‘trees’ I find to be highly esoteric, stifling and, worst of all, a non-transferable skill [as opposed to learning about lists in C and being able to transfer this knowledge to many other programming languages]). Once this limit is reached, conventional programming methods have to be used (a C++ or Python component, etc.), which proves there is a difference and that you can’t truly call GH ‘programming’.

    At this point, for the reasons above, I stipulate that due to the convoluted methods of creating a definition, along with the plethora of different components that will undoubtedly require a lengthy and protracted training programme for new users, GH will forever remain a niche: the Betamax of the parametric software world.

    However, I do acknowledge that the way a GH definition is organised still needs the same structure and logic as the equivalent thing written in pseudo-code, and it does serve as a useful platform from which to advance to more sophisticated methods of design realisation.
