The “Source” in Open Source

The growing use of the term “Open Source” has given it an almost protean meaning, tailored to the goals and preoccupations of its users. The common denominator in all these cases is an interesting tension between literal and metaphoric uses of the term. On the one hand, Open Source denotes a set of principles and practices that were conceived within the context of software production and distribution and can only be seen accurately in relation to the history and particularities of that field. On the other hand, the same term can be (and is) used to express a broader attitude towards the design, distribution and use of immaterial and material products.
The spread of Open Source culture beyond the realm of software gives rise to an ever-growing number of translations and appropriations of the term in other fields of human activity, which use it as a conceptual tool to rethink their own practices.

In my previous post entitled “Free and open: on, in and in-between” I argued for the importance of developing critical frameworks addressing this migration of the term to a multitude of other fields. Through this argument I do not intend to reject the creative potential of this translational looseness. I believe, however, that this process can only benefit from being conscious of the pitfalls of the trans-disciplinary translation of the term “Open Source”.
When it comes to identifying the dangers of the translation and appropriation of the term, one can first point to the abstract use of “Open Source” to denote an intention of a more “democratic design” with little reference to the practices which make it possible. This approach renders the history, internal tensions and controversies of Open Source invisible, and often leads to the reductionist assumption that “democratizing” is synonymous with “open sourcing”. A second danger is the direct translation of the term to other fields without first taking the time to reflect on its translatability. Defining Open Source in other areas, especially when one leaves the realm of the immaterial for the material world, becomes a complex task and requires careful consideration of the particularities of this transfer.

In one of my previous posts I commented on the ambiguity of the closest thing one has today to an Open Source Architecture definition. Carlo Ratti’s comment that “Open source architecture draws from references as diverse as open-source culture, avant-garde architectural theory, science fiction, language theory, and others” is indicative, on the one hand, of the need to disambiguate the term so that it becomes more than a discursive medium. On the other hand, the observation is very telling of what “Open Source” signifies within the architectural imaginary.
When it comes to architecture, user control of design tools and decisions challenges fundamental assumptions about the structures of the discipline. The assertion of the design of space as an open, collaborative project, along with the vision of the unmediated expression of the individual’s needs and desires, resonates with the much broader discussion on space and power, from Foucauldian accounts of architecture as the ordering of bodies in space to phenomenological discussions of space, perception and self. The precedent of the architectural visions of the 1960s and 1970s, rich with schemes for self-planning and self-design, loads “Open Source Architecture” with a series of unrealized utopias.

In my attempt to provide potential schemes for the disambiguation of this term within architectural discourse, I previously discussed the terms “open”, “open source” and “free” within the context in which they emerged. I attempted to expose their ideological and practical disparities in order to better map the space in which other fields seek their references. In this post I will focus on the “Source (code)” part of the term.
According to the Linux Information Project,
Source code (also referred to as source or code) is the version of software as it is originally written (i.e., typed into a computer) by a human in plain text (i.e., human readable alphanumeric characters).
This definition can be broadened to include non-textual representations: “‘Source code’ is taken to mean any fully executable description of a software system. It is therefore so construed as to include machine code, very high level languages and executable graphical representations of systems” [1].

The Open Source Definition begins by declaring that for a program to be characterized as “Open Source”, its source code must not only be accessible but must also comply with certain criteria. When it comes to “Source Code”, the second item on the list of these criteria, the following requirements must be met:
The program must include source code, and must allow distribution in source code as well as compiled form […] The source code must be the preferred form in which a programmer would modify the program. Deliberately obfuscated source code is not allowed. Intermediate forms such as the output of a preprocessor or translator are not allowed.
The rationale behind these requirements is that the evolution of a program requires its modification, which in turn is contingent upon easily modifiable source code: “Since our purpose is to make evolution easy, we require that modification be made easy.”
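The force of the “preferred form for modification” clause is easy to see in a toy example (a hypothetical illustration in Python): both functions below compile and behave identically, but only the second is a plausible starting point for modification by another author.

```python
# A deliberately obfuscated version: perfectly executable,
# but hostile to reading and modification.
def f(a):
    return (lambda x: x * (x + 1) // 2)(a)


# The "preferred form for modification": the same logic, written to be read.
def triangular_number(n):
    """Return the sum of the integers from 1 to n."""
    return n * (n + 1) // 2


# Both produce the same result; only one invites re-authoring.
assert f(10) == triangular_number(10) == 55
```

From the machine’s point of view the two are interchangeable; the clause exists precisely because access to code like the first version grants formal, but not practical, freedom to modify.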

The idea of unobfuscated code is discussed here in terms of efficiency and productivity, which brings to mind Richard Stallman’s criticism that open source “misses the point” of free software, which is about offering users control over the technologies they use through the freedom to modify them according to their needs and desires. Although, as I have previously argued, these differences of principle should be taken into account when discussing Open Source, I would like to set them aside for a moment and dwell on the implicit assumptions of the “Source Code” section of the Open Source Definition.

First, an assumption contained in the term “Source Code” itself is a direct link between the source code and the outcome of its execution. In other words, the source code contains all the information necessary to produce identical copies of a product, in an unambiguous manner. Within this context, access to the source code offers full mastery and control of the product (the software) itself.
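This assumption of an unambiguous, deterministic link can be sketched (hypothetically, in Python) by compiling the same source text twice: the resulting bytecode is identical, so whoever holds the source effectively holds the product.

```python
# The source text: a complete, unambiguous specification of the program.
source = "def area(width, height):\n    return width * height\n"

# Compiling it twice yields identical bytecode: nothing unpredictable
# happens between the source and the executable product.
code_a = compile(source, "<module>", "exec")
code_b = compile(source, "<module>", "exec")
assert code_a.co_code == code_b.co_code

# Executing the compiled code reproduces the product exactly.
namespace = {}
exec(code_a, namespace)
assert namespace["area"](3, 4) == 12
```

It is exactly this determinism, as discussed below, that has no straightforward equivalent in the passage from drawing to building.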
Second, that the writer of the code should ensure that this direct link from code to product is easily legible, and that the only requirement for appropriating and re-authoring it is that the new author “speak” the language in which the program was written. This creates an entire ethic around authorship and the distribution of knowledge: extensive commenting and documentation are common practices which ensure the transparency of the source code and anticipate its future appropriation and modification by other users.
This leads to the third and perhaps most important observation, which puts the weight on the user of the code, who is also expected to modify it. If one were to introduce Richard Stallman’s vision of “freedom” into the equation, then a necessary requirement would be not just to make the code as clear as possible, but also to use a language which makes it accessible to as large an audience as possible. A growing number of community-developed projects, such as Processing and Scratch, undertake the challenge of producing accessible yet powerful development environments, functioning both as entry points to programming and as spaces for the creation, sharing and distribution of products.
The post “Design for Empowerment for Design: environments, subjects and toolkits” discussed the history of strategies and programs allowing non-experts to become their own architects: from Yona Friedman’s charmingly naive pictograms describing design and construction processes, to Frazer’s design toolkits, to Nicholas Negroponte’s Design Amplifiers, operating as self-reflexive learning machines offering users a trip to “Designland”.
A common characteristic of all these proposals was the shared assumption that a more intuitive interface, allowing users to visualize their desires, was a necessary but not sufficient condition for true unmediated design participation, which seems to be the vision of current conceptions of Open Architecture. All of these projects incorporated the idea that through this modeling users would be educated in the non-trivial task of expressing their desires as spatial decisions. Given the growing accessibility of design software, accompanied by a growing software literacy which allows users to experiment with 3D visualization software (e.g. SketchUp), revisiting these considerations becomes a very productive field of inquiry.

Returning to the main discussion, the mapping of these fundamental assumptions, contained in the original definition of “Open Source”, raises the question of whether and on what terms they can be translated into architecture.
Defining what the “Source Code” itself would be when it comes to architecture is a hard problem in its own right. Within the context of this post I will refer to the scale of the building, adopting the perhaps simplistic but rather natural hypothesis that the source code of a building is its representation: its models and drawings. This hypothesis is shared by initiatives such as the Open Architecture Network, where the free distribution and modification of computer drawings and models is a fundamental principle of operation. In my next post I will discuss the idea of Open Source code at the scale of the city.
As I mentioned above, in software there is a linear progression from the source code (information), through the compiler (mediator), to the final outcome (the software product). Within this general context of free analogizing with architecture, this scheme would translate as a progression from some encoding of building information (drawings or models), through the mediator (the contractor and the builder), to the final outcome (the building). This analogy reveals an inherent tension between its parts.
In the case of software, access to the source code guarantees access to the final product: nothing unpredictable is expected to happen during interpretation or compilation. By contrast, when it comes to the production of buildings, every step of this procedure is vested with ambiguity.

In his essay “Mapping the unmappable” [2] Stan Allen discusses the notational nature of drawings, characterizing them as “abstract machines” operating by means of transposition rather than translation. Not unlike a musical composition, the score (drawing) offers instructions on how the piece will be performed but is unable to determine the outcome, as this is always dependent on the players themselves.
The counterargument to this abstractness and notational nature, which traditionally characterized architectural representation, is that the growing digitization of architecture, both in the way it is designed and in the way it is fabricated, takes away a large part of this ambiguity.

Building Information Modeling (BIM), currently gaining ground in architectural practice, provides the opportunity to concentrate and manage building data in one parametric, hierarchical model. The model simultaneously contains information about the spatial and geometric attributes of the building and specifications about its components, including cost analysis, parts ordering and so on. At the same time, it allows all of the building’s systems (electromechanical, structural) to collapse into the same representation. This abundance of information often invites the assumption that having the BIM model is like having the building itself. This transition from notational, reductive architectural drawings to a virtual representation of the building, offering everything from assembly instructions to lifecycle-management data, seemingly removes a large part of the ambiguity and makes the vision of shareable building information, and of streamlining between design and construction, appear more realizable.

However, the assumption that more information increases the constructability of a design has not gone unchallenged. These critiques focus on builders and contractors, arguing that any human mediation in the process of materializing building information is in essence an act of interpretation and reconstruction.
One example of such discourse is Joshua Lobel’s thesis “Building Information: Means and Methods of Communication in Design and Construction” [3], completed at the MIT Department of Architecture. Through a series of field studies in the professional world and analyses of informational models, Lobel argues that the demand for effective communication between architects and contractors, which is crucial for the constructability of a design, requires a different mental model than the standards-based approaches adopted in the development and use of current computer-aided design tools.
He demonstrates that the perceived complexity of a design is a measure of the difficulty of translating design information into construction information, and that it is strongly related not to the quantity of this information but to its interpretation. Current standardized approaches to design communication, which share the intention of disambiguating information through a fixed data model, can result in wasteful repetition in design, in the loss of non-standardizable expert knowledge, and in the rigidification and denaturation of meaningful acts of communication incorporated in the design process.
Unless one imagines the deployment of full-scale 3D printers reproducing the three-dimensional specification of the building in the physical world, the process of going from building information to building cannot follow the linear fashion in which a-contextual, a-metaphoric software algebras are interpreted and compiled.
The crucial question raised here is how one can encode and share what comes after the BIM model: the builder’s solutions invented on the fly, the local conditions and building habits, the meta-design of the building by its users. When it comes to Open Source Architecture, the “sharing” and “distribution” of building information is in essence always elliptical: it remains at the level of a design solution, a notation again, however elaborate, interpreted according to the particularities of its locus of implementation.

This is not to claim that “Open Source Architecture” is a futile goal, but to point out the necessity of a different mental model when specifying how information is distributed and accessed. Having excluded the possibility of creating a reproducible form of the artifact itself, the question comes down to what the essence, the “source”, of a building is. Stan Allen, using Nelson Goodman’s distinction between autographic and allographic arts, offers a suggestive view here. Using again the analogy between architecture and music, he claims that “The guarantee of authenticity is not the contact with the original author but the internal structure of the work as set down in the score.”
This suggests an approach in which the essence of an architectural solution is abstracted from the information model or the outcome, and is traced instead in those elements which allow for its re-authoring and “performance” under different conditions. This idea, which was the basic operational mode of vernacular architecture (recipes rather than specifications for buildings), also brings to mind Christopher Alexander’s visions, which find a current implementation in the practices and goals of Peer-to-Peer (P2P) Urbanism.
In his 1977 seminal book “A Pattern Language: Towns, Buildings, Construction” [4] Alexander created an architectural language through 253 patterns which correlate problems and solutions. His objective is summarized in the sentence “at the core… is the idea that people should design for themselves their own houses, streets and communities. This idea… comes simply from the observation that most of the wonderful places of the world were not made by architects but by the people”, which has interesting conceptual affinities to Richard Stallman’s discourse.
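One might sketch how a pattern encodes a problem/solution correlation as a simple data structure (a hypothetical illustration in Python; the wording and cross-links below are paraphrases of Alexander’s pattern 159, not quotations from the book):

```python
# A hypothetical encoding of one of Alexander's patterns as data:
# a named correlation between a recurring problem and the core of a
# solution, linked to the larger and smaller patterns it works with.
# The phrasing and cross-links are approximations, not quotations.
pattern_159 = {
    "number": 159,
    "name": "Light on Two Sides of Every Room",
    "problem": "Rooms lit from only one side tend to feel gloomy and go unused.",
    "solution": "Shape each room so that daylight enters from at least two sides.",
    "larger_patterns": ["Wings of Light"],
    "smaller_patterns": ["Window Place"],
}

# Patterns compose into a language: a solution at one scale raises
# problems that are addressed by patterns at smaller scales.
for smaller in pattern_159["smaller_patterns"]:
    print(f"{pattern_159['name']} -> {smaller}")
```

The point of the sketch is structural: what is shared is not a finished building specification but a reusable link between a problem and a solution, open to re-performance in any context.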
Christopher Alexander’s pattern language, coupled with my recent interview with Nikos Salingaros, founder of P2P Urbanism, calls for extensive analysis and requires, at a minimum, a separate post. Within the context of this discussion, perhaps the most salient idea is the principle of a structural rather than ontological analysis of an architectural solution as a reinterpretation of what a useful architectural “source code” could be. In my next post I will take a step back and examine the notion of code at the scale of the city, where the idea of Open Source acquires much broader social and political connotations.


[1] Harman, M. 2010. “Why Source Code Analysis and Manipulation Will Always Be Important”. In Proceedings of the 10th IEEE Working Conference on Source Code Analysis and Manipulation (SCAM).
[2] Allen, S. 2000. “Mapping the Unmappable: On Notation”. In Practice: Architecture, Technique and Representation. London: Routledge.
[3] Lobel, J. M. 2008. “Building Information: Means and Methods of Communication in Design and Construction”. SMArchS thesis, MIT Department of Architecture.
[4] Alexander, C., Ishikawa, S., and Silverstein, M., with Jacobson, M., Fiksdahl-King, I., and Angel, S. 1977. A Pattern Language: Towns, Buildings, Construction. New York: Oxford University Press.



  1. Andrew Stone

    It is interesting to see FOSS philosophy applied to architecture. Great post! Wrt the BIM concept I wonder if it fails due to over-specification. Instead try to reconceive it as a minimally specified design. That is, spec exactly and only what is needed and also specify what is left to the builder’s interpretation. Also the BIM should never be complete. It should capture all dialogs between builder and architect every time the work is realized. And also capture interpretation notes by the builder. All become authors. This collaboration is the fundamental power of open source!

  2. Theodora

    Thanks for your comment Andrew! To follow up, I believe that when it comes to thinking about the principles of open source in architecture a question worth asking is “what do we share”? Do we share a specification of the architectural solution itself, in the best way we can capture it, or do we extract and abstract the links between goals, techniques and design choices so as to develop some sort of collective design intelligence? I think that although having a digital model which you can start from and modify is necessary, this model might not be of much use unless the second (development of design intelligence) comes into play.

    • Andrew Stone

      I think we are saying the same thing; only you are saying it more cleverly :-). The FOSS answer to “what do we share?” is simply to share everything. Let the community decide what’s the most important via tools like search. And I think that’s what you are saying too with words like “extract the links between goals techniques and design choices”. Maybe back in the 80s “Open Source” meant only that a finished document is freely available. But today it means that the document history, design tradeoffs, dialogs between major contributors, and all problems discovered are available (generally via community sites, mailing lists and revision control systems). And also this same data is there for remixes of parts of the document into other projects — which is essentially an extraction of (using some of your words) “a link between a goal and technique” and its insertion into another project. As a SW engineer, I think that someone ought to be able to seamlessly incorporate this extra material into the (mostly graphical) architectural specification through some kind of web integration with popular CAD programs.

      I would guess that the biggest problem with open architecture is implementation cost. I mean even if the St. Louis arch was open architecture I doubt anyone would build another one. So open architecture is unlikely to be valuable for our most celebrated “monumental” architecture (except for cost reduction of hidden stuff like wiring, ductwork, bathrooms). However, for more mundane work, like home design, it could be transformative. Both because there will be enough implementations to have iterative improvement and because it could allow more freeform customization (especially with new CNC technologies emerging) of mass produced elements, giving a custom home at a price point nearer to that of a modular.

  3. Theodora Vardouli

    These are great points. I was very intrigued by your idea of a “narrative” CAD tool! What it would look like really lets one’s imagination run! Regarding monumental VS “everyday” architecture, you are right that “opening” the latter seems much more plausible. It is interesting that housing has been so far the focus of most theories for user empowerment in design. It is personal, has a manageable scale and it is omnipresent.
    I think the argument takes an interesting turn when one uses “open” not to discuss so much the remixing of a project but also its original design. For example, how does one conceive of platforms that allow multiple users to participate in the design of a project so that the “final” outcome is a result of their collective design decisions, a multi-authored artifact? I think there is a lot to learn from open source communities in this direction as well.
