To begin the project, we created a thematic image as a way to start exploring our initial concepts, the methods of visual representation best suited to them, and the techniques that could be used.
My project will explore how architecture can be responsive or interactive through the use of algorithms. The algorithms may drive the architecture at every scale, from the largest (i.e. the entire structure) down to smaller interactive spaces or responsive facades.
The thematic image imagines a scene of a responsive city in a post-technological singularity world, where each building has modular, reconfigurable components that may rearrange themselves based on the shifting needs of the community. The buildings incorporate facades that display changing images that would appeal to the individual user, creating an experience of the city that is highly personal. In the background we see a visual representation of the flow of information across the city that is collected by the buildings and inhabitants and then fed back into the wider system, enabling it to constantly transform and improve itself.
The base model was created in 3ds Max using the Greeble plugin [1], which takes a base form and procedurally generates complexity, controlled by parameters. This seemed an appropriate technique, as an algorithm in the imagined city might work in a very similar way – the creator programs certain parameters, and the algorithm then shifts the building into the best configuration for the moment, having free rein within those parameters.
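To make the parameter-driven idea concrete, the logic of greeble-style generation can be sketched in a few lines. This is my own illustrative toy, not the plugin's actual code: a facade rectangle is recursively split into panels, each leaf panel receives a random extrusion height, and the designer-set parameters (minimum panel size, split probability, height range) bound what the algorithm is free to do.

```python
import random

def greeble(x, y, w, h, min_size=2.0, split_chance=0.7,
            height_range=(0.5, 3.0), rng=None):
    """Recursively subdivide a rectangular face into panels, giving each
    leaf panel a random extrusion height -- a toy greeble generator."""
    rng = rng or random.Random()
    # Stop when neither axis can fit two minimum-size panels, or by chance.
    if max(w, h) < 2 * min_size or rng.random() > split_chance:
        return [(x, y, w, h, rng.uniform(*height_range))]
    # Otherwise split the longer axis at a random point respecting min_size.
    if w >= h:
        cut = rng.uniform(min_size, w - min_size)
        return (greeble(x, y, cut, h, min_size, split_chance, height_range, rng) +
                greeble(x + cut, y, w - cut, h, min_size, split_chance, height_range, rng))
    cut = rng.uniform(min_size, h - min_size)
    return (greeble(x, y, w, cut, min_size, split_chance, height_range, rng) +
            greeble(x, y + cut, w, h - cut, min_size, split_chance, height_range, rng))

# A 20 x 10 facade: same parameters, different seed -> different configuration.
panels = greeble(0, 0, 20, 10, rng=random.Random(42))
```

Re-running with a different seed reconfigures the facade while every panel still honours the fixed parameters – a miniature version of an algorithm having free rein within designer-set limits.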
As I was making the image, one question that arose was how the responsive screens might be used. My immediate thought was advertisements, so I placed a combination of classic adverts by well-known corporations amongst the other abstract dynamic images that my imagined inhabitant would enjoy seeing in his personal version of the city. However, after tutorial feedback questioning the somewhat sinister nature of this, and asking whether the screens could not instead encourage positive behavioural change (such as education), I realised that the question of what the screens will display is ultimately answered by deciding who controls the algorithm. In the imagined post-singularity timeline, AI will be intelligent enough to know what to display to each user to improve their quality of life, and there is no reason to think that a future society will be similar enough to today's to allow corporations to buy these spaces (or indeed that consumerism will even be encouraged).
The idea of behavioural conditioning was a key point of discussion in the tutorial feedback. One reference was the work of Edward Bernays, the ‘father of public relations’, who applied propaganda techniques to advertising in order to shape the opinions and buying habits of the general public. The impact of this was vast; at the worst extreme, his theories were used by Joseph Goebbels, the Third Reich's minister of propaganda. In an improved world, behavioural conditioning would be used only for the benefit of the individual and of society overall, although manipulation of any kind remains somewhat distasteful – the belief that we are entirely free to make our own choices is generally held in high regard.
However, others believe that our choices are already not free, and that it is only the narrating self that justifies them as such. In his speculative work ‘Homo Deus: A Brief History of Tomorrow’, Yuval Harari explores this idea with reference to studies on patients with split-brain syndrome, in which the connection between the two hemispheres of the brain is severed. In such patients each hemisphere has its own perceptions and desires, and because the left and right visual fields are processed by opposite sides of the brain, experimenters can communicate with each hemisphere individually. In certain experiments the non-verbal right hemisphere was prompted and responded – for example, one patient was shown something funny – but when asked to explain the response, such as why they laughed, the verbal left-hemisphere narrator (which acts to interpret the world) did not know the real reason and would often invent a rational-sounding justification for the behaviour. With these examples (amongst others, such as studies showing that when a person moves their arm, the necessary brain activity begins before the person feels any desire or intention to move, even though they feel they moved simply because their conscious self wanted to), Harari argues that our internal narrator often creates false justifications for our behaviours, including the justification that we acted out of our own free choice. The implication for the project (if it were to deal with this issue) is which part of us the algorithms should respond to: the narrating self, with its potentially false idea of our choices, or the deeper subconscious level of the brain from which desire and behaviour may actually originate?
In terms of the visual representation, the feedback was that instead of a single-person perspective, it would be more interesting to see how the spaces work with multiple users, and how different elements might be layered in such a situation. The personal response of the displays could sit within an augmented-reality layer rather than being physical. There is also the question of how the system will look from an external perspective – I will next describe the spaces from other views, such as the axonometric. I also need to create diagrams and infographics to explain the system as well as the techniques used.
Further references to look into:
– Discognition by Steven Shaviro
– The work of Donna Haraway
– Walden Two by B. F. Skinner
– Anti-Oedipus: Capitalism and Schizophrenia by Gilles Deleuze and Felix Guattari
1. Greeble plugin created by Tom Hudson – found at http://www.max.klanky.com