
Complexity Tests

[Gallery: complexity test images 1–3]

Testing the Greeble plugin with sets of overlaid cubes. The base cube was removed, leaving only the 'widgets' (smaller add-on forms); the sizes and parameters of these widgets were then adjusted to create the overall effect.
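The plugin's output is driven by a handful of parameters, so these tests are essentially parameter sweeps. As a way of unpacking the logic, below is a minimal sketch in plain Python – not the 3ds Max plugin itself, and the parameter names are my own – of how greeble-style widgets might be generated on a subdivided face:

```python
# A minimal sketch of greeble-style widget generation (not Tom Hudson's
# plugin): subdivide a flat face into panels and raise a box "widget" of
# random height on some of them. Panel count, density, and the size/height
# ranges stand in for the parameters adjusted in the tests above.
import random

def greeble_widgets(width, depth, panels, min_h, max_h, density, seed=0):
    """Return a list of (x, y, w, d, h) box widgets covering a flat face."""
    rng = random.Random(seed)
    px, py = width / panels, depth / panels   # panel dimensions
    widgets = []
    for i in range(panels):
        for j in range(panels):
            if rng.random() > density:        # skip some panels entirely
                continue
            shrink = rng.uniform(0.5, 0.95)   # widget footprint within its panel
            w, d = px * shrink, py * shrink
            x = i * px + (px - w) / 2         # centre the widget on its panel
            y = j * py + (py - d) / 2
            h = rng.uniform(min_h, max_h)
            widgets.append((x, y, w, d, h))
    return widgets

# Three settings, loosely mirroring the increasing complexity of the tests:
for panels, density in [(4, 0.6), (8, 0.75), (16, 0.9)]:
    ws = greeble_widgets(10.0, 10.0, panels, 0.2, 1.5, density)
    print(panels, "panels per side ->", len(ws), "widgets")
```

Raising the panel count and density roughly corresponds to moving from the first test image to the third.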


Thematic Drawing & Feedback

To begin the project, we created a thematic image as a way to start exploring our initial concepts, the methods of visual representation that would best fit those concepts, and the techniques that could be used.

[Image: Thematic Drawing – Full Image]

My project will explore how architecture can be responsive or interactive based on the use of algorithms. The algorithms may drive the architecture from the largest scale, i.e. the entire structure, down to the scale of smaller interactive spaces or responsive facades.

The thematic image imagines a scene of a responsive city in a post-technological singularity world, where each building has modular, reconfigurable components that may rearrange themselves based on the shifting needs of the community. The buildings incorporate facades that display changing images that would appeal to the individual user, creating an experience of the city that is highly personal. In the background we see a visual representation of the flow of information across the city that is collected by the buildings and inhabitants and then fed back into the wider system, enabling it to constantly transform and improve itself.

The base model was created in 3ds Max using the Greeble plugin [1], which takes a base form and procedurally generates complexity, controlled by parameters. This seemed an appropriate technique: in the imagined city an algorithm might work in much the same way, with the creator programming certain parameters and the algorithm then shifting the building into the best configuration for the moment, having free rein within those parameters.

As I was making the image, one question that arose was how the responsive screens might be used. My immediate thought was advertisements, so I placed a combination of classic adverts by well-known corporations amongst the other abstract dynamic images that my imagined inhabitant would enjoy seeing in their personal version of the city. However, the tutorial feedback questioned the somewhat sinister nature of this, and asked whether the screens could instead encourage positive behavioural change (such as education). I realised that, ultimately, what the screens display will be decided by whoever controls the algorithm. In the imagined post-singularity timeline, AI will be intelligent enough to know what to display to each user to improve their quality of life, and there is no reason to assume that a future society will be similar enough to today's to allow corporations to buy these spaces (or indeed that consumerism will even be encouraged).

The idea of behavioural conditioning was a key point of discussion in the tutorial feedback. One reference was the work of Edward Bernays, the 'father of public relations', who applied propaganda techniques to advertising to shape the opinions and buying habits of the general public. The impact of this was vast; at the worst extreme, his theories were used by Joseph Goebbels, the minister of propaganda for the Third Reich. In an improved world, behavioural conditioning would only be used for the benefit of the individual and of society as a whole, although manipulation of any kind remains somewhat distasteful: the belief that we are entirely free to make our own choices is highly valued.

However, others believe that our choices are already not free, and that it is only the narrating self that justifies them as such. In his speculative work 'Homo Deus: A Brief History of Tomorrow', Yuval Harari explores this idea with reference to studies on patients with split-brain syndrome, in which the connection between the two hemispheres of the brain is severed. In such patients, each hemisphere has different perceptions and desires, and because the left and right visual fields are processed by different sides of the brain, experimenters can communicate with each side individually. In certain experiments the non-verbal right hemisphere was prompted and responded – for example, one patient was shown something funny – but when asked to explain the response, such as why they laughed, the verbal left-hemisphere narrator (which acts to interpret the world) did not know the real reason, and would often invent a rational-sounding justification for the behaviour. Alongside other examples – such as studies showing brain activity occurring before a person registers desire, i.e. when a person moves their arm, the brain begins the necessary activity before the person feels any intention to move, although they feel that they moved simply because their conscious self wanted to – Harari argues that our internal narrator often creates false justifications for our behaviours, including the justification that we behaved a certain way out of our own free choice. The implication for the project (if it were to deal with this issue) is which part of us the algorithms should respond to: the narrating self and its potentially false idea of our choices, or the deeper subconscious level of the brain from which desire and behaviour may actually originate?

In terms of the visual representation, the feedback was that instead of a single-person perspective, it would be more interesting to see how the spaces work with multiple users, and how different elements might be layered in such a situation. The personal response of the displays could also sit within an augmented reality rather than the physical fabric. There is also the question of how this will look from an external perspective – I will next describe the spaces from other views, such as the axonometric, and create diagrams and infographics to explain both the system and the techniques used.

Further references to look into:

– Discognition by Steven Shaviro
– The work of Donna Haraway
– Walden Two by B. F. Skinner
– Anti-Oedipus: Capitalism and Schizophrenia by Gilles Deleuze and Felix Guattari

Notes

1. Greeble plugin created by Tom Hudson – available at http://www.max.klanky.com

Palace Facade Development

To incorporate elements of the earlier folly work into the building design, I created a facade using similar techniques: exporting a model from fractal modelling software and then manipulating it to create facade louvres.

[Image: precedent from http://qastic.blogspot.co.uk/]

The above precedent inspired me to turn the fractal model into louvres that would create an undulating effect across the facade, letting different amounts of light into the spaces where necessary.

PROCESS:

1. Initial fractal model, exported using voxel slices and then rebuilt as an .obj using Rhino.

[Image: Step 1]

2. Splitting the model into contours (a code sketch of this step follows the process images). At this stage the model file required a lot of cleaning up to remove pieces that were unattached to the whole.

[Image: Step 2]

3. After cleaning up the file, the model was ready to be split into smaller elements to create louvres.

[Image: Step 3]

4. The final louvre design was then placed onto the building facade.

[Image: final louvre design applied to the building facade]
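For reference, here is a rough RhinoPython sketch of the contouring in step 2, assuming Rhino's bundled rhinoscriptsyntax and RhinoCommon APIs. The slicing interval and the minimum-length threshold for discarding fragments are illustrative values, not the exact settings I used:

```python
# A sketch of slicing the rebuilt fractal mesh into horizontal section
# curves, which can then be developed into louvres. Runs inside Rhino's
# Python editor; interval and direction are the main parameters to vary.
import rhinoscriptsyntax as rs
import scriptcontext as sc
import Rhino

def contour_mesh(interval=0.5, min_length=0.1):
    mesh_id = rs.GetObject("Select fractal mesh", rs.filter.mesh)
    if not mesh_id:
        return
    mesh = rs.coercemesh(mesh_id)
    bbox = mesh.GetBoundingBox(True)
    # Contour from the bottom to the top of the bounding box.
    start = Rhino.Geometry.Point3d(bbox.Min.X, bbox.Min.Y, bbox.Min.Z)
    end = Rhino.Geometry.Point3d(bbox.Min.X, bbox.Min.Y, bbox.Max.Z)
    curves = Rhino.Geometry.Mesh.CreateContourCurves(mesh, start, end, interval)
    for crv in curves:
        # Discard tiny fragments left by unattached mesh pieces – the same
        # clean-up problem described in step 2.
        if crv.GetLength() > min_length:
            sc.doc.Objects.AddCurve(crv)
    sc.doc.Views.Redraw()

contour_mesh()
```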

Fractal Mesh

[Images: exported fractal mesh – perspective and top views]

I was finally able to export a test segment of my fractal 3D model from Mandelbulb to Rhino. This was done by exporting the model as a sequence of voxel slices, linking the slice sequence together in Fiji (ImageJ), and then creating an .obj file.
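As a rough stand-in for this pipeline (I used Fiji and Mandelbulb's own slice export, not this script), the sketch below shows the same idea in Python: stack the slice images into a volume with numpy, extract an isosurface with scikit-image's marching cubes, and write out an .obj for Rhino. The file paths are placeholders:

```python
# Rebuild a mesh from a stack of voxel slice images, then export as .obj.
import glob
import numpy as np
from skimage import io, measure

# Stack the slice images (one greyscale image per Z level) into a 3D volume.
slices = [io.imread(f, as_gray=True) for f in sorted(glob.glob("slices/*.png"))]
volume = np.stack(slices, axis=0)

# Extract the surface where voxel intensity crosses the chosen threshold.
verts, faces, normals, values = measure.marching_cubes(volume, level=0.5)

# Write a minimal .obj (vertex indices are 1-based) for import into Rhino.
with open("fractal_segment.obj", "w") as f:
    for v in verts:
        f.write("v {} {} {}\n".format(v[0], v[1], v[2]))
    for tri in faces:
        f.write("f {} {} {}\n".format(tri[0] + 1, tri[1] + 1, tri[2] + 1))
```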

Next steps: choose and export more components from the fractal model, and use these to create the folly as it will sit on the site.

Photogrammetry Test

It was discussed after the last tutorial that one way of achieving 3D models of the organic shapes from my collage would be to use photogrammetry, i.e. using photographs to create a 3D model.

To learn the process, I took a series of photographs around a plant (chosen as it was sufficiently detailed and had organic shapes). I then uploaded the photos to Autodesk ReCap 360 to create a point cloud model of the object.

[Image: point cloud model of the plant]

Once rendered, the model was missing some parts because some of the photos had not been stitched together, and the process ran into other issues as well. Although I photographed the object on a uniform background, changes in light as I moved around it meant that in some of the photos the software did not recognise the background as the same. I also discovered that it is best to place a marker before taking the photographs, to allow the software to stitch the images together more accurately. After this test I learnt how to stitch images together manually for the program to add to the model. Despite the process not working as expected, I do quite like the final model, which looks rather surreal.
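ReCap 360 does the stitching as a cloud service, so the sketch below is not its internal method, only an illustration of the underlying principle with OpenCV: photogrammetry depends on matching shared features between overlapping photos, and shifting light reduces the number of reliable matches (which is also why a fixed marker helps). The filenames and the match-distance threshold are placeholders:

```python
# Illustrative feature matching between two overlapping photos.
import cv2

img1 = cv2.imread("plant_01.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("plant_02.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=2000)          # detect and describe keypoints
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

# Few strong matches suggests the pair would fail to stitch; a placed
# marker adds reliable features that raise this count.
strong = [m for m in matches if m.distance < 40]
print("{} strong matches out of {}".format(len(strong), len(matches)))
```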


Rhino Tutorial

28.09.16

In the first tutorial we were given a crash course in Rhino, quickly covering the basics and then learning a more advanced technique for adding a pattern onto an organic shape. Although this was only my third time using Rhino, I found it fairly simple, as it shares a number of similarities with other software such as AutoCAD.

We placed the pattern onto the surface using the 'Unroll surface' and 'Flow to surface' commands, then used the Multiview Capture plugin to export images with transparent backgrounds, although there were some issues achieving full transparency. We also used Grasshopper to apply the pattern to a more complex, irregular curved shape.
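As a side note on how 'Flow to surface' style mapping works in principle (this sketch is not the Rhino command itself, and the grid is just a stand-in for the flat pattern), geometry laid out in the surface's UV space can be re-evaluated point by point onto the curved surface using RhinoPython:

```python
# Map flat pattern points from normalised (u, v) space onto a target surface.
import rhinoscriptsyntax as rs

def map_points_to_surface(surface_id, uv_points):
    """Map (u, v) points in [0,1]^2 onto the given surface."""
    udom = rs.SurfaceDomain(surface_id, 0)
    vdom = rs.SurfaceDomain(surface_id, 1)
    mapped = []
    for u, v in uv_points:
        # Re-evaluate the flat coordinate on the curved surface.
        pt = rs.EvaluateSurface(surface_id,
                                udom[0] + u * (udom[1] - udom[0]),
                                vdom[0] + v * (vdom[1] - vdom[0]))
        mapped.append(rs.AddPoint(pt))
    return mapped

srf = rs.GetObject("Select target surface", rs.filter.surface)
if srf:
    # A simple grid standing in for the flat pattern geometry.
    grid = [(i / 10.0, j / 10.0) for i in range(11) for j in range(11)]
    map_points_to_surface(srf, grid)
```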

[Image: as exported using Multiview Capture]

[Images: renders of the patterned model]

After the pattern had been created, we used other tools such as bend and twist to manipulate the model.

[Images: baked Grasshopper renders]

The Grasshopper plugin enabled us to add the pattern to a more complex geometry.