The above animation takes us through the 3D fractal space. This was created with Mandelbulb 3D and Premiere Pro.
I ran into some difficulties animating with Mandelbulb. It’s hard to tell from a quick render preview what effect changes to the formulas will have, and the changes ended up looking very abrupt in the animation. It’s also not ideal that the output is a series of rendered images that then have to be assembled into the animation in separate software. I am also looking for a level of control over the output that I am finding tricky to achieve in Mandelbulb. My next step will be to try exporting the base 3D model to different software to work with it further.
The feedback and discussion from this week’s tutorial was as follows:
Issues of scale remain – these need to be resolved in order to create the folly
The folly looks parasitic, like ivy. Could it cling onto other parts of the site or should it be freestanding?
Animation of the system – how does it move and reconfigure itself? Could it respond, via sensors, to people as they move around the space, so that the experience of the folly changes each time? The reconfiguration movement of the geometry could be based on a precedent from nature
Simplify the geometries and choose components to bring into the next development model
Contrast between interior and exterior?
Elevation from the ground plane
Roger Penrose drawings
Star Trek: Beyond
Philip Beesley – Protocell Mesh
Henri Labrouste – Bibliotheque Nationale (mixed qualities of nature and the mechanical)
Charles Stross – Accelerando (technological singularity, machines that build themselves and have a desire to consume materials)
Using Mandelbulb 3D I was able to create the kind of organic and complex shapes I was aiming towards in my earlier work.
I particularly like the first image as, to me, it suggests movement, and the effect is of something both natural and mechanical. This contrast is something I’d like to accentuate through animation and the type of movement of the geometry.
My intention was to use photogrammetry to create 3D models of actual parts of human anatomy, through visiting anatomy museums. However, it will take several weeks to get the research passes needed. Unfortunately the best anatomy museums are open to medical professionals only, so I may have to rethink my modelling strategy! However, I was able to visit the Wellcome Collection, which did provide some inspiration.
Clockwise from left: ‘Body Slice’ from the Institute of Plastination – a section of the human body preserved by plastination, a process in which water in the body is replaced with plastic; ‘Sense’ by Annie Cattrell – sculpture created from MRI images of brain activity when one of the five senses is activated; an early prosthetic arm.
One thought-provoking item in the collection was the skull of a human who had undergone trepanation. Although trepanation has mainly been used in medical practice, there also exists an idea, put forward by Dr Bart Hughes in 1962, that a person’s state of consciousness is related to the volume of blood in the brain. He proposed that this could be increased through trepanation, leading to a higher state of consciousness as experienced by young children before their skull is fully sealed. Of course, no evidence that this is actually the case has ever been found.
It was discussed after the last tutorial that one way of achieving 3D models of the organic shapes from my collage would be to use photogrammetry, i.e. using photographs to create a 3D model.
To learn the process, I took a series of photographs around a plant (chosen as it was sufficiently detailed and had organic shapes). I then uploaded the photos to Autodesk ReCap 360 to create a point cloud model of the object.
The rendered model was missing some parts because some of the photos could not be stitched together, and the process ran into a few other issues as well. Although I photographed the object against a uniform background, changes in light as I moved around it meant that in some of the photos the software did not recognise the background as the same. I also discovered that it is best to place a marker before taking the photographs, so that the software can stitch the images together more accurately. After this test I learnt how to stitch images together manually for the program to add to the model. Despite the process not working as expected, I do quite like the final model, which looks rather surreal.