HSS8121 Ice Age

To create a video for the show I decided to sprinkle fake snow on the model, and I added giant ice cubes to the work to represent giant hailstorms.

I wanted the models to represent the different buildings around the Star and Shadow, so that the area was easier for the audience to recognise.

I captured the footage on a Panasonic FZ2000 camera and put the video together in Adobe Premiere Pro. I added sound effects of wind and hail, and changed the colour filter of the video to make the work feel colder, as if the model were being hit by a severe snowstorm.

This is the video I presented during the installation night at The Late Shows – https://vimeo.com/340643160

HSS8121 Installation Piece

Our installation was to be held in the meeting room at Star and Shadow. The meeting room was a good area to install our work, as there were plenty of tables to choose from and the room itself was quite spacious, meaning that when it came to the opening evening there would be lots of room for the audience to move in.

However, we did not need all of the tables and chairs, so my team and I moved the spare furniture out of the room and kept only the tables we wanted to show the installation on.

We also created a map of roughly where we wanted our installation to be displayed and how we wanted the audience to move around the work.

Corridor leading to installation piece
Video installations
Model installation on table – set out in the same way as on the videos

Our final set-up used arrows and posters to guide visitors to where we were situated. Once members of the public entered the room they first saw the model displayed on the table, covered in some of the materials used for the themes featured in the work, such as fake snow. Audience members were able to walk around the model and then move to either end of the room to see what was happening. At one end of the room were two tables, with two projector screens displayed on the wall. Here two videos played for half an hour at a time, with one video at a time being interactive. Each video contained sound and covered four different themes.

For instance, the video on global warming showed the model on the table being subjected to flooding. Audience members were greeted with a question on a screen and had to select an answer by pressing a button. Depending on their answer, the video showed the water levels rising or falling.
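The button-to-video branching described above can be sketched very simply. This is a minimal illustration of the logic, not our actual show code, and the clip names are hypothetical:

```python
# Minimal sketch of the question/answer branching (clip names are hypothetical).
def choose_clip(answer):
    """Map a button press to the flood clip that should play next."""
    clips = {
        "yes": "water_rising.mp4",   # the answer that worsens the flood
        "no": "water_falling.mp4",   # the answer that eases the flood
    }
    return clips.get(answer, "idle_loop.mp4")  # fall back to an idle loop

print(choose_clip("yes"))  # water_rising.mp4
```

In the installation the "answer" came from Arduino buttons rather than a function argument, but the mapping idea is the same.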

On the other side of the room was a set of whiteboards where the audience could write down their thoughts relating to climate change after watching the videos.

We also made several different versions of posters and promoted them through a Facebook event in the days before the show.

Facebook Event

This worked well for audience interaction within our installation piece: the space was used wisely, so members of the public were able to move around freely while looking at the work, getting involved in deciding how the videos should play out, and writing their feedback as well.

DMS8013 Project Item 3

For this project I wanted to explore image generators through the use of photo app filters. Filters in apps such as Snapchat have become popular in recent years, and other apps such as Facebook and Instagram have also grown in popularity through their use of filters. I wanted to explore these filters and apply the resulting images to a generator.

I created two different versions of the work, changing the appearance of the images through an image generator called Deep Dream Generator and through facial recognition code in Python.

For the Deep Dream Generator I took photos I had made using the photo filter apps and put them through the generator to see what effects were applied.

Photo filter using social media apps such as Snapchat
Using the same image and applying it to the Deep Dream Generator

I also used facial recognition code in Python. For this I took the filtered images and put them through different facial recognition scripts, such as digital makeup, which tries to recognise where the facial features are on the face and applies makeup to them.

Image using photo filter app such as Snapchat
Same image, but with a digital makeup script applied to it, using facial recognition to pinpoint certain features on the face and apply makeup to them.
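The digital_makeup.py example works by asking the face_recognition library for named landmark polygons and drawing colour over some of them. As a rough illustration of that selection step, here is a pure-Python sketch; the landmark points are hard-coded stand-ins for what the library would return, not real data:

```python
# Sketch of the makeup-overlay idea: given facial landmarks (hard-coded
# stand-ins here for what face_recognition.face_landmarks() returns),
# pick out the polygons of points we would tint with "makeup".
def makeup_regions(landmarks):
    """Return {feature: list of (x, y) points} for the features we tint."""
    wanted = ("top_lip", "bottom_lip", "left_eyebrow", "right_eyebrow")
    return {name: pts for name, pts in landmarks.items() if name in wanted}

# Hypothetical landmark data for a single face.
face = {
    "top_lip": [(110, 180), (130, 175), (150, 180)],
    "bottom_lip": [(110, 185), (130, 195), (150, 185)],
    "nose_tip": [(128, 160), (132, 160)],
    "left_eyebrow": [(95, 120), (105, 115), (115, 118)],
}

regions = makeup_regions(face)  # lips and eyebrow kept, nose ignored
```

The real script then uses PIL to draw these polygons over the image; this sketch only shows which features get selected.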

These are the terminal commands used to generate the new image through Python. The failed attempts show that the script has to be launched with python3, not run directly (running the file directly hands it to the shell, hence the "import: command not found" errors):

hsacs-097039:~ student$ cd /Users/student/Desktop/face_recognition-master/examples
hsacs-097039:examples student$ python3 digital_makeup.py
hsacs-097039:examples student$ /Users/student/Desktop/face_recognition-master/examples/digital_makeup.py
from: can't read /var/mail/PIL
/Users/student/Desktop/face_recognition-master/examples/digital_makeup.py: line 4: import: command not found
/Users/student/Desktop/face_recognition-master/examples/digital_makeup.py: line 7: syntax error near unexpected token `('
/Users/student/Desktop/face_recognition-master/examples/digital_makeup.py: line 7: `image = face_recognition.load_image_file('IMG_1438.JPG')'
hsacs-097039:examples student$ python digital_makeup.py
Traceback (most recent call last):
  File "digital_makeup.py", line 4, in <module>
import face_recognition

DMS8013 Project Item 2

For this item I decided to create a series of boxes with a laser cutter. With these boxes I wanted to show household/everyday items that people take for granted by placing them in boxes you cannot access. I did this as a way of showing affordance and false affordance within diegetic prototyping.

Hologram of bear

The idea was for the hologram to appear in the centre of the box, and for there to be multiple boxes showing different versions of this; however, I only had one screen to project them onto.

I also included an ultrasonic sensor in one of the boxes to explore what happens when movement is restricted and placed inside a box. Here I monitored how high and low the readings went when an object or person walked past the box.

Ultrasonic sensor to control movement

I used this code for the sensor:

// defines pins numbers

const int trigPin = 9;

const int echoPin = 10;

// defines variables

long duration;

int distance;

void setup() {

pinMode(trigPin, OUTPUT); // Sets the trigPin as an Output

pinMode(echoPin, INPUT); // Sets the echoPin as an Input

Serial.begin(9600); // Starts the serial communication

}

void loop() {

// Clears the trigPin

digitalWrite(trigPin, LOW);

delayMicroseconds(2);

// Sets the trigPin on HIGH state for 10 micro seconds

digitalWrite(trigPin, HIGH);

delayMicroseconds(10);

digitalWrite(trigPin, LOW);

// Reads the echoPin, returns the sound wave travel time in microseconds

duration = pulseIn(echoPin, HIGH);

// Calculating the distance

distance= duration*0.034/2;

// Prints the distance on the Serial Monitor

Serial.print("Distance: ");

Serial.println(distance);

}
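The distance line in the sketch converts the echo pulse width to centimetres: sound travels at roughly 0.034 cm per microsecond, and the pulse covers the round trip out and back, so the result is halved. A quick Python check of that arithmetic:

```python
# The Arduino line `distance = duration*0.034/2` converts the echo pulse
# width (microseconds) to centimetres: sound travels ~0.034 cm/us, and
# the pulse measures the round trip, so we halve the product.
def echo_to_cm(duration_us):
    return duration_us * 0.034 / 2

print(echo_to_cm(1000))  # a 1000 us round trip -> 17.0 cm
```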

This project demonstrates the idea of affordance: people see an object and know what to do with it, but when that object is placed in a box that no one can touch, it can only be looked at, not used. It also shows a sense of false affordance, as the objects are not actually there and are instead computer-generated holograms.

To improve upon this I would consider adding a sensor so that when people walk past, sounds such as birdsong and traffic start to play, as another form of affordance within prototyping.

DMS8013 Project item 1

As part of this module I had to produce three separate small projects. For this first project I decided to look into different methods of communication, such as Morse code, and at ways I could implement Morse code through modern coding platforms such as Arduino, and what I could create from it.

I decided to look at the game battleships and combine it with Morse code. I built the game out of Lego to show creativity and to give the audience a way to engage with the piece more.

Lego battleships

The game is connected by two Arduino boards, one at each end, with one per opponent. At each end of the board is a light which blinks when wires touch the board. The light only blinks when the wires touch the black conductive ink on the ships. When the light blinks it sends a set of coordinates in Morse code for the opponent to guess where the ships are. The rules are otherwise the same: each player has to sink the other player's ships first in order to win.
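The coordinate-to-Morse idea can be sketched like this. This is an illustration rather than the game's actual code (which blinked an LED from the Arduino), and the lookup table is abbreviated; a full version would cover the whole grid:

```python
# Sketch of how a grid coordinate like "B5" could be sent as Morse code.
# Abbreviated table: a full game would cover A-J and 1-10.
MORSE = {
    "A": ".-", "B": "-...", "C": "-.-.", "D": "-..", "E": ".",
    "1": ".----", "2": "..---", "3": "...--", "4": "....-", "5": ".....",
}

def coordinate_to_morse(coord):
    """Encode a battleships coordinate, e.g. 'B5' -> '-... .....'."""
    return " ".join(MORSE[ch] for ch in coord.upper())

print(coordinate_to_morse("B5"))  # -... .....
```

On the hardware, each dot or dash would become a short or long LED blink, with gaps between characters.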

This is the Arduino code I used to create this:

//declare a variable to hold the pin number
int ledPin = 9;

int inputPin = A0;

int sensorValue = 1000;

void setup() {
// put your setup code here, to run once:

// initialize serial communication at 9600 bits per second:
Serial.begin(9600);
pinMode(ledPin, OUTPUT);
pinMode(inputPin, INPUT);

}

void loop() {
// put your main code here, to run repeatedly:

sensorValue = analogRead(inputPin);

//sensorValue = 300;

Serial.println(sensorValue);
delay(100);

if (sensorValue > 1020) {
//blink f5
digitalWrite(ledPin, HIGH);
delay(200);

//blink f5
digitalWrite(ledPin, LOW);
delay(200);

// Further blinks for the rest of the Morse pattern were commented out
// here while testing; each one repeated the digitalWrite/delay pair above.
}

else {
digitalWrite(ledPin, LOW);
}
}

I also made an alternative way of making the light blink without the conductive ink, using a potentiometer on the board.

This item was fun to make as I was able to explore my creativity and ways of how I could incorporate technology into a board game.

DMS8013 3D Processing

This week's seminar looked at how to create a 3D shape within Processing, which could then be printed on a 3D printer.

To make this I imported a ready-made 3D shape into Processing and used this code to animate the shape moving around.

/**
Knot in 3D
Demonstrating the Shapes3D library.

created by Peter Lager

Shapes 3D
OBJ export
*/

import shapes3d.utils.*;
import shapes3d.animation.*;
import shapes3d.*;
import nervoussystem.obj.*;

boolean record;

private BezTube btube;
private Toroid toroid1;
private Rot rot1;
private float[] ang1;

// ###################################################
// TOROID SPEED CONTROL VARIABLES
private float speed = 0.003f;
private float t = 0, dt = speed;

// The greater segs the smoother the curve
int segs = 100, slices = 8;

float angleX, angleY, angleZ; // rotation of 3D shape

void setup() {
size(800, 800, P3D);
btube = makeBezTube();

// ################################################
// MAKE TOROID CODE
toroid1 = new Toroid(this, 6, 20);
toroid1.moveTo(btube.getPoint(0));
toroid1.fill(color(255, 96, 96));
toroid1.stroke(color(255, 255, 0));
toroid1.strokeWeight(1.2f);
toroid1.setRadius(5, 3, 20.0f);
toroid1.drawMode(Shape3D.SOLID | Shape3D.WIRE);
// ################################################
}

void draw() {
background(0);
if (record) {
beginRecord("nervoussystem.obj.OBJExport", "filename.obj");
}

angleX += radians(0.913f);
angleY += radians(0.799f);
angleZ += radians(1.213f);

camera(0, 0, 250, 0, 0, 0, 0, 1, 0);
rotateX(angleX);
rotateY(angleY);
rotateZ(angleZ);
// ################################################
// CODE TO MOVE TOROID ALONG TUBE
// Calculate parametric position along tube
t += dt;
if (t >= 1.0f) {
t = 0.99999f;
dt = -speed;
}
else if ( t<0.0f) {
t = 0.0f;
dt = speed;
}
// Get position and rotation for toroid
rot1 = new Rot(new PVector(0, 1, 0), btube.getTangent(t));
ang1 = rot1.getAngles(RotOrder.XYZ);
// Move toroid and orient to tube tangent
toroid1.moveTo(btube.getPoint(t));
toroid1.rotateTo(ang1);
toroid1.draw();
// END OF TOROID MOVE CODE
// ################################################
btube.draw();

if (record) {
endRecord();
record = false;
}
}

public BezTube makeBezTube() {
PVector[] p = new PVector[] {
new PVector(-143.69f, 35.0f, -40.81f),
new PVector(-70.35f, 4.39f, -40.44f),
new PVector(-5.35f, -18.33f, -9.69f),
new PVector(26.15f, -77.13f, -3.69f),
new PVector(70.15f, -111.38f, 40.81f),
new PVector(107.15f, -97.88f, 39.81f),
new PVector(115.4f, -72.88f, 27.81f),
new PVector(124.15f, -54.33f, 6.81f),
new PVector(104.15f, 39.67f, -9.69f),
new PVector(66.4f, 77.62f, -61.19f),
new PVector(40.4f, 119.62f, -67.69f),
new PVector(18.15f, 125.52f, -29.69f),
new PVector(-1.6f, 118.62f, -14.94f),
new PVector(-23.6f, 110.52f, -0.19f),
new PVector(-54.69f, 62.25f, 23.19f),
new PVector(-131.48f, -25.83f, -11.29f),
new PVector(-133.69f, -67.61f, 16.81f),
new PVector(-117.69f, -108.61f, 42.81f),
new PVector(-85.69f, -125.61f, -23.81f),
new PVector(-36.69f, -119.61f, -71.19f),
new PVector(13.15f, -78.13f, -63.69f),
new PVector(17.03f, -10.14f, -51.69f),
new PVector(26.15f, -20.88f, -3.69f),
new PVector(37.65f, -3.33f, 11.31f),
new PVector(96.15f, 15.87f, 16.31f),
new PVector(143.69f, 35.39f, 40.81f)
};

BezTube bt = new BezTube(this, new P_Bezier3D(p, p.length), 10.0f, segs, slices);

bt.fill(color(32, 32, 180));
bt.stroke(color(64, 180, 180));
bt.strokeWeight(1.50f);
bt.drawMode(Shape3D.SOLID | Shape3D.WIRE);

bt.fill(color(150, 255, 255), BezTube.BOTH_CAP);
bt.drawMode(Shape3D.SOLID, BezTube.BOTH_CAP);

return bt;
}

void keyPressed() {
if (key == 'r') {
record = true;
}
}

When running the program, this shape appears and rotates to show all of its sides.

3D Shape Generator using Processing

HSS8121 Installation Evaluation

The installation is based on climate change: how recent changes in the weather have affected areas in recent years and what this could mean for the future. It focuses on Newcastle upon Tyne, particularly Ouseburn, where the installation was held for a one-night cultural event called The Late Shows. The aim was to present an installation involving audience interaction, relating to the culture of the area we were presenting in, to artists, local people and visitors of the Star and Shadow.

The model itself was built on a 3D printer, using template examples of the buildings surrounding the Star and Shadow. For instance, there was the Star and Shadow building itself, high-rise flats, houses and business buildings. The project was made over a period of time and presented at a one-night event at the Star and Shadow.

This idea came about after a seminar class where, in groups, we put together a collage of pictures of Ouseburn to create a new picture. This made me think about showing how nature has been affected by buildings appearing all over the city and how this is changing the landscape. The idea soon evolved into looking at how climate change could impact this, and at the idea of nature fighting back.

The installation works by having the model in the middle of a table so that members of the public can walk around it. The audience then moves on to two videos projected on the wall, one of which had a screen with a question and buttons to choose from. When audience members pressed a button, the video performed an action, such as showing the model on screen being blown up. Audience members were also given the opportunity to leave their thoughts on climate change on a whiteboard. The work fits the cultural side well, as it shows a possible future of what could happen to the world, especially areas such as Ouseburn, as these extreme weather events continue.

If the installation were to go wrong, such as the code not working to let audience members interact with the videos, then we would just show the videos as they were and let the audience decide which videos to watch, keeping some (if very limited) form of interaction. Another alternative would be to book out spare equipment to help us, such as a spare projector, Arduino sets and a projector screen.

The installation meets the criteria of showing cultural heritage by focusing on the area of Ouseburn, and offers multiple ways for the audience to interact with the work: moving around freely, pressing buttons, viewing a display of Ouseburn, and writing down their thoughts on climate change. The installation is convincing in the sense that it makes the audience think about the implications climate change could have for people and locations in the future.

This installation could be applied to a number of exhibitions and areas, as climate change could affect any area.

To improve on this in future I would like to add more elements to the work, such as making the videos more interactive, as not all of them were in the end. I would also have materials such as moss and ice cubes on the table for the audience to pick up and feel, to give them a sense of what could happen.

HSS8121 What is Public Making

This week was another student-led seminar, this time discussing the term Public Making and what it means and represents.

Public Making can be described as a creative process of building artists' work and installations within a particular place, mainly museums or cultural heritage sites, involving audience interaction. Work made this way can take many shapes and forms, both public and digital art, and can be either a temporary or permanent structure. It is seen at public events and festivals such as The Late Shows in Newcastle upon Tyne.

One example of this is the work of Tim Shaw and John Bowers from 2014, when they attended a residency at Ipswich Museum and worked as part of a group to create multiple small installations.

The small pieces of work come together to create one big installation and offer many interactions with the public, with examples such as the Sonic Microscope, Weather Station and the Rock Harmonium. Each demonstrates a different way the audience can interact with the work: touching, listening, seeing and movement.

Another example is Do Ho Suh's 'Bridging Home'. The work takes the form of a Korean house situated on a pedestrian bridge in London. The installation sets out to describe the tension between private and public spaces by merging the two together, and also illustrates what defines a home.

This installation shows a different view of audience interaction, as people interact with it simply by looking at it.

I felt I delivered an informative presentation, including information that was not on the screen, and was able to include examples of work.

HSS8121 Mapping 2

After our first mapping task I was required to go away and try some mapping on my own.

Over the course of Easter I documented where I went through photography. The two main areas where I tried this were Ouseburn and Tynemouth. I chose these areas because I had not properly visited Ouseburn before and wanted to explore it more, whereas with Tynemouth I wanted to explore on foot the areas I usually travel through by car.

Route walking through Shieldfield and Ouseburn

I decided to start my journey walking from Newcastle University past Northumbria University towards the Star and Shadow at Ouseburn, photographing buildings along the way.

Star and Shadow, Ouseburn

I then proceeded down towards the Biscuit factory and Cobalt Studios.

From here I walked further into Ouseburn walking towards The Cluny, around the farm and down towards the river.

For Tynemouth, I travelled there by car, then recorded my journey through photographs while walking up along the pier and towards Cullercoats.

Route driving towards Tynemouth, then walking from Tynemouth towards Cullercoats along the pier
Tynemouth Beach
Tynemouth Beach

I enjoyed taking photographs along my journey, especially when it came to finding unusual-looking buildings and objects. I find photographing these discoveries fun, and it makes the journey more interesting.