ITP Travels (Project Proposal)

ITP is an inherently diverse place. We come from all over the world and are drawn to the unknown, to adventure, and to exploration. Because of our geographic and personal diversity, I am interested in building a travel directory for the student body. For example, say I want to visit parts of South America. I know offhand that there are several current students, let alone alums, from that region, but I don't know whether their interests match up with mine. By creating a database that captures our places of origin and the interests behind our collective travels, we can filter recommendations tailored to our interests and might discover mini travel guides, or experts, waiting to contribute to our next adventure.

Furthermore, the network could be visualized as a map that geo-tags the places we're from and the places we've visited and love, with links to recommended attractions.

photo-3

itpTravels = {
  'Name' : 'Colin Narver',
  'From' : 'Seattle',
  'Primary language spoken' : 'English',
  'Can speak some...' : ['French', 'Italian'],
  'Age' : '27',
  'Favorite_countries_visited' : ['Vietnam', 'Italy', 'Taiwan', 'France', 'Scotland', 'Switzerland'],
  'Favorite_cities_visited' : ['Hanoi', 'Ho Chi Minh', 'Taipei', 'Nice', 'Edinburgh', 'Interlaken', 'Cinque Terre', 'Venice'],
  'I_know_most_about' : ['Hiking', 'Beer', 'Hostels', 'Beaches', 'Cheap Sightseeing'],
  'Countries_I_want_to_visit' : ['South Africa', 'Mongolia', 'Russia', 'Morocco', 'Argentina', 'Chile'],
  'Cities_I_want_to_visit' : ['Johannesburg', 'St. Petersburg', 'Marrakesh', 'Santiago']
}
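To give a feel for the kind of filtering the directory could support, here is a minimal sketch in Python. The profile fields mirror the example record above; the helper name `match_experts` and the second profile are hypothetical.

```python
# Hypothetical sketch: filter a list of student profiles for travel "experts"
# whose favorite destinations overlap with the places I want to go.

def match_experts(profiles, wishlist, interest=None):
    """Return (name, overlapping countries) for profiles whose favorite
    countries overlap the wishlist, optionally requiring a shared interest."""
    matches = []
    for p in profiles:
        overlap = set(p['Favorite_countries_visited']) & set(wishlist)
        if overlap and (interest is None or interest in p['I_know_most_about']):
            matches.append((p['Name'], sorted(overlap)))
    return matches

profiles = [
    {'Name': 'Colin Narver',
     'Favorite_countries_visited': ['Vietnam', 'Italy', 'Taiwan'],
     'I_know_most_about': ['Hiking', 'Hostels']},
    {'Name': 'Example Student',  # hypothetical classmate
     'Favorite_countries_visited': ['Argentina', 'Chile'],
     'I_know_most_about': ['Food', 'Hiking']},
]

# Who can advise on a South America trip, ideally about hiking?
print(match_experts(profiles, ['Argentina', 'Peru', 'Chile'], interest='Hiking'))
# → [('Example Student', ['Argentina', 'Chile'])]
```

A real version would live in an actual database, but the query shape would be the same: intersect destinations, then filter by shared interests.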

Streams of Consciousness

NOC Final Project from Colin Narver on Vimeo.

//Colin Narver
//Streams of Consciousness
//Nature of Code Final Project
//Special thanks to Dan Shiffman and Lia Martinez
//for all their help and wisdom along the way

import jsyphon.*;
import SimpleOpenNI.*;
import codeanticode.syphon.*;

SimpleOpenNI context;
boolean autoCalib = true;

PGraphics canvas;
SyphonServer server;
PImage sky;
PImage grass;
PImage ocean;
PImage droplet;
PImage finalImg;
PImage kinect;
PImage background;
int[] depthValues;

// Use this variable to decide whether to draw all the stuff
boolean debug = true;

// Flowfield object
FlowField flowfield;
// An ArrayList of vehicles
ArrayList<Vehicle> vehicles;

void setup() {
  size(640, 480, P3D);
  context = new SimpleOpenNI(this);

  // enable depthMap generation
  if (context.enableDepth() == false) {
    println("Can't open the depthMap, maybe the camera is not connected!");
    exit();
    return;
  }

  canvas = createGraphics(640, 480, P3D);
  server = new SyphonServer(this, "Processing Syphon");
  background(200, 0, 0);

  droplet = loadImage("blue_better_1.png");
  grass = loadImage("grass.png");
  ocean = loadImage("ocean.png");
  sky = loadImage("sky.png");
  finalImg = createImage(320, 240, RGB);

  vehicles = new ArrayList<Vehicle>();
  // Make a whole bunch of vehicles with random maxspeed and maxforce values
  for (int i = 0; i < 400; i++) {
    vehicles.add(new Vehicle(canvas,
      new PVector(random(100, width-100), random(100, height-100)),
      random(2, 5), random(0.1, 0.5)));
  }

  stroke(0, 0, 255);
  strokeWeight(3);
  smooth();

  kinect = createImage(640, 480, RGB);
  flowfield = new FlowField(40); // adjust this to affect resolution

  grass.loadPixels();
  ocean.loadPixels();
  sky.loadPixels();
}

void draw() {
  // restrict the area the Kinect can see to just the sandbox
  kinect.copy(context.depthImage(), 100, 120, 300, 220, 0, 0, 640, 480);
  kinect.loadPixels();  // load pixels so we can read them below

  println("MOUSE X: " + mouseX + " MOUSEY: " + mouseY);

  // Display the flowfield in "debug" mode
  //if (debug) flowfield.display();

  // check every other pixel to save memory
  for (int px = 0; px < 640; px += 2) {
    for (int py = 0; py < 480; py += 2) {
      int depthIndex = px + py * 640;
      int imgIndex = (px/2) + (py/2) * 320;
      float b = brightness(kinect.pixels[depthIndex]);
      if (b > 0 && b < 135) {
        finalImg.pixels[imgIndex] = ocean.pixels[depthIndex];
      }
      else if (b > 135 && b < 255) {
        finalImg.pixels[imgIndex] = grass.pixels[depthIndex];
      }
    }
  }
  finalImg.updatePixels();

  // Tell all the vehicles to follow the flow field
  for (Vehicle v : vehicles) {
    v.follow(flowfield);
    v.run();
  }

  // update the cam
  context.update();
  flowfield.init();

  // draw the depth-derived image, then the vehicles, into the Syphon canvas
  canvas.beginDraw();
  canvas.background(0);
  canvas.image(finalImg, 0, 0, width, height);
  for (Vehicle v : vehicles) {
    v.display();
  }
  canvas.endDraw();

  // syphon
  server.sendImage(canvas);
  image(canvas, 0, 0);
}

void keyPressed() {
  if (key == ' ') {
    debug = !debug;
  }
}

// Make a new flowfield and add a vehicle at the mouse
void mouseDragged() {
  flowfield.init();
  vehicles.add(new Vehicle(canvas, new PVector(mouseX, mouseY), random(2, 5), random(0.1, 0.5)));
}

// The Nature of Code
// Daniel Shiffman
// http://natureofcode.com
// Flow Field Following

// The arrow below each triangle in the flow field indicates the vehicle's desired velocity.
class Vehicle {
  PImage droplet = loadImage("blue_better_1.png");

  // The usual stuff
  PVector location;
  PVector velocity;
  PVector acceleration;
  float r;
  float maxforce;    // Maximum steering force
  float maxspeed;    // Maximum speed
  PGraphics can;

  Vehicle(PGraphics canvas, PVector l, float ms, float mf) {
    location = l.get();
    r = 3.0;
    maxspeed = ms;
    maxforce = mf;
    acceleration = new PVector(0, 0);
    velocity = new PVector(0, 0);
    can = canvas;
  }

  public void run() {
    update();
    borders();
  }

  // Implementing Reynolds' flow field following algorithm
  // http://www.red3d.com/cwr/steer/FlowFollow.html
  void follow(FlowField flow) {
    // What is the vector at that spot in the flow field?
    PVector desired = flow.lookup(location);
    // Scale it up by maxspeed
    desired.mult(maxspeed);
    // Steering is desired minus velocity
    PVector steer = PVector.sub(desired, velocity);
    steer.limit(maxforce);  // Limit to maximum steering force
    applyForce(steer);
  }

  void applyForce(PVector force) {
    // We could add mass here if we want: A = F / M
    acceleration.add(force);
  }

  // Method to update location
  void update() {
    // Update velocity
    velocity.add(acceleration);
    // Limit speed
    velocity.limit(maxspeed);
    location.add(velocity);
    // Reset acceleration to 0 each cycle
    acceleration.mult(0);
  }

  void display() {
    can.image(droplet, location.x, location.y, 40, 40);
  }

  // Constrain vehicles to the area inside the sandbox (avoiding noise at the edges)
  void borders() {
    location.x = constrain(location.x, 120, width-120);
    location.y = constrain(location.y, 120, height-120);
  }
}

// Flow Field Following
class FlowField {
  // A flow field is a two-dimensional array of PVectors
  PVector[][] field;
  int cols, rows;   // Columns and rows
  int resolution;   // How large each "cell" of the flow field is

  FlowField(int r) {
    resolution = r;
    // Determine the number of columns and rows from the depth image's width and height
    cols = context.depthWidth()/resolution;
    rows = context.depthHeight()/resolution;
    println(cols + " " + rows);
    field = new PVector[cols][rows];
    for (int x = 0; x < cols; x++) {
      for (int y = 0; y < rows; y++) {
        field[x][y] = new PVector(0, 0);
      }
    }
    init();
  }

  // This is the section to understand best (Lia Martinez helped with it)
  void init() {
    kinect.loadPixels();
    // Loop by columns and rows to preserve the resolution
    for (int x = 0; x < cols; x++) {
      for (int y = 0; y < rows; y++) {
        // Make an array for just the surrounding pixels
        float[] areaPixels = new float[9];
        // Don't go below 0 or above the maximum, or it will crash
        if (x > 0 && x < cols-1 && y > 0 && y < rows-1) {
          // Loop through the area pixels
          for (int i = -1; i <= 1; i++) {
            for (int j = -1; j <= 1; j++) {
              // The index of the pixel we are looking at
              int readPos = ((y*resolution + j) * kinect.width + (x*resolution + i));
              // From the pixel's color, take just the brightness (gray) value
              color c = kinect.pixels[readPos];
              float b = brightness(c);
              // writePos is the index in areaPixels where we store the brightness
              int writePos = (j+1) * 3 + (i+1);
              areaPixels[writePos] = b;
            }
          }
          // First compare the pixels in the left/right columns, then the
          // upper/lower rows. The difference becomes our vector!
          float dX = (areaPixels[0] + areaPixels[3] + areaPixels[6])/3 - (areaPixels[2] + areaPixels[5] + areaPixels[8])/3;
          float dY = (areaPixels[0] + areaPixels[1] + areaPixels[2])/3 - (areaPixels[6] + areaPixels[7] + areaPixels[8])/3;
          // Ease toward the new vector based on the difference
          field[x][y].x = lerp(field[x][y].x, dX, 0.01);
          field[x][y].y = lerp(field[x][y].y, dY, 0.01);
          // Normalize to keep just the direction
          field[x][y].normalize();
        }
        else {
          // Make a do-nothing vector at the edges of the sketch,
          // since we don't want any pixel computing done there
          field[x][y] = new PVector(0, 0);
        }
      }
    }
  }

  // Draw every vector
  void display() {
    for (int x = 0; x < cols; x++) {
      for (int y = 0; y < rows; y++) {
        drawVector(field[x][y], x*resolution, y*resolution, resolution-2);
      }
    }
  }

  // Renders a vector object 'v' as an arrow at location 'x,y'
  void drawVector(PVector v, float x, float y, float scayl) {
    pushMatrix();
    float arrowsize = 4;
    // Translate to the location to render the vector
    translate(x, y);
    stroke(0, 100);
    // Rotate using the vector's heading (note that pointing up is a heading of 0)
    rotate(v.heading2D());
    // Calculate the length of the vector and scale it as necessary
    float len = v.mag()*scayl;
    // Draw three lines to make an arrow (drawn pointing up, since we've rotated to the proper direction)
    line(0, 0, len, 0);
    line(len, 0, len-arrowsize, +arrowsize/2);
    line(len, 0, len-arrowsize, -arrowsize/2);
    popMatrix();
  }

  PVector lookup(PVector lookup) {
    int column = int(constrain(lookup.x/resolution, 0, cols-1));
    int row = int(constrain(lookup.y/resolution, 0, rows-1));
    return field[column][row].get();
  }
}
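The heart of `FlowField.init()` above is the 3×3 neighborhood comparison: the cell's vector is (left-column average minus right-column average, top-row average minus bottom-row average), then normalized. A standalone sketch of that arithmetic, in Python for easy experimentation (the helper name `cell_vector` is mine, not from the sketch):

```python
# Sketch of the flow-field step above: given 9 brightness values in a 3x3
# neighborhood (row-major), the vector is the difference between the
# left/right column averages and the top/bottom row averages, normalized.
import math

def cell_vector(area):
    dx = (area[0] + area[3] + area[6]) / 3 - (area[2] + area[5] + area[8]) / 3
    dy = (area[0] + area[1] + area[2]) / 3 - (area[6] + area[7] + area[8]) / 3
    mag = math.hypot(dx, dy)
    if mag == 0:
        return (0.0, 0.0)          # flat neighborhood: do-nothing vector
    return (dx / mag, dy / mag)    # normalize: keep only the direction

# A neighborhood that is bright on the left and dark on the right
# produces a unit vector pointing in +x.
print(cell_vector([255, 128, 0,
                   255, 128, 0,
                   255, 128, 0]))  # → (1.0, 0.0)
```

Since Kinect depth maps render nearer sand as brighter pixels, this difference acts like a depth gradient: the vectors point from high sand toward dug-out channels.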

Nature of Code Final Project Proposal

For my Nature of Code final, I want to make pliable flow fields that a user can manipulate with their hands. My inspiration for this project comes from memories of building castles, moats and other structures out of sand at the beach. The ability to impose a direction on water by shaping its surrounding path is something I'd like to replicate. The tools I plan on using are Processing, the Kinect, a projector and twenty-five pounds of white play sand. The idea is that the user will be able to shape the desired velocity of a continuous flow of water by digging into a box of sand, moving the sand around and playfully watching how the stream responds to their input. Ideally, the Kinect will calculate the depth of the sand, and wherever the user digs, the vector stream will elongate, broaden or change direction based on their choices. In essence, the vehicles comprising the stream will perpetually impose a consistent flow from one edge of the sand to the other.

Dan's flow field following examples are the foundation for this piece. While building my code around this, I casually browsed a few YouTube videos to see what kind of work had been done relevant to my idea. Turns out, I am not the first person to have connected these dots. Nevertheless, I intend to emphasize my own interpretation of these elements in my final project.

Among my various logistical concerns, one particular technical/conceptual challenge is figuring out how to transform the vehicles into something that actually resembles a flowing stream of water. Any insights on how to make this work would be most helpful!

Nature of Code Final Project (Initial Ideas/Sketches)

Nature of Code, Flow Field-Final Project Initial Ideas from Colin Narver on Vimeo.

I am using Dan's flow field example (6_4) to test the progression of triangles moving around a flow field with two obstacles, determined by brightness, dictating their path. I am interested in potentially combining this with projection mapping and the Kinect on an external texture or surface, where the user could dictate the flow of vectors by the way they physically move, shape or build up the environment with their hands.

// Colin Narver (Flow Field Experiments)
// Flow Field Following
// Via Reynolds: http://www.red3d.com/cwr/steer/FlowFollow.html
// Maybe do projection mapping onto a box of sand?

// Use this variable to decide whether to draw all the stuff
boolean debug = true;

// Flowfield object
FlowField flowfield;
// An ArrayList of vehicles
ArrayList<Vehicle> vehicles;
PImage background;

void setup() {
  size(600, 450, P2D);
  background = loadImage("Nature_of_code_shapes.jpg");
  // Make a new flow field
  flowfield = new FlowField(7); // adjust this to affect resolution
  vehicles = new ArrayList<Vehicle>();
}

void draw() {
  background(255);
  imageMode(CORNER);
  image(background, 0, 0);

  // Display the flowfield in "debug" mode
  if (debug) flowfield.display();
  // flowfield.update(); for live data later (Kinect)

  // Tell all the vehicles to follow the flow field
  for (Vehicle v : vehicles) {
    v.follow(flowfield);
    v.run();
  }
}

void keyPressed() {
  if (key == ' ') {
    debug = !debug;
  }
}

// Add a new vehicle at the mouse
void mousePressed() {
  vehicles.add(new Vehicle(new PVector(mouseX, mouseY), random(2, 5), random(0.1, 0.5)));
}

void mouseDragged() {
  vehicles.add(new Vehicle(new PVector(mouseX, mouseY), random(2, 5), random(0.1, 0.5)));
}

// The Nature of Code
// Daniel Shiffman
// http://natureofcode.com
// Flow Field Following

// The arrow below each triangle in the flow field indicates the vehicle's desired velocity.
class Vehicle {

  // The usual stuff
  PVector location;
  PVector velocity;
  PVector acceleration;
  float r;
  float maxforce;    // Maximum steering force
  float maxspeed;    // Maximum speed

  Vehicle(PVector l, float ms, float mf) {
    location = l.get();
    r = 3.0;
    maxspeed = ms;
    maxforce = mf;
    acceleration = new PVector(0, 0);
    velocity = new PVector(0, 0);
  }

  public void run() {
    update();
    borders();
    display();
  }

  // Implementing Reynolds' flow field following algorithm
  // http://www.red3d.com/cwr/steer/FlowFollow.html
  void follow(FlowField flow) {
    // What is the vector at that spot in the flow field?
    PVector desired = flow.lookup(location);
    // Scale it up by maxspeed
    desired.mult(maxspeed);
    // Steering is desired minus velocity
    PVector steer = PVector.sub(desired, velocity);
    steer.limit(maxforce);  // Limit to maximum steering force
    applyForce(steer);
  }

  void applyForce(PVector force) {
    // We could add mass here if we want: A = F / M
    acceleration.add(force);
  }

  // Method to update location
  void update() {
    // Update velocity
    velocity.add(acceleration);
    // Limit speed
    velocity.limit(maxspeed);
    location.add(velocity);
    // Reset acceleration to 0 each cycle
    acceleration.mult(0);
  }

  void display() {
    // Draw a triangle rotated in the direction of velocity
    float theta = velocity.heading2D() + radians(90);
    fill(175);
    stroke(0);
    pushMatrix();
    translate(location.x, location.y);
    rotate(theta);
    beginShape(TRIANGLES);
    vertex(0, -r*2);
    vertex(-r, r*2);
    vertex(r, r*2);
    endShape();
    popMatrix();
  }

  // Wraparound
  void borders() {
    if (location.x < -r) location.x = width+r;
    if (location.y < -r) location.y = height+r;
    if (location.x > width+r) location.x = -r;
    if (location.y > height+r) location.y = -r;
  }
}

// Flow Field Following
class FlowField {
  // A flow field is a two-dimensional array of PVectors
  PVector[][] field;
  int cols, rows;   // Columns and rows
  int resolution;   // How large each "cell" of the flow field is

  FlowField(int r) {
    resolution = r;
    // Determine the number of columns and rows based on the sketch's width and height
    cols = width/resolution;
    rows = height/resolution;
    println(cols + " " + rows);
    field = new PVector[cols][rows];
    init();
  }

  // This is the section to understand best
  void init() {
    background.loadPixels();  // load pixels before reading them
    for (int i = 0; i < cols; i++) {
      for (int j = 0; j < rows; j++) {
        int x = i*resolution;
        int y = j*resolution;
        int index = x + y * background.width;
        color c = background.pixels[index];
        float b = brightness(c);
        if (b == 255) {
          field[i][j] = new PVector(2, 1);
        }
        else {
          // Map brightness to an angle, then do a polar-to-cartesian
          // transformation to get the x and y components of the vector
          float theta = map(b, 0, 255, 0, TWO_PI);
          field[i][j] = new PVector(cos(theta), sin(theta));
        }
      }
    }
  }

  // Draw every vector
  void display() {
    for (int i = 0; i < cols; i++) {
      for (int j = 0; j < rows; j++) {
        drawVector(field[i][j], i*resolution, j*resolution, resolution-2);
      }
    }
  }

  // Renders a vector object 'v' as an arrow at location 'x,y'
  void drawVector(PVector v, float x, float y, float scayl) {
    pushMatrix();
    float arrowsize = 4;
    // Translate to the location to render the vector
    translate(x, y);
    stroke(0, 100);
    // Rotate using the vector's heading (note that pointing up is a heading of 0)
    rotate(v.heading2D());
    // Calculate the length of the vector and scale it as necessary
    float len = v.mag()*scayl;
    // Draw three lines to make an arrow (drawn pointing up, since we've rotated to the proper direction)
    line(0, 0, len, 0);
    line(len, 0, len-arrowsize, +arrowsize/2);
    line(len, 0, len-arrowsize, -arrowsize/2);
    popMatrix();
  }

  PVector lookup(PVector lookup) {
    int column = int(constrain(lookup.x/resolution, 0, cols-1));
    int row = int(constrain(lookup.y/resolution, 0, rows-1));
    return field[column][row].get();
  }
}
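The `follow()` method in both sketches uses Reynolds' steering rule: scale the field vector to `maxspeed`, subtract the current velocity, and clip the result to `maxforce`. The same arithmetic, as a standalone Python sketch (the function name `steer` is mine):

```python
# Sketch of Reynolds' steering rule used in follow():
# desired = field direction scaled to maxspeed;
# steer = desired - velocity, limited to maxforce.
import math

def steer(desired, velocity, maxspeed, maxforce):
    # Scale the flow-field direction up to maxspeed
    mag = math.hypot(*desired)
    d = (desired[0] / mag * maxspeed, desired[1] / mag * maxspeed)
    # Steering is desired minus current velocity
    s = (d[0] - velocity[0], d[1] - velocity[1])
    # Limit to the maximum steering force
    smag = math.hypot(*s)
    if smag > maxforce:
        s = (s[0] / smag * maxforce, s[1] / smag * maxforce)
    return s

# A vehicle at rest in a field pointing along +x gets a small push along +x.
print(steer((1.0, 0.0), (0.0, 0.0), maxspeed=4.0, maxforce=0.5))
# → (0.5, 0.0)
```

Because the correction is capped at `maxforce`, vehicles turn gradually toward the field rather than snapping to it, which is what gives the stream its fluid look.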

Voodoo Bear, ITP Winter Show 2012 Documentation

Voodoo Bear is a project that came out of the Physical Computing class at ITP. Our intent was to create an intuitive, engaging and unique experience that blends the traditional interaction one expects from a cuddly teddy bear with the playful, semi-nefarious dealings of a voodoo doll. The experience begins when the user enters their Twitter handle on an iPad. They are then encouraged to do whatever they please with the bear.

We built switches into the hands, ears and feet, along with flex sensors in the front and back of the bear, all linked to a text-to-speech module through an Arduino. When the user hugs the bear (or squeezes his ears, feet or paws), he gives a positive (albeit snarky) spoken response. Conversely, we also stuffed the bear with conductive steel wool, connected it to the Arduino and paired the steel wool with a long metal pin. If the user chose to, they could stab the bear, and he would voice his displeasure toward the participant in real time. The bear's spoken feedback was often humorous. Needless to say, we were happy to witness and share lots of laughter with the nearly 200 visitors who interacted with Voodoo Bear during the Winter Show.

At the end of the interaction, Voodoo Bear would tweet something to the user based on an aggregate of their experience. Every stab or hug was tracked in a running point tally that was reflected in the tweet. If the user finished with an overall positive score, the bear would offer some sort of compliment. On the other hand, if the user just stabbed the bear and finished with a negative score, a nasty note would be sent to their Twitter handle. The anticipation of waiting to see what score and tweet the bear would send was a highlight for many.
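The scoring logic described above can be sketched in a few lines of Python. The point values and tweet texts here are made up for illustration; only the idea (hugs add, stabs subtract, the sign of the total picks the tweet's tone) comes from the project.

```python
# Hedged sketch of Voodoo Bear's scoring: each hug adds a point, each stab
# subtracts one, and the sign of the total picks the tweet's tone.
# The messages are hypothetical placeholders, not the bear's actual lines.

def pick_tweet(events):
    score = sum(+1 if e == 'hug' else -1 for e in events)
    if score > 0:
        return score, "Thanks for the love, friend!"   # compliment
    elif score < 0:
        return score, "I won't forget this..."         # nasty note
    return score, "We're even. For now."

print(pick_tweet(['hug', 'stab', 'stab']))
# → (-1, "I won't forget this...")
```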

8303600193_16ac508463_b
8304638342_595dbbdb0d_b
8303588363_0f5737edf9_b
8303588769_434b6ec325_b
8304645484_69f6c9ea68_b
8303596859_32ceb0369f_b
8304652320_746dd7aa2a_b
8304639914_3e50af4275_b
8303605689_73fc3b116f_b
8303604115_96aeab8130_b

All photographs courtesy of Sergio Majluf

Twitter_VOODOO_page

Voodoo Bear Creative Team: Colin Narver, Myriam Melki, Sergio Majluf and Vanessa Joho

The Collective DJ, ITP Winter Show 2012 Documentation

Description

The Collective DJ is a project that allows people to create music through touch and collaboration.

Four different sets of two handpads each are mounted on a wall, spaced apart from each other. When two users each put one hand on a handpad and use their other hand to touch one another, they can trigger an audio sample to turn on or off. Since there are four sets of handpads, up to four samples can be triggered at once, allowing multiple groups of people to DJ a track together.

How it works

When two people connected to the pads touch each other, a small amount of electricity passes through them and completes a circuit. The circuit completion is detected by a microcontroller, which uses keyboard emulation to send one of four keystrokes to a computer running Ableton Live. In Ableton, the four keys are assigned to different samples of an audio track, and those samples are toggled on or off with each keypress.
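The detection step above boils down to edge detection: a keystroke should fire only on the transition from "not touching" to "touching" for each pad pair, so a held touch doesn't retrigger the sample. A minimal sketch of that logic in Python; the pin-to-key mapping is illustrative, not the actual firmware.

```python
# Sketch of the trigger logic: fire a key only when a handpad pair's circuit
# transitions from open to complete (a rising edge), one key per pair.
# The key assignments below are hypothetical.

KEYS = {0: 'a', 1: 's', 2: 'd', 3: 'f'}   # one keystroke per handpad pair

def detect_presses(prev_state, curr_state):
    """prev_state/curr_state: lists of 4 booleans (circuit complete?).
    Returns the keys to send for pads that just became touched."""
    return [KEYS[i] for i in range(4) if curr_state[i] and not prev_state[i]]

# Pad 1 just completed its circuit; pad 0 was already held, so only 's' fires.
print(detect_presses([True, False, False, False],
                     [True, True, False, False]))
# → ['s']
```

On a real microcontroller this loop would also debounce the readings before comparing states, since skin contact is electrically noisy.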


Installation at the ITP Winter Show, December 2012

The Collective DJ (ITP Winter Show 2012, Rough Cut) from Colin Narver on Vimeo.

Applications Presentation, October 23rd, 2012 (Performance begins at 2:09)

ITP DJ Orchestra (Introductory video and documentation of performance ) from Colin Narver on Vimeo.

Collective DJ Creative Team: Andrew Cerrito, Mary Fe, Colin Narver, and Azure Qian at ITP for the 2012 Winter Show. For more information, contact us here.

Nature of Code Midterm Documentation

Nature of Code Mid Term Documentation from Colin Narver on Vimeo.

For my midterm, I was inspired by Andy Warhol's "Floating Silver Pillows" from the "Regarding Warhol: Sixty Artists, Fifty Years" show at the Met. Using the toxiclibs library, I created an ArrayList of silver, balloon-like shapes that I made in Illustrator, as well as a hand object to hit them. I wanted to recreate the joy children take in knocking balloons around a small room.

I then added an attraction behavior to the elements and a system for checking whether a balloon has been hit, marking it with a number. This is based on measuring the distance between my hand object and the balloons. If the marked number exceeds my predetermined threshold of hits, a negative force is imposed on the balloon and it appears to lose helium, or its ability to float.

I was inspired and enabled by the Nature of Code examples on attract/repel as well as the mechanics of the simple spring (both utilizing the toxiclibs library).

 

//balloon SVG files with a bit of rotation
//checking whether another particle is hit is built into the main program

import toxi.geom.*;
import toxi.physics2d.*;
import toxi.physics2d.behaviors.*;

ArrayList<Particle> particles;

VerletPhysics2D physics;

Particle p1;
Particle p2;

PImage balloon;
PImage background;
PImage hand;

void setup () {
  size (850, 700, P2D);

  balloon = loadImage("silver_pillows.png");
  background = loadImage("warhol_cow.jpeg");
  hand =loadImage("hand.png");
  smooth();

  physics = new VerletPhysics2D ();
  physics.setDrag (0.01);

  particles = new ArrayList<Particle>();
  for (int i = 0; i < 80; i++) {
    particles.add(new Particle(balloon, new Vec2D(random(width), random(height)), -32, -1, 120, 80));
  }

  //physics.addBehavior(new GravityBehavior(new Vec2D(0, -0.05)));
  physics.setWorldBounds(new Rect(0, 0, width, height));

  // Make two particles
  p1 = new Particle(hand, new Vec2D(width/2, height/2.5), 32, -1, 20, 30);
  p2 = new Particle(hand, new Vec2D(width, 180), 32, -10, 70, 100);
  // Lock one in place
  p1.lock();

  // Make a spring connecting both Particles
  VerletSpring2D spring=new VerletSpring2D(p1, p2, 100, 0.005);

  // Anything we make, we have to add into the physics world
  physics.addParticle(p1);
  physics.addParticle(p2);
  physics.addSpring(spring);
}

void draw () {
  background (255); 
  tint (255); 
  imageMode (CORNER);  
  image(background, 0, 0);

  physics.update ();

  // Draw a line between the particles
  stroke(0);
  strokeWeight(2);
  line(p1.x, p1.y, p2.x, p2.y);

  // Display the particles
  p1.display();
  p2.display();

  // Move the second one according to the mouse
  if (mousePressed) {
    p2.lock();
    p2.x = mouseX;
    p2.y = mouseY;
    p2.unlock();
  } 

  for (Particle p: particles) {
    p.display();
    p.checkEdges();
    //p.checkIfHit (p1.x, p1.y, 30); 
    p.checkIfHit (p2.x, p2.y, 35);
//    p.oscillate();
    p.markHits(); 

  }
}

//spring or sin wave---give it a slight force from being applied by 
//call it in the par

// class Spore extends the class "VerletParticle2D"
class Particle extends VerletParticle2D {

  // variable for helium level, affect the brightness
  //need to apply a force to each on pushing them down
  //checking the hitter particle against the others 

  float theta = 0.0;
  PImage balloon;
  int howManyTimesHit = 0; 
  int widthS;
  int heightS;
  int angle;

  boolean isHit = false;

  Particle (PImage _balloon, Vec2D loc, float len, float strength, int _widthS, int _heightS) {
    super(loc);    //has to be first in order
    balloon = _balloon;
    widthS= _widthS;
    heightS = _heightS;
    physics.addParticle(this);
    physics.addBehavior(new AttractionBehavior(this, len, strength));  //variables for behavior elements
  }

  void display () {
    stroke (0);
    strokeWeight(2);

    imageMode (CENTER); 
    tint (255); 
    image(balloon, x, y, widthS, heightS);
  }

  void checkEdges() {
    if (x> width) {
      x=0;
    } 
    else if (x< 0) {
      x=width;
    }

    if (y> height) {
      y=0;
    } 
    else if (y<0) {
      y=height;
    }
  }

//incorporate oscillation here:
//  void oscillate() {
//    
// float a = map(cos(theta), -1.0, 1.0, -.05,.001);
//// float z = map(sin(theta), -1.0, 1.0, 0,-1.1);
//     theta += 0.002;
//     Vec2D osc = new Vec2D(a,random(-.01,.01));
//      addForce(osc); 
//    
//  }

  void checkIfHit (float handX, float handY, int handRad) {
    float d =  dist (x, y, handX, handY);
//    println(d + " " + isHit);
    float threshold = 30 + handRad;
    if (!isHit &&  d < threshold ) { 
      howManyTimesHit++;
      isHit = true;
    } 
    else if (d > threshold) {
      isHit = false;
    }
  }

  void markHits () {

    fill(255, 244, 0);
    textSize(30);
    text(howManyTimesHit, x, y); 

    float helium = map(howManyTimesHit,0,20,-0.05,0.05);

      Vec2D down = new Vec2D(0,helium);
      addForce(down);

  }
}

Nature of Code (Particle System with wind and gravity)

This week's homework focused on particle systems. I used my hot air balloons with another set of smaller red balloons. Wind and gravity are applied, and the red balloons are passed in through the new principle of inheritance that we are learning.

ArrayList<ParticleSystem> systems;
PImage img;

void setup() {
  size(600, 800, P2D);
  systems = new ArrayList<ParticleSystem>();
  img = loadImage("clouds.jpeg");
}

void draw() {
  background(255);
  image(img, 0, 0);

  PVector gravity = new PVector (0, 0.05);

  for (ParticleSystem ps: systems) {
    ps.applyForce(gravity);
    ps.run();
    ps.addParticle();
    if (keyPressed) {
      if (key == 'w') {
        PVector wind = new PVector (0.05, 0);
        ps.applyForce(wind);
      }
    }
  }
}

void mousePressed() {
  systems.add(new ParticleSystem(1, new PVector(mouseX, mouseY)));
}

class RedBalloon extends Particle {

  RedBalloon(PShape _s, PVector l) {
    super(_s,l);
  } 

  void display() {
    shape(s, location.x, location.y, 15, 15); //want to just change size of red balloon

  }
}

class Particle {
  PVector location;
  PVector velocity;
  PVector acceleration;
  PVector gravity;
  float lifespan;
  PShape s;

  Particle(PShape _s, PVector l) {
    s =_s;
    acceleration = new PVector(0, 0.05);
    velocity = new PVector(random(-1, 1), random(-2, 0));
    location = l.get();  //making a copy
    lifespan = 155.0;
  }

  void applyForce(PVector force) {
   acceleration.add(force); 
  }

  void run() {
    update();
    display();
  }

  // Method to update location
  void update() {
    velocity.add(acceleration);
    location.add(velocity);
    acceleration.mult(0);  //if acceleration is steady on, velocity will continue to ramp up  and location will fly 
    lifespan -= 2.0;
  }

  // Method to display
  void display() {
    stroke(0, lifespan);
    strokeWeight(2);
    fill(100, lifespan);
    shape(s, location.x, location.y, 50, 50);  //shape(s, location.x, location.y, 125, 125);
  }

  // Is the particle still useful?
  boolean isDead() {
    if (lifespan < 0.0) {
      return true;
    } 
    else {
      return false;
    }
  }
}

class ParticleSystem {

  PShape balloon = loadShape("balloon.svg");
  PShape red_balloon = loadShape("red_balloon.svg");

  ArrayList<Particle> particles;    // An arraylist for all the particles
  PVector origin;        // An origin point for where particles are birthed

  ParticleSystem(int num, PVector v) {
    particles = new ArrayList<Particle>();   // Initialize the arraylist
    origin = v.get();                        // Store the origin point
    for (int i = 0; i < num; i++) {
      Particle p = new Particle (balloon, origin);
      particles.add(p);    // Add "num" amount of particles to the arraylist
    }
  }

  void run() {
    for (int i = particles.size()-1; i >= 0; i--) {
      Particle p = particles.get(i);
      p.run();
      if (p.isDead()) {
        particles.remove(i);
      }
    }
  }

  void addParticle() {
    float r = random(1);
    if (r < 0.7) {
      particles.add(new Particle(balloon, origin));
    }
    else {
      particles.add(new RedBalloon(red_balloon, origin));
    }
  }

  void applyForce(PVector force) { 
    for (Particle p : particles) {
     p.applyForce(force);
    } 
  }

  // A method to test if the particle system still has particles
  boolean dead() {
    return particles.isEmpty();
  }
}
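One detail worth noting in run() above: it walks the ArrayList backwards so that removing a dead particle doesn't shift the indices of entries not yet visited. A minimal stand-alone illustration in plain Java, using hypothetical integer "lifespans" in place of Particle objects:

```java
import java.util.ArrayList;
import java.util.List;

public class ReverseRemoval {
    // Remove every "dead" entry (lifespan < 0) in a single backwards pass
    static void removeDead(List<Integer> lifespans) {
        for (int i = lifespans.size() - 1; i >= 0; i--) {
            if (lifespans.get(i) < 0) {
                // Safe: removal shifts only indices above i, which were already visited
                lifespans.remove(i);
            }
        }
    }

    public static void main(String[] args) {
        List<Integer> lifespans = new ArrayList<>(List.of(155, -2, 80, -1, -5, 10));
        removeDead(lifespans);
        System.out.println(lifespans);  // prints [155, 80, 10]
    }
}
```

Iterating forwards with the same remove-by-index logic would skip the element that slides into a removed slot, which is why the backwards loop is the idiomatic pattern here.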


Nature of Code (HW3)

Hot Air Ballons (Nature of Code) from Colin Narver on Vimeo.

In this homework assignment, I created an array of hot air balloons and brought in wind as an additional force, along with simple harmonic motion (via oscillation of each balloon's width).
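The simple harmonic motion comes from advancing an angle each frame and mapping cos(angle) onto the balloon's drawn width, as the Mover's display() does. A minimal sketch of that mapping in plain Java; since plain Java has no map() helper, it is re-implemented below (the Oscillator class name is mine):

```java
public class Oscillator {
    // Re-implementation of Processing's map(): rescale value from one range to another
    static float map(float value, float inMin, float inMax, float outMin, float outMax) {
        return outMin + (outMax - outMin) * ((value - inMin) / (inMax - inMin));
    }

    public static void main(String[] args) {
        float angle = 0;
        float aVelocity = 0.03f;  // angular velocity, as in the Mover class
        for (int frame = 0; frame < 5; frame++) {
            // cos(angle) swings between -1 and 1; map it to a width between 100 and 190
            float w = map((float) Math.cos(angle), -1, 1, 100, 190);
            System.out.println(w);
            angle += aVelocity;
        }
    }
}
```

Because cos() is periodic, the width breathes smoothly between 100 and 190 pixels, which is what gives each balloon its oscillating look.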

Mover[] movers;
PImage img;

//PVector current;

void setup() {
  size(600, 800, P2D);
  PShape balloon = loadShape("balloon.svg");
  movers = new Mover[10];
  for (int i = 0; i < movers.length; i++) {
    movers[i] = new Mover(balloon, random(width), random(height));
  }

  img = loadImage("clouds.jpeg");
}

void draw() {
  background(255);
  image(img, 0, 0);

  PVector gravity = new PVector(0, 0.015);
  PVector antiGravity = new PVector(0, -0.015);
  PVector wind = new PVector(random(-0.1, 0.1), 0);

  for (int i= 0; i<movers.length; i++) {

    if (mousePressed) {            //click the mouse to use antiGravity 
      println("MOUSE PRESSED");
      movers[i].applyForce(antiGravity);
    }
    if (keyPressed) {      //press the key "g" to use gravity
      if (key == 'g') {
        movers[i].applyForce(gravity);
      }
    }

    movers[i].applyForce(wind);

    PVector r = PVector.random2D();
    r.mult(0.05);
    movers[i].applyForce(r);

    movers[i].run();
  }
}

class Mover {

  PVector location;
  PVector velocity; //speed
  PVector acceleration; //the rate at which velocity changes
  PVector gravity;
  PShape s;  

  float x;
  float y;

  float angle = 0;
  float aVelocity = random (0.01, 0.05);

  //constructor
  Mover(PShape _s, float _x, float _y) {
    s = _s;
    x = _x;
    y = _y;

    location = new PVector(_x, _y);
    velocity = new PVector(0, 0);      //balloon starts from a stop
    acceleration = new PVector(0, 0);  //no force applied yet; applyForce() accumulates into this
    //antiGravity= new PVector (0, -.01);
  }

  //Functions
  void run() {
    update();
    display();
    checkEdges();

    println("Acc: " + acceleration);
    println("Vel: " + velocity);
    println("Loc: " + location);
  }

  void update() {
    angle += aVelocity;
    velocity.add(acceleration);
    velocity.limit(5);
    location.add(velocity);  //velocity changes by acceleration
    acceleration.mult(0);    //clear acceleration each frame; otherwise forces accumulate and velocity ramps up unbounded
  }

  void display() {
//    stroke(255, 0, 0);
//    fill(0, 0, 255);
    float r = map(cos(angle),-1,1,100,190);
    shape(s, location.x, location.y, r, 125);  //shape(s, location.x, location.y, 125, 125);
  }

  void applyForce(PVector forceBeingApplied) {    //accumulate force into acceleration (assumes mass = 1)
    acceleration.add(forceBeingApplied);
  }

  void checkEdges() {
    float buffer = 150;
    if (location.x> width+buffer) {
      location.x=-buffer;
    } 
    else if (location.x< -buffer) {
      location.x=width+buffer;
    }

    if (location.y> height+buffer) {
      location.y=-buffer;
    } 
    else if (location.y<-buffer) {
      location.y=height+buffer;
    }
  }
}

Nature of Code HW#2 (Vectors and Forces)

NC 2 HW from Colin Narver on Vimeo.

This week's Nature of Code homework has me playing with the forces of gravity and antigravity, simulated through a hot air balloon. Gravity is imposed on the balloon by pressing the "g" key, and a mouse click imposes antigravity. This may or may not be the environment I use to experiment with more forces in future projects. I like the idea of having several hot air balloons blown around by gusts of wind, each balloon having to navigate its course through large flocks of birds.
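The applyForce()/update() pattern in these sketches is simple Euler integration with an implicit mass of 1: forces accumulate into acceleration, update() folds acceleration into velocity and velocity into position, and then acceleration is zeroed so forces must be re-applied every frame. A stripped-down sketch of that cycle in plain Java (the ForceDemo class and its field names are mine, not from the sketch):

```java
public class ForceDemo {
    static float vy = 0, y = 400;  // velocity and position (y-axis only)
    static float ay = 0;           // acceleration, cleared every frame

    static void applyForce(float fy) {
        ay += fy;                  // forces accumulate within a frame (mass = 1)
    }

    static void update() {
        vy += ay;                  // integrate acceleration into velocity
        y += vy;                   // integrate velocity into position
        ay = 0;                    // clear, so forces must be re-applied each frame
    }

    public static void main(String[] args) {
        for (int frame = 0; frame < 100; frame++) {
            applyForce(0.015f);    // constant downward gravity, as in the sketch
            update();
        }
        System.out.println(y);     // position has drifted steadily downward
    }
}
```

Zeroing the acceleration each frame is the crucial step: without it, a constant force would compound every frame and the balloon would fly off screen almost immediately.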

//Colin Narver
//Nature of Code
//HW#2

Mover mover;

//PVector current;

void setup() {
  size(600, 800, P2D);
  mover = new Mover();

}

void draw() {
  background(255);

  PVector gravity = new PVector(0, 0.015);
  PVector antiGravity = new PVector(0, -0.015);

  if (mousePressed) {            //click the mouse to use antiGravity 
    println("MOUSE PRESSED");
    mover.applyForce(antiGravity);
  }
  if (keyPressed) {      //press the key "g" to use gravity
    if (key == 'g') {
      mover.applyForce(gravity);
    }
  }

  mover.run();
}

class Mover {

  PVector location;
  PVector velocity; //speed
  PVector acceleration; //the rate at which velocity changes
  PVector gravity;
  PShape s;  
  PImage img;

  //constructor
  Mover() {
    s = loadShape("balloon.svg");
    img = loadImage("clouds.jpeg");

    location = new PVector(width/2-50, height/2-50);
    velocity = new PVector(0, 0);      //balloon starts from a stop
    acceleration = new PVector(0, 0);  //no force applied yet; applyForce() accumulates into this
    //antiGravity= new PVector (0, -.01);
  }

  //Functions
  void run() {
    update();
    display();
    checkEdges();

    println("Acc: " + acceleration);
    println("Vel: " + velocity);
    println("Loc: " + location);
  }

  void update() {
    velocity.add(acceleration);
    location.add(velocity);  //velocity changes by acceleration
    acceleration.mult(0);    //clear acceleration each frame; otherwise forces accumulate and velocity ramps up unbounded
  }

  void display() {
//    stroke(255, 0, 0);
//    fill(0, 0, 255);
    image(img, 0,0);     //clouds
    shape(s, location.x, location.y, 125, 125);     //hot air balloon
  }

  void applyForce(PVector forceBeingApplied) {    //accumulate force into acceleration (assumes mass = 1)
    acceleration.add(forceBeingApplied);
  }

  void checkEdges() {
    if (location.x> width) {
      location.x=0;
    } 
    else if (location.x< 0) {
      location.x=width;
    }

    if (location.y> height) {
      location.y=0;
    } 
    else if (location.y<0) {
      location.y=height;
    }
  }
}

Random Walker with Motion (1st HW Assignment)

Nature of Code Random Walker from Colin Narver on Vimeo.


//Colin Narver
//HW Assignment 1
//Create a random walker with dynamic probabilities. 
//For example, can you give it a 50% chance of moving in the direction of the mouse?

Walker w;
PShape s;

void setup() {
  size(600,600);
  s = loadShape("bee.svg");
  // Create a walker object
  w = new Walker();

}

void draw() {

  if (w.is50 == true) {    //blink red on each frame where is50 is true, else fill white
    fill(255, 0, 0);
  } else {
    fill(255);
  }
  ellipse(150,150,20,20);
  frameRate(15);
  w.step();
  w.render();
}

class Walker {
  int x, y;
  boolean is50;

  Walker() {        
    //just having it start in the middle, not giving it more assignments in constructor
    //so those arguments are not put into the parenthesis 
    x = width/2;
    y = height/2;
  }

  void render() {
    stroke(0);
    shape(s, x, y, 20, 20);  //loaded svg file of bee
  }

  // Randomly move, with a 50% chance of moving in the direction of the mouse
  void step() {

    float r = random(1);

    //With probability at 50%---every frame is a roll of the dice

    if (r < 0.5) {
      is50 = true;
      //step toward the mouse
      x += (mouseX - x > 0) ? 10 : -10;
      y += (mouseY - y > 0) ? 10 : -10;
    }
    else {
      is50 = false;
      //step away from the mouse
      x += (mouseX - x < 0) ? 10 : -10;
      y += (mouseY - y < 0) ? 10 : -10;
    }

    x = constrain(x, 0, width-1);
    y = constrain(y, 0, height-1);
  }
}

Activity Analysis & Review of Accessibility Features

Task Analysis: Cleaning and waxing winter boots

  1. Get shoebox from under bed
  2. Pick up jar of wax
  3. Pick up two rags from shoebox
  4. Wet one rag in sink
  5. Pick up each shoe and clean with damp rag
  6. Pick up second dry rag and twist into a point
  7. Insert rag into wax jar
  8. Polish wax into each boot

In order to get the shoebox from under the bed, one first needs to get on his hands and knees and extend his arm far enough to reach the box. Once the box is brought out from under the bed, one needs to open it, pick up the hockey-puck-shaped jar of wax and place it outside the box, and pick up the two rags inside as well. The first rag needs to be rinsed with water and/or soap, depending on the current condition of the boots. Each rag that is used needs to be clumped up in one's hand, requiring slight strength and dexterity, and a moderate amount of pressure/force is required to mold it into a point and press it into the boot. One needs to be able to hold each shoe in one's hands/lap while scrubbing off the existing dirt with the damp rag. I typically find it easier to perform this task from a kneeling position so that the tools on the floor are within easy reach. Once the boots are clean, the hardest maneuver by far is opening the jar of wax; it is difficult for me to do even with full strength/acuity. One must slide a thumbnail under the edge of the lid, pull upward, and twist the lid off with the other hand. This action requires the most bilateral arm, hand and wrist strength of the entire task. Finally, once the jar is opened, one must pick up the dry rag, form a pinched point, dip this point into the wax and then rub the applied covering into the leather of each boot. To ensure that the wax is adequately rubbed in, one must have the endurance to work the wax into the boot for at least 3-5 minutes.

After identifying the functionality required to execute this particular chore (in my desired approach), I recognize that there may be alternatives that let a semi- to significantly-impaired individual achieve the same task. For one, the boots could be positioned on a table on top of a towel or blanket, eliminating the need to kneel down. Furthermore, the wax jar, which is infinitely frustrating, could be opened by a friend and its contents transferred to a Tupperware container or some other receptacle that is much easier to access. Additional improvements to my approach could follow from further investigation into the specific needs of the individual user.

Review of Accessibility Features:

The accessibility features of my MacBook are striking in contrast to those of my iPad. For one, the learning curve for my laptop is significantly steeper. For visually impaired users, the VoiceOver function on my Mac comes with a thirty-page tutorial where navigation is contingent on mashing multiple buttons; it feels unintuitive and complex. Perhaps this impression reflects the bias of comfort and familiarity that comes with years of laptop/desktop computer use. Nonetheless, when I compare the VoiceOver capabilities of the iPad, I am given clear and concise instruction for my movements, and I was able to learn the VoiceOver gestures in a matter of minutes. Perhaps again this is due to the more intuitive system of interaction built into tablet computing today. In some ways, the iPad feels like it was natively designed with disabled users in mind: the organic swiping of fingers feels much less cumbersome than the precise button mashing my Mac requires. The VoiceOver flow also seems less hurried and jumbled, perhaps because the iPad is distinctly limited in functionality compared to my Mac. Nonetheless, the simple everyday tasks that anyone (disabled or not) would wish to do on a computer are far easier to accomplish on the iPad.