Saturday, April 29, 2017

Final Code

#include <Adafruit_NeoPixel.h>
#ifdef __AVR__
#include <avr/power.h>
#endif

// Which pin on the Arduino is connected to the NeoPixels?
#define PIN            2

// How many NeoPixels are attached to the Arduino?
#define NUMPIXELS      7



// When we setup the NeoPixel library, we tell it how many pixels, and which pin to use to send signals.
// Note that for older NeoPixel strips you might need to change the third parameter--see the strandtest
// example for more information on possible values.
Adafruit_NeoPixel pixels = Adafruit_NeoPixel(NUMPIXELS, PIN, NEO_GRB + NEO_KHZ800);

int soundInput = 0;   // sound sensor on analog pin A0
float value;

void setup()
{
  Serial.begin(9600);
  pinMode(soundInput, INPUT);
  pixels.begin(); // This initializes the NeoPixel library.
  pixels.show();
}


void loop() {
  value = analogRead(soundInput);

  // Adjust this threshold to the level of your voice or the sound you want the sensor to pick up
  if (value >= 518.00) {
    // pixels.Color takes RGB values, from 0,0,0 up to 255,255,255
    for (int i = 0; i < NUMPIXELS; i++) {
      pixels.setPixelColor(i, pixels.Color(50, 0, 0)); // Moderately bright red
    }
    pixels.show(); // This sends the updated pixel colors to the hardware.

    // You can adjust this delay to set how long the light stays on; the shorter
    // the delay, the more the light changes with the sound
    delay(100);
  } else {
    for (int i = 0; i < NUMPIXELS; i++) {
      pixels.setPixelColor(i, pixels.Color(0, 0, 0)); // off
    }
    pixels.show();

    // Same concept for adjusting how long the light stays off between flashes
    delay(30);
  }

  Serial.println(value);
}
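One way to avoid hand-tuning the hard-coded 518.00 threshold for every room is to calibrate against the ambient noise floor at startup. Here is a minimal sketch of that averaging logic, written as plain C++ so it can be tested off the board; the helper name, the margin parameter, and the sample values are my own inventions, and on the Arduino the samples would come from repeated analogRead(soundInput) calls in setup().

```cpp
#include <cstddef>

// Hypothetical helper: average a batch of ambient ADC samples (10-bit,
// 0..1023) taken during a quiet moment, then add a margin so only sounds
// noticeably louder than the room trigger the pixels.
float calibrateThreshold(const int samples[], size_t count, int margin) {
  long sum = 0;
  for (size_t i = 0; i < count; i++) {
    sum += samples[i];
  }
  float ambient = static_cast<float>(sum) / count;
  return ambient + margin;  // compare analogRead() against this in loop()
}
```

In setup() you could collect a few hundred readings over a quiet second, then compare value against the returned threshold in loop() instead of the fixed 518.00.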

Wednesday, April 12, 2017

NeoPixel Test

The NeoPixel works great and the example code offers plenty of variety, so the next step is to combine the sketches and build a contraption for wearing the device.



Wearable Parts

So far I have a flexible goose-neck, plus fabric and Velcro for attaching the Arduino and battery to the back of the shirt; all I still need are mounts for the NeoPixel and the sound sensor.

Tuesday, April 11, 2017

Cyborgs Ch. 4 & 5

Meghan:

“My body is wherever there is something to be done.” —Maurice Merleau-Ponty
“Distance is what there is no action at.”

I believe it is also about interaction. Technology allows our minds to span the distance that our bodies cannot and allows interaction to occur via computerized extensions of ourselves.

Our location as well as our body is a construct: this time, one formed by our implicit awareness of our current set of potentials for action, social engagement, and intervention.

Telerobotics and telecommunication mean our bodies are no longer the only extension of our brains. Our bodies are the hardware running the operating system in our gray matter, which has a task to accomplish. Through technology we can be wherever there is work to be done without having to physically be there.

The best teleoperator systems, after all, provide rich capacities of finely directed action and intervention, and a wide spectrum of sensory feedback (e.g., force feedback coordinated with visual feedback). This rich two-way energetic exchange is surely just the kind of link that might allow the distant equipment to become transparent in use, whereas issuing high-level commands, with merely visual feedback, to a distant robot seems less likely to generate any real shift in perspective.

The practical reasons for moving toward telerobotics are obvious. It is easier for the operator to issue only high-level commands, and this may be essential when time-delays are critical and bandwidth limited
Time and space no longer limit where our presence can be felt or where we can sense ourselves.




Our brains are plastic: they can compensate for changes using pre-programmed cues, and two neurons firing at the same time form stronger connections than neurons firing alone.
The links between our capacities for action and our perceptual experiences are extraordinarily deep and potent. In a famous series of psychological experiments, human subjects were fitted with special glasses whose lenses turned the visual input upside down. At first, as you would expect, the subjects saw an upside-down world, but after a period of sustained use, the visual world began to “flip back over.” After a few days, the subjects reported that their visual experience was back to normal. Remove the glasses, however, and the world now looks upside down (for a while, until re-adaptation occurs). Most interesting of all, these kinds of perceptual adaptations are highly action-dependent. They are primarily driven by the combination of the visual inputs and the subject’s experiences of trying to move and act in the world (and hence, crucially, by feedback coming through various motor and locomotion systems). Thus a subject fitted with the lenses, but simply pushed around in a wheelchair, does not show the adaptation, while one who walks along a complex trail does. So fluent are our perceptual systems at making these motor-loop-dependent adaptations that it is even possible to adapt to both the presence and the absence of the lenses. By wearing the goggles intermittently, while acting in the world, you can train your visual system to cope with both kinds of input (right way up and upside down). This coping is, remarkably, quite seamless. The instant you don or remove the goggles, your visual system flips into one of the two “settings.” The scene before your eyes looks unchanged to you, nothing seems to flip or alter; ask an untrained friend to try it, and she will immediately flounder in the face of the upside-down world!

The autonomic nervous system is much like the telerobotic systems that allow robots to make decisions based on their master programming.

We are much more like computers than we think. Where we are is limited only by where our mental extensions can take us and how strong the infrastructure is.

In general, then, the sense of extension, alteration, and distal presence arises as a result of close, ongoing correlations between neural commands, motor actions, and (usually multisensory) inputs. Simple telemanipulation and teleoperator systems afford this kind of dense, real-time correlation. The payoff is a compelling sense of bodily augmentation and extension, a sense of genuine (if modest) telepresence. The intimate web of closely correlated signals and responses necessary for such rarified reinvention of the body is, however, quite fragile and easily disrupted. The most important kind of disruption is temporal: if there is a noticeable time lag between issuing the command and receiving the sensory feedback, or (worse still) if the time lag is variable due to the traffic on phone lines, for instance, the illusion is shattered. This is what happens, then, as applications grow in complexity, and distance increases.
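Clark's point in the passage above is that a steady, noticeable lag is bad for telepresence, and variable lag (jitter) is worse. That can be sketched as a toy check over a series of measured round-trip feedback delays; the function name and the millisecond budgets are illustrative assumptions, not figures from the book.

```cpp
#include <cmath>
#include <vector>

// Toy model: telepresence "feels transparent" only if the mean round-trip
// feedback delay stays under a budget AND the delay is steady (low jitter,
// measured here as the standard deviation of the delays).
bool feelsTransparent(const std::vector<double>& delaysMs,
                      double maxMeanMs, double maxJitterMs) {
  double mean = 0.0;
  for (double d : delaysMs) mean += d;
  mean /= delaysMs.size();

  double var = 0.0;
  for (double d : delaysMs) var += (d - mean) * (d - mean);
  double jitter = std::sqrt(var / delaysMs.size());

  return mean <= maxMeanMs && jitter <= maxJitterMs;
}
```

A steady 40-ish ms link passes; a link that swings between 40 ms and 300 ms fails even though some individual delays are fine, which is the "variable lag shatters the illusion" case.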

The next few pages begin to connect this section to the previous chapters and the idea that the illusion of telepresence is much the same as the illusion of our human mind and body: they all rely on connections.



Sam:

The way the brain perceives something is not always true; our senses tell us lies and we do not even realize it. Proprioception is an important sense because it lets our motor skills flow. If something blocks it, we see a more abrupt, disheveled reaction.

This is the part of our neural system that learns responses.
The brain constructs its own virtual reality because of the delay in its transmitting and receiving systems. The same skill is used in industrial control systems.

In teledildonics, the use of machines for pleasure has yet to achieve its goal. Emotion is the main concept, and for that to happen there needs to be human interaction.

An additional point in this reading is how sensors can transmit what one person is doing or feeling to another location. In the chapter, a glove fitted with many sensors was used: when a person moved their hand, the movement was relayed to a distant facility where it could be seen.

Many additions to the body have been created. One example is a third arm used by an artist. The artist doesn't always wear the third arm, but says that after many years of use he can control it smoothly and efficiently.



Kassidy:

At the start of my section, we begin by referring back to Nicolelis's neurological wiring used to control a separate robotic arm 600 miles away; however, this required 96 wires to be implanted into the frontal cortex, which really isn't a practical solution. So we move to a different approach from Stelarc, who uses muscles in his legs and abdomen to control a robotic arm attached to him. This seems more logical because the arm becomes transparent in use, a fluent motion the human brain can adapt to, much as a person uses a video game controller to control a player. Colleagues from the University of Illinois, Northwestern University, and the University of Genoa worked together to use the brainstem and a section of the spinal cord of a lamprey (an aquatic vertebrate) to control a small Khepera robot. They kept the lamprey tissue alive and applied different stimuli to see how it would naturally respond, and whether that response showed up in the movement of the brainstem-controlled Khepera robot. A slightly different animal-machine hybrid was a half-virtual, half-biological animal in which real neural tissue on a glass-and-electrode sheet controlled an animal in a virtual environment.
Other experiments applying technology to our bodies were done by a group of researchers in Germany, who placed external electrodes on a paralyzed patient to pick up the neurological signals needed to control a cursor on a computer; this turned out to be quite difficult for the test subject. Another group went further and implanted two glass cones into the motor cortex of a paralyzed subject, connected by neurons growing onto small electrodes inside them. This turned out to be a very effective method and became second nature to the patient after tests to figure out the correct signals for controlling the cursor. These signals include the will to move a body part, which the brain processes before it moves the body part (or fails to, in the paralyzed subject). This kind of technology definitely has the potential for other uses and applications over time.
Using technology to allow the deaf to hear was discussed in chapter one, but in this chapter scientists use technology to allow the blind to see. An experiment called the Dobelle Eye mounted a small TV camera and an ultrasonic distance sensor on sunglasses connected to a portable computer worn on the hip. The computer transmits visual and distance information to a sixty-eight-electrode array implanted on the surface of the visual cortex. This allowed the patient to see to an extent, and the system could also be hooked up to commercial TV, the internet, and a text-manipulating computer program.
Defense agencies are looking into safety applications and controls for pilots that would let them operate control systems through direct neural activity or by gaze direction; the safety application would be clothing that monitors the pilot's physical state and can switch to autopilot in a medical emergency. The lesson, once again, is that our brains are amazingly adept at learning to exploit new types and channels of input, and that we are moving toward a world of wired people and wireless, radio-linked gadgets.
Future applications have been explored that try to connect two humans through their brains, giving each partial control of the other's body. A possible problem is the chance of fighting for control instead of harmony between the pair.
The next subject is biotechnology, identity, and control.



Ryan:

My section of the reading was mainly about how the human body image can incorporate non-biological items such as arms, cars, tables, and so on. This happens because the human mind maintains a body image, and that image can be changed rather easily. For example, imagine a doctor doing microsurgery on a patient using a remote-controlled arm. The doctor's mind will make those controls part of his or her body image, and the machine's arm will feel like an extension of his or her body.
The body can be manipulated in many different ways, and the body's self-image is not the only thing about the mind that can be changed. The brain acts as a foundation, keeping a firm grounding in what is around the individual. It monitors the surroundings but only picks up on the important details that fall within our high-resolution sight, not the periphery. Because of that, the human mind can be tricked very easily by sleight of hand and other tricks. There is more than visual sensing, though, such as body sensing: that feeling of something near you just before it touches you. Raw sensory feedback takes too much time to use effectively, so the signals that are received are converted into a smooth motion. This happens thanks to special neural circuitry, a motor emulator: the circuit takes a copy of the motor signal and feeds it into a neural system that has learned the typical responses from the body's periphery, acting like a local, small-scale circuit for the brain.
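The motor-emulator idea above can be sketched in a few lines: the circuit keeps a copy of the outgoing motor command and predicts the sensory feedback it should produce, so movement can be guided by the prediction instead of waiting for the slow real signal. The struct name and the linear `gain * command` mapping are invented stand-ins for whatever mapping the real neural system has learned.

```cpp
// Toy forward model of the "efference copy" loop described above.
struct MotorEmulator {
  float gain;       // learned mapping from command strength to expected feedback
  float predicted;  // most recent prediction

  // Receive a copy of the motor command and predict the feedback it causes.
  float predict(float command) {
    predicted = gain * command;
    return predicted;
  }

  // When the real (delayed) feedback finally arrives, the mismatch is the
  // error signal that would correct the movement or update the model.
  float mismatch(float actualFeedback) const {
    return actualFeedback - predicted;
  }
};
```

The brain acts on predict() immediately; mismatch() only matters later, which is how the loop stays smooth despite transmission delay.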