Tuesday, 8 December 2015

Design Board Process

Our design board has had to evolve throughout the project. We started by producing mood boards to represent elements of our project such as direction, possible technology, related work, etc. Below is the first of these mood boards:




Below is a revised mood board. The colour scheme is closer to what was used in the final design board, and the purpose of the imagery is somewhat more evident. We also greatly reduced the text content, as the final design board would need to communicate primarily through its design, and we wanted to reflect this. The content such as imagery and background was handled by Shane, and the colours/text by me. We tried to categorize the various elements of our project in order to visualize them in a simple, understandable manner.




Below is a picture of the design board in progress. The layout and design are much simpler and more minimalist, as we wanted the purpose of the project to be obvious to the reader by providing basic images whose meaning is immediately apparent and not obscured by unnecessary details.




Below is the finished design board. Shane and I worked closely to populate and structure this final board. We continued to use minimalistic and straightforward imagery, but we chose to also include some text. We wanted the important text to be striking, such as the tagline "Motion, Mood, Participation". We did this by utilizing size, colour, and placement on the page. We opted for a dark background with a subtle image, as we felt it made the orange colours stand out more and gave more contrast to the imagery.






Friday, 4 December 2015

Public Displays

Once we had an outline of what we wanted to do for our new direction, I started to look into different types of public displays: how they work, and what is and isn't effective, to try and gain a better understanding of what we will need to do if we were to implement one of our own.

One of the biggest issues with modern cities is that people don't seem to interact anymore. Whenever people are walking around a city or using public transport, it always seems to be a matter of keeping your head down and keeping yourself to yourself. Even eye contact is now taboo: that awkward look away people do when you make contact with them in the street. One of the main causes of this seems to be the explosion of smartphones and other such devices that fit in your pocket and give you a good excuse not to have to even look at someone you might meet in the street. Despite the fact that this causes people in cities to become detached from one another, the digital age is here to stay, so the issue is not going to go away any time soon. People don't want to interact with each other, so how do we get people to interact with a display, and further, interact with a display with someone else?

I looked at interactive displays that people had done before:


1)


This billboard appeared in Dublin for Honda, where customers could “start” the car by texting to an SMS shortcode and could also download information by Bluetooth. The campaign was created by GT Media and JC Decaux using technology provided by PĂșca.


2)



Nike demonstrates that even in philanthropy, it stands for athletes. In an interactive billboard (by BBDO) publicizing a charity 10k run in Argentina, the athletics powerhouse invites passers-by to have a run on a treadmill that logs a communal kilometre count. For each kilometre run, Nike donates a set amount to UNICEF, urging that "Training for the 10k doesn't only help you. For each kilometer run, you will be helping UNICEF."


3)


As pedestrians walk past the wall, infrared sensors will lock on to the person closest to the wall, who will then be able to control a projected slider button at the bottom of the wall. As the selected pedestrian continues walking and moves the slider along, the wall will start displaying colorful animation and playing music, effects that will grow or recede at the pace that the person advances or retreats.

The main things that stand out from looking at different types of public display are, first, that they need to draw the attention of people from a distance; something people won't notice will never work, as they will have no incentive to use it if they barely notice it. The second thing is that they all use different technologies and have different functions to those of the standard devices that people have now grown accustomed to; the most successful types of displays show people something they have not seen before. To make the display effective, we should have users know before they use it that this is going to be a unique and new experience.


During my research I came across some good information about how to properly get people to approach and interact with a display. The information provided will be very useful for future development.


Public displays need to grab the attention of passers-by, motivate them to interact, and deal with the issues of interaction in public. In contrast to many other computing technologies, interaction with public displays does not start with the interaction itself. Instead, the audience is initially simply passing by, without any intention to interact.



People pass through different phases, where a threshold must be overcome for people to pass from one phase to the next. For each pair of phases, a conversion rate can be calculated of how many people are observed to pass from one phase to the next, and different displays can be compared by these rates. In the first phase, people are merely passing by. In the second phase, they are looking at the display, or reacting to it, e.g. by smiling or turning their head. Subtle interaction is only available when users can interact with the display through gestures or movement, and occurs, e.g., when they wave a hand to see what effect this causes on the display. Direct interaction occurs when users engage with a display in more depth, often positioning themselves in the center in front of it. People may engage with a display multiple times, either when multiple displays are available or if they walk away and come back after a break. Finally, people can take follow-up actions, like taking a photo of themselves or others in front of the display.
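To make the conversion-rate idea concrete, here is a small JavaScript sketch with made-up numbers (the phase names follow the paper, but the counts are purely illustrative):

```javascript
// Hypothetical observation counts for each phase (illustrative only).
var counts = {
  "passing by": 200,
  "viewing & reacting": 80,
  "subtle interaction": 30,
  "direct interaction": 12,
  "multiple interactions": 5,
  "follow-up actions": 3
};

// Conversion rate between each pair of consecutive phases.
var phases = Object.keys(counts);
for (var i = 0; i < phases.length - 1; i++) {
  var rate = counts[phases[i + 1]] / counts[phases[i]];
  console.log(phases[i] + " -> " + phases[i + 1] + ": " + (rate * 100).toFixed(1) + "%");
}
```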

The full study can be found here:
http://wdirect.pervasiveadvertising.org/pdf/MM10MuellerReqDesignSpace.pdf

Sunday, 29 November 2015

Using Technology to Incite Social Interaction and Affect Mood

Through our technical exploration and interest in public collaboration via music and visuals, we have become more interested in the social aspect of our project: using technology to instigate social interaction, influence mood, and allow us to study the effects under the conditions of different social spaces.

Full Body Gestures
The basic concept for this would be to place an interface in public spaces, such as pedestrian streets, bus stops, even bathrooms, and use technology which recognises gestures to create music and visuals based on these gestures.
Hand Gesture Examples


Various gestures would launch different audio samples, such as a drum beat or bass line, along with visuals which would play as long as a gesture is held. The more participants involved, the more music would play and the more visuals would be created. For example, one participant may launch drum samples with blue visuals, while the other launches bass samples with yellow visuals.
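As a rough sketch of how that mapping might work (all gesture names, sample files, and the start/stop functions below are hypothetical stand-ins for whatever gesture recogniser and audio/visual engine we end up using):

```javascript
// Hypothetical mapping from recognised gestures to audio samples and colours.
var gestureMap = {
  raiseLeftArm:  { sample: "drum_loop.wav", colour: "blue" },
  raiseRightArm: { sample: "bass_line.wav", colour: "yellow" }
};

// Stubs standing in for the real audio/visual engine (e.g. clips in Ableton and projected graphics).
function startLayer(layer) { console.log("start", layer.sample, "with", layer.colour, "visuals"); }
function stopLayer(layer)  { console.log("stop", layer.sample); }

var activeLayers = {};

// Called by the (hypothetical) gesture recogniser whenever a participant
// starts or stops holding a gesture.
function onGesture(participantId, gesture, held) {
  var layer = gestureMap[gesture];
  if (!layer) return;
  var key = participantId + ":" + gesture;
  if (held && !activeLayers[key]) {
    activeLayers[key] = layer;
    startLayer(layer);               // sample and visuals run while the gesture is held
  } else if (!held && activeLayers[key]) {
    stopLayer(activeLayers[key]);
    delete activeLayers[key];
  }
  console.log(Object.keys(activeLayers).length + " layer(s) playing");
}

// Example: two participants each holding a different gesture.
onGesture(1, "raiseLeftArm", true);
onGesture(2, "raiseRightArm", true);
```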

Through participation, social interaction is instigated, with more participants adding to the whole piece.

To solidify this social interaction, the piece would aim to affect the participants' mood in a positive manner, using specific colours, and create an association between positive mood and interaction in public space. Colour and its effect on human mood, productivity, and social interactivity have been studied in depth and have been proven to have a large effect on us, with colours such as red making us more likely to be aggressive, or certain shades of green increasing productivity.

Colour chart and emotions associated with them.


The actual installation itself would use motion tracking technology via readily available hardware such as a standard webcam (or a Microsoft Kinect, which would allow for a wider variety of gesture tracking). Motion tracking software such as Motion Studio, together with Max for Live inside Ableton, would allow us to convert gestures into MIDI controls, which would in turn launch visuals and audio based on these gestures.

Visuals would be displayed via screens or projections, with most of the 'work' being done by the software. This means that the installation would not require a large amount of space and could be placed in a multitude of locations.

Wednesday, 25 November 2015

Sense of Community / Location

It seems the bigger the city, the less likely people are to engage with their surroundings and other people around them. This is something I'm sure most people are aware of, but the question is why. This is what many cities and sociologists are trying to figure out, and subsequently reduce. Thankfully we don't experience this phenomenon to a huge extent growing up in Ireland, but in major cities across the world there exists a complete lack of community. As a result, people are reluctant to connect and cooperate with one another, and this can have a knock-on effect in many areas such as crime or racism. A lack of communication with one another results in a lack of understanding and empathy, as people can become indifferent to others around them. This is why it is massively important to try to create a greater group mentality, rather than thinking indifferently as individuals.
Projects like the one we are proposing could help promote engagement with others, and in turn a sense of community and association. It could not only have an impact on the mood and attitude of an individual, but have a domino effect on many in an area. This makes the project very interesting from the point of view of mass psychology and sociological behavior.





While developing the prototype and finalising the idea is the ultimate priority at the moment, it is also crucial that we select the correct place for the artifact, should we decide to display it in a public place.

It would be pointless to develop a great idea designed for people to interact with, and have no one interact with it. Recently I've been reading about how people interact in public spaces and cities, how people's surroundings can influence their behaviour, and, as a result, how they affect their surroundings. I've also been observing different public places in Cork City, seeing where would be an appropriate location to set up.
The area needs to be away from busy streets and the hustle and bustle of large groups, as people will take no notice, however it also can't be in an area so quiet that no one will notice it.
Ideally it would be in an area where people are relaxed enough to engage with something like this, while also having the time to spontaneously interact.
That's why an ideal location would be a park or park entrance. This is where people go in a time of leisure to relax, and such areas usually see a lot of footfall, especially in the city centre.

From what I observed around the city, areas like Bishop Lucey Park, Fitzgerald's Park, and the benches near the memorial on the South Mall are ideal prospects. This is not only due to the number of people who pass through them on a daily basis, but because of the relaxed and friendly atmosphere they naturally possess. They are full of people socialising and engaging with one another, and would therefore suit perfectly as the kind of setting in which to carry out our project.


People relaxing in Bishop Lucey Park



Linked below are a number of interesting articles I've read on the effect of location on interaction, fostering interaction in cities, and how to create public spaces that encourage people to interact.



Creating public spaces which encourage people to interact

Below is a basic representation of the goal of creating a project like this: the effect it may have on the community, the data that could be collected from it, and the artistic benefits in terms of people coming together and creating music and imagery.







Monday, 23 November 2015

Visual Display

I began to look into different types of visual displays that we could use in the project. Because the music will be one of the main features of the project, I began looking into ways of triggering visuals with the music that will interact with it.



MAX 7

In the process of looking for ways to make live visuals that will run to music, I found Max 7.
Max is a visual programming language for music and multimedia developed and maintained by San Francisco-based software company Cycling '74. During its 20-year history, it has been used by composers, performers, software designers, researchers, and artists to create recordings, performances, and installations.
The Max program is modular. Most routines exist as shared libraries. An application programming interface (API) allows third-party development of new routines. Thus, Max has a large user base of programmers unaffiliated with Cycling '74 who enhance the software with commercial and non-commercial extensions to the program. Because of its extensible design and graphical user interface (GUI), which represents the program structure and the user interface as presented to the user simultaneously, Max has been described as the lingua franca for developing interactive music performance software.
Max 7 will let you create your own visuals and plug-ins for any compatible DAW. These could be used in tandem with clips launched from Ableton Live, and also with the gesture-capturing technology.

                                  Audio-Visual Feedback System by Max/MSP


Features:
  • Full support for MIDI devices and modern audio hardware.
  • Limitless audio options including basic DSP building blocks, VST, Audio Units, and Max for Live devices.
  • Flexible support for multi-channel audio.
  • Realtime input from webcams, digitizers, and built-in hardware.
  • Serial and HID support for a wide variety of electronic prototyping boards and controllers.
  • Interactive OpenGL graphics and GLSL shaders, including realtime shadows.
  • Support for multiple displays and tools for live projection.
  • Efficient realtime HD playback and hardware-accelerated image processing.
  • Transcoding and interaction between audio, video, graphics, and control data.

Max for Live
After researching what Max 7 was capable of and how we could use it, I came across Max for Live. Max for Live is a version of Max 7 that is specifically made for programming within Ableton. The advantage of this is that there are plenty of programs already made for Max for Live that can be downloaded.







Max for Live comes with a great collection of instruments, effects, and tools. And there’s even more available from the dedicated community of artists and builders who share their Max for Live creations.
Every Max for Live device is ready to use in your own music, but can also be edited and customized to suit your specific needs. And because Max for Live is part of Ableton Suite, it’s perfectly integrated into the familiar Live workflow.
Max for Live lets you build your own devices for use in Live. Create custom synthesizers, samplers, sequencers, audio effects, and much more. Max for Live also allows you to build devices that modify Live itself, including the properties of tracks, clips, and native Live devices.
Every Max for Live device includes an Edit button, allowing you to look at (or modify) how the device was made. And Max for Live comes with a collection of Live’s native interface elements, so you can build devices that look and feel just like Live.
Max 7 introduces a number of new features that make patching easier and more powerful. And all of them are available in Max for Live.
In Max 7, the interface has been redesigned, allowing for easier patching. Audio quality has been improved with a 64-bit audio engine and improved filter design tools. Additional features include enhanced OpenGL support, including a new physics engine and support for Gen, an add-on that compiles patches into code for improved performance.
On maxforlive.com, users contribute to an online library of free Max for Live devices, sharing .amxd files or links (known as "references" on maxforlive.com) to download or purchase them elsewhere.
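To make the point about devices that modify Live itself a little more concrete, here is a tiny sketch of the kind of thing the js object inside a Max for Live device can do with the LiveAPI. Treat it as an illustration rather than a tested device; the track and clip indices are assumptions:

```javascript
// Runs inside a [js] object in a Max for Live device.
function bang() {
    // Point the LiveAPI at the first track of the current Live set.
    var track = new LiveAPI("live_set tracks 0");
    post("Track name:", track.get("name"), "\n");

    // Toggle the track's mute state - a trivial example of modifying Live itself.
    var muted = Number(track.get("mute"));
    track.set("mute", muted === 1 ? 0 : 1);

    // Fire the first clip slot on that track (assuming it holds a clip).
    var slot = new LiveAPI("live_set tracks 0 clip_slots 0");
    slot.call("fire");
}
```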







Monday, 16 November 2015

Using Text Data to Manipulate Images & Sound (pt. 2)

Following on from my previous experiment, my next idea was to use similar text-data to manipulate sound, as this would tie into our data-controlled MIDI concept.

What I created was a page which used the same functionality as the last one (creating coloured images which change depending on data), but accompanied by sound. Three 'notes' will appear onscreen at any one time. These notes are randomly selected from a range of numbers in a separate text document. Each of these numbers correlates to a sound file in another folder; the higher the number, the higher the sound. The sounds I used are royalty-free recordings of piano keys. These three notes will play at once when the page is loaded, forming a sort of chord, though due to the random nature of the note selection process this chord is usually rather discordant.

This project can be found here: http://dindins.web44.net/data%20stuff/test2.php

Initially the site would automatically refresh every few seconds so that the sounds would randomise continuously, but I removed this feature because I felt it was unnecessary and annoying. The page can be refreshed manually for the same result and a new 'chord' each time.

Below is a screenshot of the site in action:



As you can see, as with the previous test, the values extracted from the text file correlate to colour, as well as sound. The higher the number in the note's div tag, the more saturated/brighter the green. 

The next image will briefly explain how this was all achieved. The first section of code seen uses the same concatenation approach as last time to style the colours of the div elements according to their note value. The second section shows how the audio files were launched. I used HTML5's audio tag functionality to add the sounds to the page. A similar method to the styling is used, where a random number is taken from the previously declared array and related to a sound file with the same number in its filename.
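A rough sketch of that approach in plain JavaScript (the folder, file names, and colour scaling are assumptions; the real page builds the same thing from the PHP-injected values):

```javascript
// 'values' stands in for the array of note numbers parsed from the text file.
function playChord(values) {
  for (var i = 0; i < 3; i++) {
    // Pick a random note number from the array.
    var note = values[Math.floor(Math.random() * values.length)];

    // A div whose green channel scales with the note value (higher = brighter).
    var div = document.createElement("div");
    div.textContent = note;
    div.style.backgroundColor = "rgb(0," + note * 25 + ",0)";
    document.body.appendChild(div);

    // HTML5 audio: the note number is part of the file name, e.g. sounds/piano-7.mp3.
    var audio = document.createElement("audio");
    audio.src = "sounds/piano-" + note + ".mp3";
    audio.autoplay = true;
    document.body.appendChild(audio);
  }
}

playChord([1, 2, 3, 4, 5, 6, 7, 8, 9, 10]);
```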




This test suffers from the same 'undefined' issue that I experienced in the last one, but was otherwise quite successful. It bridges the gap somewhat between our concepts of data gathering and using music as an output or feature, as it utilises both of these ideas.




Sunday, 15 November 2015

Positive body language / Gauging mood


There are many ways of assessing the mood of people without the use of language. Often the posture and movement of a person is all you need to tell whether they're feeling on top of the world or down in the dumps.
With this project we intend to use software to essentially measure the type of mood a person is in, by taking factors like stance and arm position into consideration. We then want to influence this for the better, and see what kind of impact we can have on the individual and a city as a whole. 

For us, it's easy to judge how someone's feeling just by body language; it's human nature. However, it is a little more complex when it comes to computers, as they have no intuition when it comes to human interaction. Therefore we have to program them with certain patterns and shapes that someone in a particular mood would make.
For example, a person with positive body language, in a good mood, might make a power stance with their arms in the air out wide, almost in celebration.



Mick Jagger did this……. a lot.


This kind of body language could be rewarded by positive imagery or audio, or alternatively produce the kind of pleasant visual and acoustic art that would be a result of the person's participation. The effect could be huge if it could somehow affect the person's body language after they had interacted with the project, as they may carry that into the rest of their day. If everyone walked away from it with a more friendly and open energy, it could influence others around them.
A kind of domino effect like this could have a massively positive impact on the city as a whole, and not just the individual that partakes in our project.

At the moment there are multiple types of software and methods that try to analyse people's mood and assess how they're feeling based on body language alone, using things like webcams to track movement and pre-programmed actions.




A chart of positive / negative body language



An example is Antonio Camurri of the University of Genoa in Italy. He and his colleagues have built a system which uses the depth-sensing, motion-capture camera in Microsoft's Kinect to determine the emotion conveyed by a person's body movements. Using computers to capture emotions has been done before, but such work typically focuses on facial analysis or voice recording, rather than reading someone's emotional state from the way they walk across a room, or their posture at that time.
The system uses the Kinect camera to build a stick figure representation of a person that includes information on how their head, torso, hands and shoulders are moving. Software looks for body positions and movements widely recognised in psychology as indicative of certain emotional states.
Below is a link to an article on Antonio and the University's achievement. 

Saturday, 14 November 2015

Varying Levels of User Interaction & Ambient Displays

Many researchers have been investigating ways to immerse users in an environment in which digital information is provided with varying levels of subtlety. An example of this would be the concept of ambient displays, which may provide digital information in the traditional format, but also utilise the user's surroundings and other senses to provide subtle information.

The following research paper, Ambient Displays: Turning Architectural Space into an Interface between People and Digital Information, compared the use of a computer for gaining information to 'looking through a small window'. The authors noted that this method is limited in that it concentrates all of its information on the user's main area of focus. They propose that this experience can be enhanced by creating specialised environments where information exists all around the user.

One example the authors provided of such an environment is the ambientROOM. This was a project developed by the MIT Media Lab in 1997. It consisted of a fully enclosed room in which the user sat, surrounded by 'ambient media'. For example, a dot pattern on the wall would become busier depending on how many humans were detected in the area. The idea is that the user would not fully focus their attention on this wallpaper; instead the information would be subtly transferred to them through their peripheral vision.

This next report, Heuristic Evaluation of Ambient Displays, discusses how the effectiveness of these environments can be evaluated. This gives us a good idea of what features are important and how they should be implemented. The following bullet points outline some of these features, based on the heuristics used in this report:

  • All information provided should be relevant
  • The display should be unobtrusive unless it requires full attention
  • The user should notice a change in data, and not that the display clashes with its environment
  • The display should be intuitive to minimise cognitive load
  • Changes in the display's state should be easily noticeable
  • It should be aesthetically pleasing
This final paper is in my opinion most relevant to our project, as it details the use of ambient displays in a public setting. It is quite possible that our project will be situated in such an environment, considering the directions we have been exploring (public data, public mood, collaboration, etc.).
It is called Interactive Public Ambient Displays: Transitioning from Implicit to Explicit, Public to Personal, Interaction with Multiple Users. The authors developed a system of displaying information in public, which provides more specific information depending on the user's engagement. For example, the system tracks how the user's body is oriented, so that if they are facing the screen they are assumed to be more engaged and are provided with more detailed information.

Below is a video of this system in use:


These concepts could be beneficial to our project. If we are to base our project in a public setting, we might need a way to distinguish who is engaging with the project and who is simply passing by. In the area of data collection/representation, this provides us with a way of refining the data collected from that public environment, so that we are aware of the level of interaction the users were demonstrating.


External Links/References:


http://tmg-trackr.media.mit.edu/publishedmedia/Papers/314-Ambient%20Displays%20Turning%20Architectural/Published/PDF

http://tangible.media.mit.edu/project/ambientroom/

http://dl.acm.org.cit.idm.oclc.org/citation.cfm?id=1029656&CFID=565650252&CFTOKEN=48295749

http://dl.acm.org.cit.idm.oclc.org/citation.cfm?id=642642&CFID=565650252&CFTOKEN=48295749

Friday, 6 November 2015

Using MIDI control

Over the past two weeks our aim has been to control a virtual keyboard using physical input, and also via data taken from the web.

For the physical input, I have been studying the use of MIDI, and using non-standard physical inputs, such as a computer mouse and Xbox 360 controller.

The physical hardware I am using is an Xbox 360 controller, which has standard button inputs, analogue inputs which detect range of motion, and pressure-sensitive triggers. This allows a wide range of inputs, which I hope we can apply later in our project, with the final aim of being able to use multiple Arduino sensors, such as heat, light, etc.

The software I have been using is Ableton Live 9, a Digital Audio Workstation, which contains a vast number of virtual instruments, recognises MIDI input, and is also capable of MIDI mapping, meaning I can map controls to any part of the software (from playing a note on the virtual keyboard to changing volume, filters, EQs, etc.).

Ableton has built-in support for recognising actual MIDI instruments, such as the APC40, electronic pianos, etc., but the problem I am trying to solve is how to use devices that are not designed to be used for MIDI control.

To bridge this gap, I found GlovePIE, a piece of software normally used to map controller inputs to keyboard presses for video games that do not have controller support as standard. Instead, I will be using it to map any form of input into MIDI, which is then put through an internal MIDI port using LoopBe.

This means that the basic map of input is

Xbox Controller - USB - GlovePIE - LoopBe - Ableton Live.

Below are images of the software interaction.




midi.DeviceOut = 2 is where GlovePIE sends its MIDI output, in this case device 2, which is the LoopBe internal port. LoopBe acts as the bridge, taking that output and sending it on to Ableton Live.

midi.channel1.c4 = XInput.A means that when I press the 'A' button on my Xbox controller, it triggers the MIDI note C4.


These are the notes to which individual sounds, such as a snare, kick, or other samples, can be mapped.

Below is a screencap of these two interacting.

Below is a video of the controller in action







Tuesday, 27 October 2015

Communication without language / Importance of Context with Data / Crowd Sourced Data  



Communicating without language


The initial direction of our concept was to create an artistic visualisation which also represented data. While this is still a direction we want to pursue, there was one particular issue that was raised in a number of meetings: whether it was possible for our proposed visualisation to communicate the data we intended in an instinctive way that didn't need explaining. By this I mean the observer would understand the data being conveyed, without a need for context or a prior explanation.
Whether this happens instantaneously or after a few moments of thought, the idea was for everyone to be able to interpret what they were seeing through instinct alone, using something that's ingrained in all of us.

As much as this idea appeals to all of us, it became clear after a few weeks of deliberation and brainstorming how difficult this task may be. This set me off trying to find what all humans have in common with regard to communication, and whether it is possible to portray data purely through human nature and instinct: a sort of international language.


This led me on to a number of articles which described using different mediums to communicate ideas and messages. The most interesting of these was a system of communication developed by Ajit Narayanan for children with autism who had issues with language and speech.

Ajit Narayanan TED Talk

I also discovered a number of other articles that explored the idea of non-verbal communication, using mediums such as body language and facial expression, some of which we may be able to utilise in our project.

http://neuroanthropology.net/2010/07/21/life-without-language/

http://www.littlethingsmatter.com/blog/2011/02/24/communication-without-words/





Context


This is an aspect I had not fully appreciated until I had done some research into the area. With data, the importance of context cannot be overstated. Without context you are essentially staring at meaningless imagery, shapes, colours, or whatever the medium of expression may be. A quote from an article written by Nathan Yau for Big Think encapsulates this perfectly:

Without context, data is useless, and any visualization you create with it will also be useless. Using data without knowing anything about it, other than the values themselves, is like hearing an abridged quote secondhand and then citing it as a main discussion point in an essay. It might be okay, but you risk finding out later that the speaker meant the opposite of what you thought.

This describes perfectly why we cannot overstate the importance of context, or else we are neglecting a massive part of what we are trying to achieve with Data Art. This is why it is so crucial to find a way to communicate what we are trying to represent with whatever it is we eventually create. The article from Big Think and another article about the importance of context are posted below.

http://bigthink.com/experts-corner/understanding-data-context



Crowd Sourced Data / Data Art


Since the beginning of the module, the main area the group has been most interested in is the idea of crowd-sourced data.
We wanted to use data about people in an area and display that information back to the people. I began researching different methods of collecting data and displaying it, and came across some very helpful and interesting articles.
My favourite of these were TED talks done by Aaron Koblin and Jer Thorp.
They gave me a more human and artistic outlook on data, and showed how it can be used to create some beautiful imagery while also portraying a message.

https://www.ted.com/talks/aaron_koblin?language=en#t-93545

Watching these videos gave me a deeper appreciation for using data for artistic purposes. I began looking at other artists who had incorporated data into their work.
This article from The Atlantic described how people in artistic fields have begun using the flow and movement of people to create beautiful pieces of artwork.

http://www.theatlantic.com/entertainment/archive/2015/05/the-rise-of-the-data-artist/392399/

Monday, 26 October 2015

Using Text Data to Manipulate Images & Sound (pt. 1)

I was given the task of extracting text data from a .txt file or equivalent, and using code to manipulate it to create images or sounds. This would form a foundation for our project, as once we know how to convert data into non text-based forms, we can start manipulating all sorts of data to do a variety of things.

Here is a link to my website, showing a visual representation of random values extracted from a text file: http://dindins.web44.net/data%20stuff/test.php

When I started work on this task I was initially inclined to use JavaScript to extract the text-file contents, because JavaScript is used to read XML files etc. and is what I planned to use to create the onscreen images. After several attempts at this, however, I discovered that JavaScript is not capable of accessing local files for security reasons. In the end I had to resort to using PHP, which meant this process would only work if the text file was uploaded to a server. In future I hope to find a way of achieving these same results, but using files that are located locally on the user's machine.

Below is a screenshot of some of the code to show my process, which can be summarized as follows (a rough sketch of the same steps appears after the list):

  •  I was able to access the text file on the server using PHP
  • I then stored this text in a JavaScript variable as a string
  • Next I used the 'split' method to separate the string into single numbers
  • I then created an array which contained each of these values
  • This array could then be randomized using a shuffle function
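Here is that rough sketch, assuming the file contents have already been echoed into the page by PHP as a space-separated string:

```javascript
// Stand-in for the string of numbers that PHP reads from the server-side text file.
var rawText = "3 7 1 9 4 6 2 8 5";

// Split the string into individual values and convert them to numbers.
var values = rawText.split(" ").map(Number);

// Fisher-Yates shuffle to randomise the order of the array.
function shuffle(arr) {
  for (var i = arr.length - 1; i > 0; i--) {
    var j = Math.floor(Math.random() * (i + 1));
    var tmp = arr[i];
    arr[i] = arr[j];
    arr[j] = tmp;
  }
  return arr;
}

console.log(shuffle(values));
```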



To create the visuals of this text I used JavaScript to write 'div' tags into the document. I used the knowledge I gained doing my previous weather data project to edit the colours of these divs depending on the random value from the text file assigned to them. Below is a screenshot of this code:


Here is an example of one arrangement of these values:


These values will randomize each time the page is refreshed.

One problem I encountered when making this is that I always seemed to get one value back that was 'undefined'. I can only assume that this issue lies with the way I wrote the 'for loop', but I am as yet unsure. I hope to resolve this issue for later projects.
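For what it's worth, the two most common causes of a stray 'undefined' in this kind of loop are iterating one index past the end of the array, or a trailing space/newline in the text file leaving an empty final element. A minimal illustration, purely as a guess at the cause:

```javascript
var values = "1 2 3".split(" ");

// Off-by-one: <= runs one index past the end and logs undefined.
for (var i = 0; i <= values.length; i++) {
  console.log(values[i]); // 1, 2, 3, undefined
}

// Correct bound:
for (var j = 0; j < values.length; j++) {
  console.log(values[j]);
}

// A trailing space in the file has a similar effect:
console.log("1 2 3 ".split(" ")); // ["1", "2", "3", ""] - note the empty last element
```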

Wednesday, 14 October 2015

Data Visualisation Test

I decided to begin testing ways of acquiring usable data and how to use code to represent it as a graphic. Here is a link to a simple website I made with visualised data:

http://dindins.web44.net/weather/visualtest.php

The site consists of three red circles representing Cork Institute of Technology, University College Dublin, and Massachusetts Institute of Technology. The site reads the current temperature for each of these locations and adjusts the appearance of these circles accordingly. The higher the temperature, the larger the circles become, and the more saturated the colour becomes. Below is a screenshot of the site:


You can see that the circle for M.I.T. is the largest and most vibrantly coloured, which directly correlates to its high temperature. You can see the exact temperature figures in degrees celsius next to the place names.


To do this I collected real-time weather data using The Dark Sky Weather Forecast API. Using PHP, I input longitude and latitude coordinates from Google Maps to specify the locations to gather forecast data for. I then created containers whose style tags would adjust depending on the data. The current temperature for each of these locations was stored in a variable as a number. I then manipulated these numbers by concatenating the variables with strings to set the size (in pixels) of the containers and their RGB colour values. Below is an example of the code that uses the temperature data variable to alter the style tag of a container:
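For reference, here is a rough sketch of the same idea in client-side JavaScript rather than PHP (the API key is a placeholder, the coordinates are approximate, the scaling is arbitrary, and any cross-origin restrictions the API imposes are ignored):

```javascript
// Approximate coordinates for the three campuses.
var locations = [
  { name: "C.I.T.", lat: 51.8845, lon: -8.5339 },
  { name: "U.C.D.", lat: 53.3067, lon: -6.2210 },
  { name: "M.I.T.", lat: 42.3601, lon: -71.0942 }
];

locations.forEach(function (loc) {
  // units=si asks the Dark Sky API for temperatures in degrees Celsius.
  var url = "https://api.darksky.net/forecast/YOUR_API_KEY/" +
            loc.lat + "," + loc.lon + "?units=si";

  fetch(url)
    .then(function (res) { return res.json(); })
    .then(function (data) {
      var temp = data.currently.temperature;

      // Higher temperature -> bigger, more saturated red circle.
      var circle = document.createElement("div");
      circle.style.width = circle.style.height = (50 + temp * 5) + "px";
      circle.style.borderRadius = "50%";
      circle.style.backgroundColor = "rgb(" + Math.min(255, Math.round(100 + temp * 10)) + ",0,0)";
      circle.title = loc.name + ": " + temp + " °C";
      document.body.appendChild(circle);
    });
});
```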







Monday, 12 October 2015

Materials and Data Testing 1

I started to test out some things to get an idea of what we could achieve with this project, and to get some hands-on experience with some of the technologies and materials that we could potentially use. This will give us a better idea of what we are working with and what we would need to achieve the end goal of a kick-ass project.

Arduino Test:

To start off, I wanted to mess around with an Arduino to see what we could do with it in terms of gathering data and representing it. The idea I had for a basic data representation test was to try to use Twitter hashtags as the data and have some kind of LED or sensor react through the Arduino whenever a certain hashtag is used. The reason for this was that Twitter hashtags are a basic form of live data that can be easily monitored, and I wanted to see if I could get the Arduino reacting to live data.

This test quickly fell apart after some research into how to get the Arduino to read live data from the web. A Wi-Fi shield is necessary for the Arduino to be able to do this, and I was not able to get one for the basis of this test. I was not able to run this test, but I am looking more into getting Arduinos to read live web-based data and it is something I will come back to.

Wifi Shield

After not being able to get an Arduino to read live web data (yet), I instead started to mess around with it, getting it to read live physical data, such as movement, and use that to trigger some sort of feedback.

What I did was create a basic system using an ultrasonic range detector, some LEDs, and a buzzer, all connected to an Arduino, and I wrote a sketch that would read the distance of an object from the sensor. The closer the object came to the sensor, the more LEDs would light up in a row and the higher the frequency of the buzzer's noise would become.
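Purely as an illustration of the logic, here is roughly the same behaviour sketched in JavaScript with the Johnny-Five library (the actual test used a native Arduino sketch; the pin numbers and thresholds are assumptions, and the HC-SR04 sensor needs a compatible Firmata build on the board):

```javascript
// Rough Johnny-Five sketch of the distance -> LEDs + buzzer behaviour (illustrative only).
var five = require("johnny-five");
var board = new five.Board();

board.on("ready", function () {
  // HC-SR04 ultrasonic range detector (requires a Firmata build with ping support).
  var proximity = new five.Proximity({ controller: "HCSR04", pin: 7 });
  var leds = [8, 9, 10, 11].map(function (pin) { return new five.Led(pin); });
  var piezo = new five.Piezo(3);

  proximity.on("data", function () {
    var cm = this.cm;

    // The closer the object, the more LEDs light up in the row.
    var lit = Math.max(0, Math.min(leds.length, Math.round((50 - cm) / 12)));
    leds.forEach(function (led, i) {
      if (i < lit) { led.on(); } else { led.off(); }
    });

    // The closer the object, the higher the buzzer's frequency.
    if (cm < 50) {
      piezo.frequency(Math.round(200 + (50 - cm) * 20), 100);
    } else {
      piezo.noTone();
    }
  });
});
```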

Arduino Setup


This basic test could be representative of something larger that we may plan to do in the future. Being able to take live physical data like that, we could take things such as the movement of people on a street, or the growth of a tree, and represent that in something else, like an art piece.

Video Demonstration


Materials Test: 

Non Newtonian Fluid

"A non-Newtonian fluid is a fluid with properties that differ in any way from those ofNewtonian fluids. Most commonly, the viscosity (the measure of a fluid's ability to resist gradual deformation by shear or tensile stresses) of non-Newtonian fluids is dependent on shear rate or shear rate history."

To start testing out some materials that could be used for a data representation project, I started to look into materials that can react and change depending on different things, the logic being that when the data changes, so does the material. With this in mind, I remembered seeing a video online about non-Newtonian fluid and how it reacts when it is moved and vibrated, so I decided to have a go at it and see what's what.

I thought this would be a good place to start as it is easy to make and very accessible to anyone with access to a grocery store. To make it, all that is needed is cornflour and water, and some food dye if you are feeling adventurous. Mixing a lot of cornflour, a small bit of water, and a few drops of food dye, I had it made.

Ingredients

Once it was made and I had the consistency right, so that it was liquid while it was stationary and solidified when it was moving, I was able to test what would happen when it was subjected to vibrations.



Placing some cling film around a speaker I had at home, I then found an online tone generator that would let me pick the frequency of the noise played from the speaker; the lower it was, the more the speaker would rattle and the more the fluid would be moved around. I poured the liquid onto the speaker and ran different frequencies through it to see what would happen.




Result:

I can't even describe how disappointing the result was. The fluid reacted somewhat to the vibration, but not enough to really create anything from. I was expecting some mad shapes to form up in front of me like you see in videos online, but nothing really happened. Whether I did something wrong when making it, I don't know, but I can at least say that we can move on from it, as I don't think it will work as a material for our project.