Sunday, 29 November 2015

Using Technology to Incite Social Interaction and Affect Mood

Through our technical exploration and our interest in public collaboration via music and visuals, we have become more interested in the social aspect of our project: using technology to instigate social interaction, influence mood, and allow us to study the effects under the conditions of different social spaces.

Full Body Gestures
The basic concept would be to place an interface in public spaces, such as pedestrian streets, bus stops, or even bathrooms, and use technology which recognises gestures to create music and visuals based on those gestures.
Hand Gesture Examples


Various gestures would launch different audio samples, such as a drum beat or bass line, along with visuals which would play for as long as a gesture is held. The more participants involved, the more music would play and the more visuals would be created. For example, one participant might launch drum samples with blue visuals, while another launches bass samples with yellow visuals.
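A minimal sketch of this mapping logic, with placeholder gesture names, sample files and colours (none of these are final):

```python
# Hypothetical mapping from recognised gestures to audio samples and
# visual colours; gesture names and sample files are placeholders.
GESTURE_MAP = {
    "arms_raised": {"sample": "drum_loop.wav", "colour": "blue"},
    "wave":        {"sample": "bass_line.wav", "colour": "yellow"},
}

def layers_for(active_gestures):
    """Return the sample/colour layers to play while gestures are held.

    More participants (more simultaneous gestures) means more layers."""
    return [GESTURE_MAP[g] for g in active_gestures if g in GESTURE_MAP]

# Two participants holding different gestures -> two layers of the piece.
layers = layers_for(["arms_raised", "wave"])
print([layer["sample"] for layer in layers])
```

Each held gesture simply adds a layer, so the piece naturally grows with the number of participants.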

Through participation, social interaction is instigated, with more participants adding to the whole piece.

To solidify this social interaction, the piece would aim to affect the participants' mood in a positive manner, using specific colours, and create an association between positive mood and interaction in public space. Colour's effect on human mood, productivity and social interactivity has been studied in depth, and has been shown to have a large effect on us, with colours such as red making us more likely to be aggressive, and certain shades of green increasing productivity.

Colour chart and emotions associated with them.


The installation itself would use motion tracking via readily available hardware such as a standard webcam (or a Microsoft Kinect, which would allow for a wider variety of gesture tracking), together with motion tracking software such as Motion Studio and Max for Live, which would allow us to convert gestures into MIDI controls that would in turn launch visuals and audio based on these gestures.
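As a rough illustration of the idea (not the actual Motion Studio/Max pipeline), webcam gesture detection can be as simple as frame differencing inside predefined zones of the image; the frames and thresholds below are toy placeholders:

```python
# Toy sketch of webcam-style motion detection by frame differencing:
# if enough pixels change inside a zone of the image, that zone's
# gesture is considered "held". Frames are plain grayscale grids here.
def zone_active(prev_frame, curr_frame, zone, threshold=30, min_changed=3):
    (x0, y0), (x1, y1) = zone
    changed = sum(
        1
        for y in range(y0, y1)
        for x in range(x0, x1)
        if abs(curr_frame[y][x] - prev_frame[y][x]) > threshold
    )
    return changed >= min_changed

prev = [[0] * 4 for _ in range(4)]
curr = [[0] * 4 for _ in range(4)]
for x in range(4):
    curr[0][x] = 255          # movement along the top row of the image

print(zone_active(prev, curr, ((0, 0), (4, 2))))  # True: top zone is active
```

A zone that stays active would keep its MIDI control "on", which is how a gesture could be held to sustain a sample.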

Visuals would be displayed via screens or projections, with most of the 'work' being done by the software. This means the installation would not require a large amount of space and could be placed in a multitude of locations.

Wednesday, 25 November 2015

Sense of Community / Location

It seems the bigger the city, the less likely people are to engage with their surroundings and the other people around them. This is something most people are aware of, but the question is why. This is what many cities and sociologists are trying to figure out, and subsequently reduce. Thankfully we don't experience this phenomenon to a huge extent growing up in Ireland, but in major cities across the world there can be a complete lack of community. As a result, people are reluctant to connect and cooperate with one another, and this can have a knock-on effect in many areas such as crime or racism. A lack of communication results in a lack of understanding and empathy, as people become indifferent to others around them. This is why it is massively important to try to create a greater group mentality, rather than thinking indifferently as individuals.
Projects like the one we are proposing could help promote engagement with others, and in turn a sense of community and association. It could not only have an impact on the mood and attitude of an individual, but have a domino effect on many in an area. This makes the project very interesting from the point of view of mass psychology and sociological behavior.





While developing the prototype and finalising the idea is the ultimate priority at the moment, it is also crucial that we select the correct place for the artifact should we decide to display it in a public place.

It would be pointless to develop a great idea designed for people to interact with, and then have no one interact with it. Recently I've been reading about how people interact in public spaces and cities, how people's surroundings can influence their behaviour, and as a result how they affect their surroundings. I've also been observing different public places in Cork City, seeing where would be an appropriate location to set up.
The area needs to be away from busy streets and the hustle and bustle of large groups, as people there will take no notice; however, it also can't be in an area so quiet that no one will notice it.
Ideally it would be in an area where people are relaxed enough to engage with something like this, while also having the time to spontaneously interact.
That's why an ideal location would be a park or park entrance. This is where people go in a time of leisure to relax, and usually see a lot of footfall, especially in the city centre.

From what I've observed around the city, areas like Bishop Lucey Park, Fitzgerald's Park, and the benches near the memorial on the South Mall are ideal prospects. This is not only due to the number of people who pass through them on a daily basis, but because of the relaxed and friendly atmosphere they naturally possess. They are full of people socialising and engaging with one another, and would therefore be a perfect setting in which to carry out our project.


People relaxing in Bishop Lucey Park



Linked below are a number of interesting articles I've read on the effect of location on interaction, fostering interaction in cities, and how to create public spaces that encourage people to interact.



Creating public spaces which encourage people to interact

Below is a basic representation of the goal of creating a project like this: the effect it may have on the community, the data that could be collected from it, and the artistic benefits in terms of people coming together and creating music and imagery.







Monday, 23 November 2015

Visual Display

I began to look into different types of visual displays that we could use in the project. Because the music will be one of the main features, I began looking into ways of triggering visuals with music so that the two interact.



MAX 7

In the process of looking for ways to make live visuals that run to music, I found Max 7.
Max is a visual programming language for music and multimedia developed and maintained by San Francisco-based software company Cycling '74. During its 20-year history, it has been used by composers, performers, software designers, researchers, and artists to create recordings, performances, and installations.
The Max program is modular. Most routines exist as shared libraries. An application programming interface (API) allows third-party development of new routines. Thus, Max has a large user base of programmers unaffiliated with Cycling '74 who enhance the software with commercial and non-commercial extensions to the program. Because of its extensible design and graphical user interface (GUI), which represents the program structure and the user interface as presented to the user simultaneously, Max has been described as the lingua franca for developing interactive music performance software.
Max 7 lets you create your own visuals and plug-ins for any compatible DAW. These could be used in tandem with clips launched in Ableton Live, and also with the gesture-capturing technology.

                                  Audio-Visual Feedback System by Max/MSP


Features:
  • Full support for MIDI devices and modern audio hardware.
  • Limitless audio options including basic DSP building blocks, VST, Audio Units, and Max for Live devices.
  • Flexible support for multi-channel audio.
  • Realtime input from webcams, digitizers, and built-in hardware.
  • Serial and HID support for a wide variety of electronic prototyping boards and controllers.
  • Interactive OpenGL graphics and GLSL shaders, including realtime shadows.
  • Support for multiple displays and tools for live projection.
  • Efficient realtime HD playback and hardware-accelerated image processing.
  • Transcoding and interaction between audio, video, graphics, and control data.

Max for Live
After researching what Max 7 was capable of and how we could use it, I came across Max for Live. Max for Live is a version of Max 7 made specifically for programming within Ableton. The advantage of this is that there are plenty of ready-made Max for Live programs that can be downloaded.







Max for Live comes with a great collection of instruments, effects, and tools. And there’s even more available from the dedicated community of artists and builders who share their Max for Live creations.
Every Max for Live device is ready to use in your own music, but can also be edited and customized to suit your specific needs. And because Max for Live is part of Ableton Suite, it’s perfectly integrated into the familiar Live workflow.
Max for Live lets you build your own devices for use in Live. Create custom synthesizers, samplers, sequencers, audio effects, and much more. Max for Live also allows you to build devices that modify Live itself, including the properties of tracks, clips, and native Live devices.
Every Max for Live device includes an Edit button, allowing you to look at (or modify) how the device was made. And Max for Live comes with a collection of Live’s native interface elements, so you can build devices that look and feel just like Live.
Max 7 introduces a number of new features that make patching easier and more powerful. And all of them are available in Max for Live.
In Max 7, the interface has been redesigned, allowing for easier patching. Audio quality has been improved with a 64-bit audio engine and improved filter design tools. Additional features include enhanced OpenGL support, including a new physics engine and support for Gen, an add-on that compiles patches into code for improved performance.
On maxforlive.com, users contribute to an online library of free Max for Live devices that you can use at no charge, by sharing .amxd files or links (known as "references" on maxforlive.com) to download or purchase them elsewhere.







Monday, 16 November 2015

Using Text Data to Manipulate Images & Sound (pt. 2)

Following on from my previous experiment, my next idea was to use similar text-data to manipulate sound, as this would tie into our data-controlled MIDI concept.

What I created was a page which used the same functionality as the last one (creating coloured images which change depending on data), but accompanied by sound. Three 'notes' will appear onscreen at any one time. These notes are randomly selected from a range of numbers in a separate text document. Each of these numbers correlates to a sound file in another folder; the higher the number, the higher the sound. The sounds I used are royalty-free recordings of piano keys. These three notes will play at once when the page is loaded, forming a sort of chord, though due to the random nature of the note selection process this chord is usually rather discordant.
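The original page is written in PHP and HTML5; the sketch below reproduces the same selection logic in Python, with hypothetical piano sample filenames and an inlined note pool standing in for the text document:

```python
import random

# The note pool would live in a separate text file on the server;
# here it is inlined for the sketch.
note_pool = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]

def pick_chord(pool, n=3, rng=random):
    """Pick n note numbers at random; each number maps to a piano
    sample with the same number in its filename (higher = higher pitch)."""
    notes = [rng.choice(pool) for _ in range(n)]
    return [f"piano_{note}.mp3" for note in notes]  # hypothetical filenames

rng = random.Random(42)          # fixed seed so the example is repeatable
print(pick_chord(note_pool, rng=rng))
```

Because the three notes are drawn independently, the resulting "chord" is usually discordant, exactly as described above.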

This project can be found here: http://dindins.web44.net/data%20stuff/test2.php

Initially the site would automatically refresh every few seconds so that the sounds would randomise continuously, but I removed this feature because I felt it was unnecessary and annoying. The page can be refreshed manually for the same result and a new 'chord' each time.

Below is a screenshot of the site in action:



As you can see, as with the previous test, the values extracted from the text file correlate to colour, as well as sound. The higher the number in the note's div tag, the more saturated/brighter the green. 

The next image briefly explains how this was all achieved. The first section of code uses the same concatenation approach as last time to style the colours of the div elements according to their note value. The second section shows how the audio files were launched. I used HTML5's audio tag functionality to add the sounds to the page. A similar method to the styling is used, where a random number is taken from the previously declared array and related to a sound file with the same number in its filename.




This test suffers from the same 'undefined' issue that I experienced in the last one, but was otherwise quite successful. It bridges the gap somewhat between our concepts of data gathering and using music as an output or feature, as it utilises both of these ideas.




Sunday, 15 November 2015

Positive body language / Gauging mood


There are many ways of assessing the mood of people without the use of language. Often the posture and movement of a person is all you need to tell whether they're feeling on top of the world or down in the dumps.
With this project we intend to use software to essentially measure the type of mood a person is in, by taking factors like stance and arm position into consideration. We then want to influence this for the better, and see what kind of impact we can have on the individual and a city as a whole. 

We can easily judge how someone's feeling just by body language; it's human nature. However, it is a little more complex when it comes to computers, as they have no intuition when it comes to human interaction. Therefore we have to program them with certain patterns and shapes that someone in a particular mood would make.
For example, a person with positive body language, in a good mood, might make a power stance with their arms out wide in the air, almost in celebration.
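As a toy illustration of how such a pattern might be encoded (keypoint names and coordinates are made up, and y grows downward as in image coordinates):

```python
# Hypothetical posture check on 2D keypoints: arms raised above the head
# and spread wider than the shoulders reads as a "power stance".
def is_power_stance(keypoints):
    head = keypoints["head"]
    lw, rw = keypoints["left_wrist"], keypoints["right_wrist"]
    ls, rs = keypoints["left_shoulder"], keypoints["right_shoulder"]
    arms_up = lw[1] < head[1] and rw[1] < head[1]      # above the head
    arms_wide = abs(lw[0] - rw[0]) > abs(ls[0] - rs[0])  # wider than shoulders
    return arms_up and arms_wide

pose = {
    "head": (100, 50),
    "left_wrist": (40, 20), "right_wrist": (160, 20),
    "left_shoulder": (80, 80), "right_shoulder": (120, 80),
}
print(is_power_stance(pose))  # True
```

A real system would of course need many such rules, plus tolerance for noisy tracking data.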



Mick Jagger did this… a lot.


This kind of body language could be rewarded with positive imagery or audio; alternatively, the pleasant visual and acoustic art could simply be a result of the person's participation. The effect could be huge if it somehow changed the person's body language after they had interacted with the project, as they may carry that into the rest of their day. If everyone walked away with a more friendly and open energy, it could influence others around them.
A kind of domino effect like this could have a massively positive impact on the city as a whole, and not just the individual that partakes in our project.

At the moment there are multiple types of software and methods that try to analyse people's mood and assess how they're feeling based on body language alone, using things like webcams to track movement and match it against pre-programmed actions.




A chart of positive / negative body language



An example is the work of Antonio Camurri of the University of Genoa in Italy. He and his colleagues have built a system which uses the depth-sensing, motion-capture camera in Microsoft's Kinect to determine the emotion conveyed by a person's body movements. Using computers to capture emotions has been done before, but it typically focuses on facial analysis or voice recording; this system instead reads someone's emotional state from the way they walk across a room, or their posture at that time.
The system uses the Kinect camera to build a stick figure representation of a person that includes information on how their head, torso, hands and shoulders are moving. Software looks for body positions and movements widely recognised in psychology as indicative of certain emotional states.
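One feature often extracted from such stick figures is a "contraction index": how tightly the joints cluster around the body's centroid. Closed, hunched postures score low; open, expansive postures score high. A simplified sketch, with made-up joint coordinates:

```python
import math

# Mean distance of the joints from their centroid, as a rough
# measure of how open or contracted a posture is.
def contraction_index(joints):
    cx = sum(x for x, _ in joints) / len(joints)
    cy = sum(y for _, y in joints) / len(joints)
    return sum(math.hypot(x - cx, y - cy) for x, y in joints) / len(joints)

open_pose   = [(0, 0), (100, 0), (50, 120), (0, 200), (100, 200)]
closed_pose = [(40, 60), (60, 60), (50, 120), (40, 180), (60, 180)]
print(contraction_index(open_pose) > contraction_index(closed_pose))  # True
```

Tracking how this value changes over time, rather than a single snapshot, is what lets movement (not just posture) carry emotional information.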
Below is a link to an article on Antonio and the University's achievement. 

Saturday, 14 November 2015

Varying Levels of User Interaction & Ambient Displays

Many researchers have been investigating ways to immerse users in an environment in which digital information is provided with varying levels of subtlety. An example of this would be the concept of ambient displays, which may provide digital information in the traditional format, but also utilise the user's surroundings and other senses to provide subtle information.

The following research paper, Ambient Displays: Turning Architectural Space into an Interface between People and Digital Information, compared the use of a computer for gaining information to 'looking through a small window'. The authors noted that this method is limited in that it concentrates all of its information on the user's main area of focus. They propose that this experience can be enhanced by creating specialised environments where information exists all around the user.

One example the authors provided of such an environment is the ambientROOM, a project developed by the MIT Media Lab in 1997. It consisted of a fully enclosed room in which the user sat, surrounded by 'ambient media'. For example, a dot pattern on the wall would become busier depending on how many humans were detected in the area. The idea is that the user would not fully focus their attention on this wallpaper; instead, the information would be subtly transferred to them through their peripheral vision.

This next report, Heuristic Evaluation of Ambient Displays, discusses how the effectiveness of these environments can be evaluated. This gives us a good idea of what features are important and how they should be implemented. The following bullet points outline some of these features, based on the heuristics used in the report:

  • All information provided should be relevant
  • The display should be unobtrusive unless it requires full attention
  • The user should notice a change in data, and not that the display clashes with its environment
  • The display should be intuitive to minimise cognitive load
  • Changes in the display's state should be easily noticeable
  • It should be aesthetically pleasing
This final paper is in my opinion most relevant to our project, as it details the use of ambient displays in a public setting. It is quite possible that our project will be situated in such an environment, considering the directions we have been exploring (public data, public mood, collaboration, etc.).
It is called Interactive Public Ambient Displays: Transitioning from Implicit to Explicit, Public to Personal, Interaction with Multiple Users. The authors developed a system for displaying information in public which provides more specific information depending on the user's engagement. For example, the system tracks how the user's body is oriented, so that if they are facing the screen they are assumed to be more engaged and are provided with more detailed information.
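The escalating levels of engagement could be sketched as a simple lookup on distance and orientation (the level names echo the paper's implicit-to-explicit idea, but the thresholds here are arbitrary placeholders, not the paper's values):

```python
# Sketch of engagement-dependent detail: the display reveals more
# as a passer-by turns toward it and approaches. Thresholds are made up.
def detail_level(distance_m, facing_screen):
    if distance_m > 4.0:
        return "ambient"    # passer-by: only ambient visuals
    if not facing_screen:
        return "implicit"   # nearby but not engaged
    if distance_m > 1.0:
        return "subtle"     # facing the screen from a distance
    return "personal"       # up close and engaged: full detail

print(detail_level(5.0, False))   # 'ambient'
print(detail_level(0.5, True))    # 'personal'
```

For our project, logging which level each user reached would itself be useful data on how people engage with the installation.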

Below is a video of this system in use:


These concepts could be beneficial to our project. If we are to base our project in a public setting, we might need a way to distinguish who is engaging with the project and who is simply passing by. In the area of data collection/representation, this provides us with a way of refining the data collected from that public environment, so that we are aware of the level of interaction the users were demonstrating.


External Links/References:


http://tmg-trackr.media.mit.edu/publishedmedia/Papers/314-Ambient%20Displays%20Turning%20Architectural/Published/PDF

http://tangible.media.mit.edu/project/ambientroom/

http://dl.acm.org.cit.idm.oclc.org/citation.cfm?id=1029656&CFID=565650252&CFTOKEN=48295749

http://dl.acm.org.cit.idm.oclc.org/citation.cfm?id=642642&CFID=565650252&CFTOKEN=48295749

Friday, 6 November 2015

Using MIDI control

Over the past two weeks our aim has been to control a virtual keyboard using physical input, and also via data taken from the web.

For the physical input, I have been studying the use of MIDI, and using non-standard physical inputs, such as a computer mouse and Xbox 360 controller.

The physical hardware I am using is an Xbox 360 controller, which has standard button inputs, analogue inputs which detect range of motion, and pressure-sensitive triggers. This allows a wide range of inputs, which I hope we can build on later in the project, with the eventual aim of using multiple Arduino sensors (heat, light, etc.).

The software I have been using is Ableton Live 9, a Digital Audio Workstation which contains a huge range of virtual instruments, recognises MIDI input, and is also capable of MIDI mapping, meaning I can map controls to any part of the software (from playing a note on the virtual keyboard to changing volume, filters, EQs, etc.).

Ableton has built-in support for actual MIDI instruments, such as the APC40, electronic pianos, etc., but the problem I am trying to solve is how to use devices that are not designed for MIDI control.

To bridge this gap, I found GlovePIE, software normally used to map keyboard inputs to controllers for video games that lack controller support as standard. Instead, I will be using it to map any form of input into MIDI, which is then routed through an internal MIDI port using LoopBe.

This means that the basic input chain is:

Xbox controller → USB → GlovePIE → LoopBe → Ableton Live.
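Under the hood, GlovePIE's job in this chain amounts to turning button state changes into raw MIDI messages. A rough Python sketch of that translation (button-to-note numbers are placeholders):

```python
# Hypothetical button-to-note table; note 60 is middle C (C4).
BUTTON_TO_NOTE = {"A": 60, "B": 62}

def midi_events(prev, curr, channel=0, velocity=100):
    """Compare controller button state between polls and emit the raw
    MIDI bytes: note-on (status 0x90 | channel) on press, note-off
    (status 0x80 | channel) on release."""
    events = []
    for button, note in BUTTON_TO_NOTE.items():
        if curr.get(button) and not prev.get(button):
            events.append(bytes([0x90 | channel, note, velocity]))  # note on
        elif prev.get(button) and not curr.get(button):
            events.append(bytes([0x80 | channel, note, 0]))         # note off
    return events

# Pressing A emits a single note-on message for C4.
print(midi_events({}, {"A": True})[0].hex())  # '903c64'
```

LoopBe then simply carries these bytes across its virtual port, and Ableton sees them as if they came from a real MIDI keyboard.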

Below are images of the software interaction.




midi.DeviceOut = 2 sets where GlovePIE sends the MIDI, in this case output device 2. LoopBe acts as the bridge, taking this output and sending it on to Ableton Live.

midi.channel1.c4 = XInput.A means that when I press the 'A' button on my Xbox controller, it triggers the MIDI note C4 on channel 1.


These notes are where individual sounds, such as a snare, kick or other samples, can be mapped.

Below is a screencap of these two interacting.

Below is a video of the controller in action