robert | September 18, 2014
When documentary makers and others talk about “data telling a story” they’re usually referring to data visualizations or data exploration tools like this from Watch Dogs. But with the rise of ubiquitous computing creating a range of initiatives – smart cities (also connected cities), connected homes, wearables, the Internet of Things (IoT) – the world is turning into a mesh of inputs and outputs that creates a programmable transmedia storytelling layer.
One way to imagine this is to think of a single computer image that we know is made up of millions of pixels. Now imagine that the world is the image and each pixel is represented by a computing device. Just as each pixel is individually addressable and must be changed in coordination with the other pixels to present a new image, so each connected computing device can be changed to create a new reality.
Pixel stands for “picture element”. I’m going to use “stel” to stand for “storytelling element”, which is physically represented by any addressable digital technology that might send or receive data for the purpose of communicating a coherent story.
Usually in transmedia storytelling we talk about channels (video, audio, image), platforms (YouTube, Spotify, Flickr) and formats (pervasive game, treasure hunt, immersive theatre). To try to be specific about the “stel”, we could say that it’s a single multi-channel device but on its own not a platform – stels must be meshed together to create an addressable platform which, for the sake of this blog post we could unimaginatively call StelNet.
In this example, StelNet represents 1000 fixed location multi-channel devices across London and 500,000 mobile stels worn by participating, opt-in audience members. The mobile stels range from Fitbits and Android watches to purpose-made StelWear for those who’ve signed up for maximum immersion.
StelNet is capable of outwardly communicating mood to citizens through color, sound and image and inwardly communicating the mood of citizens through noise level & frequency, traffic flow, air pollution, weather conditions, size of congregations outside pubs and public spaces, personal biometrics and of course location-filtered sentiment on social media. Using an invisible, coordinating, storytelling intelligence (such as Conducttr) the experience designer can tell broadcast and personalized stories across the mesh of devices.
John, for example, plays Gratitude World 5 (GWV), a long-running sci-fi strategy game in which he must build a self-sustaining community on a space station orbiting the earth while repelling aliens who try to build bases across London. The aliens feed on negative energy and are hence hyperactive during bad weather, traffic snarl-ups and reports of local council corruption. The GWV dashboard presents easy-to-digest game-related information gathered from StelNet, allowing John to make intelligent decisions about when to plan cargo shipments, personnel transfers etc.
On his way to work, StelNet signals to John the status of his earlier decisions and the progress of the game. Wrist vibrations indicate the successful arrival of new supplies, and the blue StelNet lights at bus stops show that, for now, the combat situation is under control. A nearby digital display, paid for by advertising, can be swiped to gain 90-second access to the community channel, which shows John how his mood compares to the city as a whole. He’s found that smiling more and nodding to strangers lifts his mood but also has a knock-on effect in raising the mood of the city. He needs more people to feel good about themselves today to prevent the aliens re-establishing a command post near his fictional recruitment site.
Utilizing imagination and well-timed cues across a city of connected devices, many people will soon be living in multiple alternative realities.
Come discuss these ideas and more at the Conducttr Conference, Oct 17th in London, UK
robert | September 17, 2014
At the time of recording, the project is funded to $34,000 of its $40,000 goal on Kickstarter: https://www.kickstarter.com/projects/division66/the-black-watchmen-game-and-comic-book
robert | September 12, 2014
Congestion causes problems ranging from environmental damage to deaths caused by accidents and by ambulances unable to get through. For Learn Do Share in London we created a simple demo to show how someone might generate conversation around the topic.
Imagine a Scalextric track in high-street windows across the city where the drivers of the toy cars come alive on Twitter. The drivers tweet their fictional, dramatized journeys from home to work and reveal the perils of congestion. Using Conducttr, the toy drivers can respond to audience tweets and can vary their replies and daily broadcasts based on real traffic flow, weather, air quality and any other real-world inputs.
In the demo we created, every tweet received by the driver caused the car to loop once around the track.
Prep the controller
We used an Arduino controller with a GSM Shield to get Internet access. We also added an optional LED that stays red while the board is trying to establish a data connection and turns off once connected – we found it could sometimes take ages to connect. We used the bluevia SIM that came with the shield and also tested with a Tesco mobile SIM, which worked fine too.
The Arduino sketch rotates the servo to a “go” position to move the car and then back to a “stop” position to stop it. The length of time in the “go” position determines how far the car travels around the track. With a little trial and error we got the delay right for a full lap, but found it had to be tweaked for each assembly because friction between the car and the track varied the distance covered.
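The tweet-to-lap logic can be sketched in plain C++. This is only an illustration: the hardware servo calls are replaced by stand-ins so it compiles on its own, and the angle values, base lap time and friction factor below are illustrative assumptions, not measurements from the demo.

```cpp
#include <cassert>
#include <queue>

// Stand-ins for the servo positions described above; on the real
// assembly these angles were found by trial and error.
const int GO_ANGLE   = 90;  // hypothetical "go" position
const int STOP_ANGLE = 0;   // hypothetical "stop" position

// How long (ms) to hold the "go" position for one full lap: a base
// time scaled by a per-assembly friction factor, since friction
// between car and track varied the distance covered.
long lapHoldMillis(long baseMillis, double frictionFactor) {
    return static_cast<long>(baseMillis * frictionFactor);
}

// Each tweet received by the driver queues exactly one lap,
// mirroring the behaviour of the demo.
struct LapQueue {
    std::queue<int> pending;

    void onTweet() { pending.push(1); }

    // Returns how long to hold GO_ANGLE for the next queued lap,
    // or 0 if no laps are pending.
    long nextHold(long baseMillis, double frictionFactor) {
        if (pending.empty()) return 0;
        pending.pop();
        return lapHoldMillis(baseMillis, frictionFactor);
    }
};
```

On the real board, the returned hold duration would be the `delay()` between `servo.write(GO_ANGLE)` and `servo.write(STOP_ANGLE)` in the sketch's loop.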
Please find below the circuit diagrams for the Arduino and you can find the code and further discussion at the Conducttr SkunkWrx.
robert | September 9, 2014
- show how stories can be told with audience participation
- develop creative muscles so that it becomes easier to include participation from the beginning.
We tested the cards in the office and then playtested twice – once at the Interactive Narratives meetup group and again at Learn Do Share.
In an interesting development, we added sensor cards to the game to represent real-world inputs that could be used as game mechanics for the participatory experience.
1. Divide the cards into two or three decks: characteristics, tropes and, optionally, sensors
2. Draw two cards from each deck and leave them face down
3. Grab a story prompt from http://ineedaprompt.com
4. Start the clock and turn over the cards! Now create a participatory experience that tells the story from (3) using the cards from (2)
5. After time has expired, player(s) reveal the experience
i. Use the card mat on slide 12 as a prompt for structured answers
ii. Suggested time limits
- Solo – 3 mins
- Team that knows each other and knows transmedia storytelling – 8 mins
- All other teams – 25 mins
iii. Check out the Transmedia Playbook if you’re unsure what any of this is about! http://www.slideshare.net/tstoryteller/playbook-online-v10
robert | September 3, 2014
Presentation given by Robert Pratten (@robpratten) to the Alternate Reality Games conference (ARGFest) in Portland, OR.