Over the past couple of weeks, the first drafts of the PleaseReadMe story have been written, re-written and recorded. It’s a linear narrative, although some choices (that a reader takes or discovers by moving the physical book) take the reader away from the main path temporarily, before bringing them back to the main story line.
The story was initially written down in Pages from the structure and voice that have been bobbling in my head for months. This is the part I find very exposing: pulling words from the brain into the real world is fraught with danger. I’m reminded of this tweet by Annie Correal whenever I need a kick to just get on with it:
[blackbirdpie url="https://twitter.com/anniecorreal/status/255059115531960320"]
And a similar tweet of good advice from Naomi Alderman:
‘… “just fucking write something” instead of waiting around for the perfect idea. seriously. just fucking write something.’
The words are then spoken and recorded – my voice being a stand-in for the real thing while we build – and then the audio is cut into small chapters, so it can be triggered through time and gestures. I’ve discovered this is very much a paper-scissors-stone process, where the writing influences the audio, which influences the experience, which influences the writing, and so on.
At each of these three stages – writing, recording, listening – I’ve been rewriting lines and sentences, adding pauses, adding more detail, taking out lines. A story can work when written down, but changes through the act of speaking the words out loud, and again when listening to the words.
At the same time, Mo Ramezanpoor has been coding on the iPhone side, creating a system that detects 3D gestures to trigger and unlock new chapters of the story.
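To give a flavour of what that involves – this isn’t Mo’s actual code, just a minimal sketch of my own, with made-up class names and thresholds, using Core Motion to read the accelerometer and fire a callback when the book is tipped forward:

```swift
import CoreMotion

/// Illustrative sketch only: detect a simple "tilt forward" gesture from raw
/// accelerometer data. The real project's detector is Mo's own code; the
/// names and thresholds here are assumptions.
final class TiltGestureDetector {
    private let motion = CMMotionManager()
    private var armed = true

    /// Calls `onTilt` once each time the book is tipped forward.
    func start(onTilt: @escaping () -> Void) {
        guard motion.isAccelerometerAvailable else { return }
        motion.accelerometerUpdateInterval = 1.0 / 30.0   // 30 Hz is plenty for coarse gestures
        motion.startAccelerometerUpdates(to: .main) { [weak self] data, _ in
            guard let self = self, let a = data?.acceleration else { return }
            // With the book lying flat, gravity sits mostly on the z axis;
            // tipping it forward shifts the pull onto y. Re-arm once the
            // book is roughly flat again, so one tip fires one trigger.
            if self.armed, a.y < -0.7 {
                self.armed = false
                onTilt()
            } else if !self.armed, abs(a.y) < 0.2 {
                self.armed = true
            }
        }
    }

    func stop() {
        motion.stopAccelerometerUpdates()
    }
}
```

A real detector has to be smarter than a single-axis threshold, but even something this crude is enough to trigger a chapter from the way a reader holds the book.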
Mo has also created a story engine for developing and building the story, using a plist in Xcode. We’ve recently been thinking about timing and duration, so the audio pieces aren’t triggered simultaneously, or played over the top of each other if different gestures are used in quick succession by the reader.
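Again, as a sketch rather than Mo’s engine – the “Chapters.plist” name, keys and classes below are illustrative – the idea is to load the chapter list from a plist, queue each chapter as its gesture unlocks it, and only start the next piece of audio once the current one has finished:

```swift
import AVFoundation

/// Illustrative sketch of a plist-driven chapter queue; names and keys are assumptions.
struct Chapter: Decodable {
    let audioFile: String   // e.g. "chapter-03" (an .m4a in the app bundle)
    let gesture: String     // name of the gesture that unlocks this chapter
}

final class ChapterPlayer: NSObject, AVAudioPlayerDelegate {
    private var queue: [Chapter] = []
    private var player: AVAudioPlayer?

    /// Queue a chapter: it plays immediately if nothing else is speaking,
    /// otherwise it waits its turn so chapters never talk over each other.
    func enqueue(_ chapter: Chapter) {
        queue.append(chapter)
        if player == nil { playNext() }
    }

    private func playNext() {
        guard !queue.isEmpty else { player = nil; return }
        let chapter = queue.removeFirst()
        guard let url = Bundle.main.url(forResource: chapter.audioFile, withExtension: "m4a"),
              let next = try? AVAudioPlayer(contentsOf: url) else { return playNext() }
        next.delegate = self
        player = next
        next.play()
    }

    func audioPlayerDidFinishPlaying(_ player: AVAudioPlayer, successfully flag: Bool) {
        playNext()
    }
}

/// Load the chapter list from a plist in the app bundle.
func loadChapters() -> [Chapter] {
    guard let url = Bundle.main.url(forResource: "Chapters", withExtension: "plist"),
          let data = try? Data(contentsOf: url),
          let chapters = try? PropertyListDecoder().decode([Chapter].self, from: data)
    else { return [] }
    return chapters
}
```

Queuing rather than mixing means a reader who makes two gestures in quick succession hears both chapters in order, instead of a jumble.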
We’ve also moved to using an iPhone to make quick prototypes for testing. Each version is built and run in the iPhone simulator via Xcode, which is connected to an iPhone in the real world. This means we can quickly check that gestures and story work together after any change, rather than hooking up the Arduino each time. It has only been possible through Dave Addey’s work on getting all the components talking to each other: the LilyPad Arduino, the iPhone simulator, and an iPhone running the AccSim app (a simulated accelerometer).
Dave has written a great article about the process of getting the iPhone connected to the LilyPad Arduino over a wireless connection.
Next up, we’re looking at more complicated 3D gesture detection, to see if we can detect all the gestures written into the story. It’s going to need extra kit: hardware compact enough to be hidden in a book, yet able to capture all the data we need to make the story feel like magic for anyone picking up the book.
Once we’ve got that hardware cracked, we won’t be too far off testing the first early version.
Would anyone be interested in helping us test a talking book?