Those of you following the rumors around tomorrow’s highly anticipated Mac Tablet unveiling are likely expecting an oversized iPod Touch, ideal for e-reading, videos and gaming. Consensus points to a device running a slightly modified iPhone OS, so current App Store applications will find additional real estate.
There’s also been a rumor that iPhone SDK 4.0 will be released, finally introducing true background application functionality. This would bring the iPhone OS on par with Android, overcoming the iPhone’s current limitation of “a single app at a time”. This is important for a slew of powerful “agent” apps that would alert you to nearby friends and retail offers (a buck off your Frappuccino if you walk across the street now!).
Getting back to the title of this post, I’d like to suggest a more important reason for a background-application OS feature to launch alongside the tablet. If background applications are indeed supported, it’s not a big stretch to assume that multiple applications would be supported in the foreground. And that’s what I mean by the iPad having 2 screens, similar to the Microsoft Courier concept video you may have seen.
To visualize this, imagine turning your iPad 90 degrees into landscape mode and tapping a button that causes a spiral binding to appear down the center of the screen. Open the Contacts app on the left side. Open the Maps app on the right side. Drag a contact from the Contacts app into the Maps app and voilà, you see where they live.
Another example: open Safari on the left side and bring up your favorite restaurant blog. Launch the OpenTable app on the right side. Drag a restaurant name from Safari into OpenTable, which then lists reservation availability.
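To make the hand-off concrete, here’s a minimal sketch of the drag-and-drop idea in Python. Every class and method name is hypothetical (Apple has announced no such split-screen API); the point is just that a drag carries a typed payload the receiving app can interpret:

```python
# Hypothetical split-screen drag-and-drop. None of these names come
# from a real Apple API; this is purely an illustration of the idea.

class ContactsPane:
    """Left pane: exposes a draggable payload for a selected contact."""
    def __init__(self, contacts):
        self.contacts = contacts

    def begin_drag(self, name):
        # The payload is typed so the receiving app knows what it got.
        return {"type": "contact", "name": name,
                "address": self.contacts[name]["address"]}

class MapsPane:
    """Right pane: accepts payloads dropped from the other pane."""
    def accept_drop(self, payload):
        if payload["type"] == "contact":
            print(f"Centering map on {payload['name']} at {payload['address']}")
        else:
            print("Drop ignored: unsupported payload type")

contacts = ContactsPane({"Alice": {"address": "123 Judah St, San Francisco"}})
maps = MapsPane()

# The user drags Alice from the left pane and drops her on the right.
maps.accept_drop(contacts.begin_drag("Alice"))
```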
And there you have it, 2 screens delivered through software! Of course, I’d still like to shut my tablet like I do my Moleskine notebooks. I guess we’ll have to wait until iPad 2.0 in 2011 for that!
In grade school history class, we learned about the profound change experienced by civilization as we progressed from tribes of hunter-gatherers to more sedentary communities.
It seems ironic that today we observe the reverse shift, as we evolve from a (literally) plugged-in audience to mobile, interacting explorers.
I watched the N-Judah MUNI bus whiz by last night in San Francisco’s Sunset district. I could easily have caught it if I’d broken into a quick jog, but with two buddies behind me, I figured it wasn’t worth the collective effort.
Still waiting 25 minutes later in a chilly, damp fog, I regretted my decision.
If I could have frozen time at that “make or break” moment, I would have:
- Pulled out my iPhone
- Clicked my NextBus bookmark and navigated through no fewer than 4 links to find the right stop and check the arrival time of the next bus (the Routesy iPhone app might be faster)
- Convinced my 2 friends to run!
In reality, that would have taken at least 2 minutes and the bus would have been long gone. So… we needed an ambient computing technology that understood my intention to catch that bus and could deliver a “run or don’t run” answer in a split second. Complex stuff, but a reasonable guess could be made from these tidbits of information:
- I took public transport earlier in the day to the Bluegrass festival in Golden Gate Park
- I had just finished dinner and, at 10:30 PM, was likely heading home (in fact, I had told my wife exactly that on the phone just minutes before)
- Neither of my 2 friends had a car
- We were walking towards a popular bus route
I’m not sure what this ambient technology might look like, but it probably involves my phone, some communication protocol with the incoming bus, and maybe some supplied context on my part. That context probably wouldn’t come through high-effort keyboard input, but through a quick voice command like “taking N-Judah home”.
I’d hope for a response like “Run now! The next bus won’t arrive for 30 minutes,” or maybe an orb-like display, color-coded green for “take your time” and red for “run now!”.
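For fun, here’s a rough Python sketch of that split-second decision. Every input is assumed: the bus ETA feed, the walk-time estimate, and the context signals would all come from systems that don’t exist yet:

```python
# A hedged sketch of the "run or don't run" decision. The intent
# heuristic and the ETA numbers are invented for illustration.

def infer_heading_home(context):
    """Crude intent guess built from the four tidbits listed above."""
    return (
        context["used_transit_today"]
        and context["hour"] >= 22              # late evening
        and not context["friends_have_car"]
        and context["walking_toward_route"]
    )

def orb_color(bus_eta_s, walk_time_s, margin_s=60):
    """Two-state orb, per the post; a real system would also need to
    handle the case where the bus can't be caught at all."""
    if bus_eta_s - walk_time_s < margin_s:
        return "red: run now!"
    return "green: take your time"

context = {
    "used_transit_today": True,   # rode transit to the bluegrass festival
    "hour": 22,                   # 10:30 PM, per the phone call home
    "friends_have_car": False,
    "walking_toward_route": True,
}

if infer_heading_home(context):
    # Assumed feed: the N-Judah arrives in 90 s; the stop is a 45 s jog away.
    print(orb_color(bus_eta_s=90, walk_time_s=45))   # -> red: run now!
```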
Be sure to read Harvey Feldspar’s recent geoblog about the impact of location-awareness:
“Hyperlocality is transforming our lives at every scale: bodyware, roomware, streetware, cityware, nationware, and globalware. From nano to astro!”
A mobile social network can recommend events of interest by analyzing information from users with similar profiles. For example:
Miguel, a gay 30-year-old New Yorker vacationing in San Francisco, wakes up on Sunday morning wondering what to do. His cell phone beeps with a text message suggesting that he visit Dolores Park later that afternoon.
This suggestion was made because his social network has data showing that, on hot and sunny Sunday afternoons in San Francisco, hundreds of its gay male members spend time in Dolores Park.
Let’s deconstruct how the social network arrived at this conclusion:
- In the past 6 months, hundreds of people used their mobile devices to access the network from Dolores Park. The network knows this because each phone communicates its user’s GPS coordinates. Many phones already have this capability, either built in or through a Bluetooth connection to a GPS receiver.
- Whenever a member accesses the social network, it logs the time. Many of these accesses from Dolores Park occur on Sunday afternoons, between 2pm and 5pm.
- On each access, the network contacts an online weather service and logs the weather conditions and temperature. Many of these accesses from Dolores Park on Sunday afternoons occur on sunny days above 65°F.
- Many of these network accesses correspond to gay men between 25 and 40 years old, as specified in their user profile under “orientation” and “age”.
By collecting data about how and when members use a service, social networks can creatively analyze and find patterns useful to the community. Furthermore, when Miguel arrives at Dolores Park, I would expect the network to facilitate a meeting with other like-minded members nearby.
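Here’s a hedged sketch of that pattern-finding step in Python. The access-log format and the 100-member threshold are invented for illustration:

```python
# Filter historical accesses by day, hour, weather, and profile, then
# suggest the most popular matching spot. All fields are invented.

from collections import Counter, namedtuple

Access = namedtuple("Access", "place weekday hour temp_f sunny orientation age")

def suggest_place(log, member, min_matches=100):
    counts = Counter(
        a.place for a in log
        if a.weekday == "Sunday" and 14 <= a.hour <= 17   # 2-5 PM
        and a.sunny and a.temp_f >= 65                    # warm and sunny
        and a.orientation == member["orientation"]
        and 25 <= a.age <= 40
    )
    if not counts:
        return None
    place, n = counts.most_common(1)[0]
    return f"Try {place} this afternoon!" if n >= min_matches else None

# 150 logged Sunday-afternoon accesses from Dolores Park by gay men 25-40.
log = [Access("Dolores Park", "Sunday", 15, 70, True, "gay", 32)] * 150
miguel = {"orientation": "gay", "age": 30}
print(suggest_place(log, miguel))   # -> Try Dolores Park this afternoon!
```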
Update: Related to this, check out this post describing GyPSii, a social network that tracks users’ GPS locations.
At Mobile Monday earlier this week, Ajit Jaokar focused on UGC (user-generated content), what he calls the “holy grail” of mobile web 2.0.
Instead of waiting for carriers to uniformly open up GPS triangulation to consumers through their APIs, he encouraged developers to take advantage of the falling prices of GPS components. They should consider pairing a pocket Bluetooth GPS receiver with their cell-phone-hosted, location-aware UGC applications.
What kind of applications?
- Identify unknown objects in a photograph using historical tag metadata. Ajit describes a related thought experiment on his blog, but the gist of it is that you can search a photo website for tags previously applied at the same location to identify blurry features in your own picture (see the first sketch after this list).
- Infer memorable events by looking for data thresholds. This is a bit of a thought experiment as well, and certainly unnecessary given the invention of AM radio, but hear me out. I’m not intimately familiar with Flickr’s API, but imagine the following:
- Let’s assume we live in a world where photos are instantaneously uploaded to a photo website from a wireless-capable camera.
- You set up a location-sensitive “trigger” that sends an SMS to your cell phone, Twitter-style.
- You set up a Flickr “rule” like “Send me a text message when Flickr receives 500 photos in a 10-second interval from AT&T Park (where the San Francisco Giants play baseball) over the next 10 days” (see the second sketch after this list).
You’ve just subscribed to be the first one to know when Barry Bonds beats Hank Aaron’s home run record! Furthermore, you could subscribe to the feed associated with those pictures.
I believe this concept is powerful and can be generalized further: I could subscribe to any Flickr photo feed once the number of pictures taken in a short time interval exceeds a certain threshold. Even if we remove the futuristic assumption that photos are immediately beamed to Flickr, the rule is still useful as long as the GPS coordinates and timestamps are preserved upon upload.
- Integrate olfactory devices and GPS (I am half joking here). By mashing up smells and locations, one can broadcast the existence of restaurants, gas leaks, fires, etc. It sure smells like UGC.
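Here’s what the photo-tag lookup from the first bullet might look like. The in-memory index stands in for a real photo site’s geo-search API:

```python
# Look up tags that other users applied to photos taken near a given
# GPS coordinate. The index and radius are invented for illustration.

def nearby_tags(photo_index, lat, lon, radius_deg=0.001):
    """Return tags of indexed photos within a small bounding box."""
    tags = []
    for p in photo_index:
        if abs(p["lat"] - lat) <= radius_deg and abs(p["lon"] - lon) <= radius_deg:
            tags.extend(p["tags"])
    return tags

index = [
    {"lat": 37.8199, "lon": -122.4783, "tags": ["golden gate bridge", "fog"]},
    {"lat": 37.7694, "lon": -122.4862, "tags": ["golden gate park"]},
]
# That blurry orange structure in your shot? Ask what others tagged here.
print(nearby_tags(index, 37.8199, -122.4783))   # -> ['golden gate bridge', 'fog']
```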
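And here’s the Flickr “rule” from the second bullet expressed as a sliding-window trigger. The upload stream is simulated, since Flickr offers no such trigger API:

```python
# Fire once when 500 geotagged uploads from one spot land within a
# 10-second window. The stream and SMS hook are assumed, not real APIs.

from collections import deque

class BurstTrigger:
    def __init__(self, threshold=500, window_s=10):
        self.threshold = threshold
        self.window_s = window_s
        self.timestamps = deque()

    def on_upload(self, ts):
        """Feed one upload timestamp (in seconds); True when the rule fires."""
        self.timestamps.append(ts)
        # Drop uploads that have fallen out of the sliding window.
        while ts - self.timestamps[0] > self.window_s:
            self.timestamps.popleft()
        return len(self.timestamps) >= self.threshold

trigger = BurstTrigger()
# Simulate a burst: 500 uploads from AT&T Park in about five seconds.
for ts in (1000.0 + i * 0.01 for i in range(500)):
    if trigger.on_upload(ts):
        print("SMS: something big just happened at AT&T Park!")
        break
```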
To summarize, by capturing aggregated user-generated content in the form of tags, time, subject context, and smell (!), we can infer potentially useful information.