Ever notice how the next generation is better at using the technology of their day than the previous generation? I don’t watch cable TV, but my dad does. I’ve tried to teach him how to use Hulu, Netflix, etc., but he resists. It’s not that Hulu and Netflix don’t have the movies he wants to watch. They have Charade. They have Gentlemen Prefer Blondes. And my dad could watch these movies without messing with his VHS or waiting for late night oldies on commercial TV. I’ve seen him watching The Cosby Show at 1 a.m., sitting through about 10 minutes’ worth of commercials!
Why? It’s a comfort issue. He’s already put in the effort to learn the interface language of his VHS player and, to a lesser degree, of DVDs. The laws of inertia have taken hold. He doesn’t want to learn a new interface language.
In the same way, those of us who grew up in the keyboard interface generation aren’t going to switch over to a voice-recognition interface any more easily than my dad will move into the digital TV era. We’re more familiar and comfortable with keystrokes than with issuing voice commands. Voice-recognition systems don’t work like natural language. Adapting to voice recognition is time-consuming, tedious and frustrating. Even in the future, nothing works.
It requires learning specific commands. And your elocution must be flawless. I grew up talking to people and can communicate with them (at least enough to get through the day), but that doesn’t enable me to communicate with a robot. These automated systems don’t learn new words or infer the meaning of unknown words from context and phrasing. In short, there are human elements that Siri simply can’t replicate … yet.
IBM has made amazing strides with natural language and automated learning systems. Its new invention, Watson, dazzled millions with his (did I just give that automated software system a gender?) performance on the game show Jeopardy! But even Watson doesn’t deliver the right answer every time. Even for simple answers, Watson requires a great deal of associated information within the database he looks at to make decisions about what’s germane and what’s irrelevant.
Sound familiar? It should. That’s the same process employed by search engines; their algorithms are very similar to Watson’s. Search engines need us to classify and associate information on a massive scale in order for them to work effectively. That process is known as search engine optimization.
It’s what I do. It’s what everyone here at Slingshot SEO does, in one form or another. Our goal is not to manipulate search engines; it’s to make our clients digitally relevant to search algorithms through content, references from across the web, classification, associative information, user approval and supporting engagement, thought leadership, process analysis and capability reviews. It all boils down to Content, Links, Architecture, Social and Strategy. Without them, search engines don’t operate very well. Search engines, Watson and Siri alike require massive amounts of supporting documentation from us, the users, to operate properly.
Siri can locate things that are near me. It can find me the three Chinese restaurants closest to my location. But it doesn’t find me the three best Chinese restaurants according to my personal taste, and it doesn’t tell me which of those three restaurants serve garlic noodles. I don’t use my phone to find the Chinese restaurant closest to me; I use my phone to find great Chinese restaurants near me. I’m not driving 150 miles for the greatest chow mein in Indiana, but I might drive 25 miles for the second best. It depends on whether it’s a workday evening or a weekend, whether the weather is fine, whether I’m meeting friends there or grabbing a quick bite before meeting them elsewhere.
Siri needs to get better at weighing my preferences against geo-location, tempering its suggestions with travel times, price and other preferences.
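To put that in engineering terms, the kind of blended ranking I’m asking for might look like the toy sketch below. Every name, weight and threshold here is invented for illustration; this is a guess at the shape of the problem, not how Siri (or anyone) actually does it.

```python
from dataclasses import dataclass

@dataclass
class Restaurant:
    name: str
    rating: float       # community rating, 0 to 5
    miles_away: float   # distance from the user
    price_level: int    # 1 (cheap) to 4 (expensive)

def score(r: Restaurant, max_miles: float = 25.0, max_price: int = 3) -> float:
    """Blend quality against travel cost, within the user's stated limits.

    The 0.7/0.3 weights are arbitrary stand-ins for learned preferences.
    """
    if r.miles_away > max_miles or r.price_level > max_price:
        return 0.0  # too far or too pricey: drop it entirely
    quality = r.rating / 5.0
    distance_penalty = r.miles_away / max_miles
    return 0.7 * quality - 0.3 * distance_penalty

# Hypothetical candidates near "my" location
places = [
    Restaurant("Golden Wok", 4.8, 22.0, 2),
    Restaurant("Panda Corner", 3.9, 1.5, 1),
    Restaurant("Jade Palace", 4.9, 150.0, 3),  # great, but 150 miles away
]
best = max(places, key=score)
```

Notice how the 150-mile option scores zero no matter how good it is, while a slightly worse restaurant around the corner can beat a slightly better one 22 miles out: exactly the chow mein trade-off above. The hard part, of course, is learning those weights per person, per occasion.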
Siri and Google might both start by tracking people who consistently share my opinions about restaurants, shops, events, etc. near my location. Neither platform has that personalization … yet. But both platforms are definitely on that path. I just think it’ll be another 10 to 15 years before keystroke or voice command automated systems match not just the level of my friends’ capacity to recommend things they believe I will like, but match the level of my closest friends who really know what I like.
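That “people who consistently share my opinions” idea has a name in the trade: collaborative filtering. Here’s a bare-bones sketch of it, with made-up users and ratings (nothing from any real platform): find my most like-minded rater, then suggest something they love that I haven’t tried.

```python
from math import sqrt

# Hypothetical ratings: user -> {restaurant: score out of 5}
ratings = {
    "me":    {"Golden Wok": 5, "Panda Corner": 2, "Jade Palace": 4},
    "alice": {"Golden Wok": 5, "Panda Corner": 1, "Jade Palace": 5, "Lucky Noodle": 5},
    "bob":   {"Golden Wok": 1, "Panda Corner": 5, "Jade Palace": 2, "Thai Spice": 4},
}

def similarity(a: dict, b: dict) -> float:
    """Cosine similarity over the restaurants both users have rated."""
    shared = set(a) & set(b)
    if not shared:
        return 0.0
    dot = sum(a[k] * b[k] for k in shared)
    norm = sqrt(sum(a[k] ** 2 for k in shared)) * sqrt(sum(b[k] ** 2 for k in shared))
    return dot / norm

def recommend(me: str) -> str:
    """Suggest the top unseen place from the most like-minded other user."""
    others = [u for u in ratings if u != me]
    twin = max(others, key=lambda u: similarity(ratings[me], ratings[u]))
    unseen = {r: s for r, s in ratings[twin].items() if r not in ratings[me]}
    return max(unseen, key=unseen.get)
```

Here "alice" rates things the way I do and "bob" doesn’t, so the system leans on alice’s opinions. Real systems juggle millions of sparse profiles, cold starts and gamed reviews, which is part of why I think that 10-to-15-year estimate is realistic.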
There’s been a lot of chatter lately about Siri signaling the end of search marketing and SEO. Siri doesn’t replace search engines. Heck, it doesn’t even replace Yelp and Urbanspoon … yet. Someday, it may mine data from those sites, delivering ever more personalized results, far beyond the capability of today’s ‘personalized’ search results. But none of that will spell the end of SEO.
Siri isn’t developing a new database; it’s developing a new, noisier user interface. And for the moment, it’s not ready for prime time, folks.
The next gen may be more apt to use Siri or voice recognition, but it’s going to take a significant amount of effort and investment from Apple and others before that happens. Microsoft had to put a computer in every classroom before everyone really learned to type. For the moment, keystroke entry is still the search process of choice. And should voice-activated search become popular in the future, I expect there will still be plenty of places where we won’t want to let everyone know what we’re searching for, or to hear everyone else in the airport, office, bus or grocery store chattering at their devices; the cacophony alone will likely relegate this technology to select locations and occasions.