The Next Big Thingd?

Thingd (Thing Daemon) is building a structured database of every object in the world and then mapping those objects (and associated metadata) to people and to other objects.  The concept is still in its early stages, but it is a big, ambitious idea and one worth thinking more about.

The easy (and slightly inaccurate) way to put Thingd in context is: Facebook organizes people, Google organizes information, and Thingd organizes things.  The lines get blurry though and I’ve written before about how Facebook and Hunch are creating massive collaborative filters that can improve recommendations for ecommerce and deliver more targeted content and ads across the web.

Thingd approaches the problem differently by focusing on the database itself.  It’s basically a utility in the way that Twitter has been described as a utility: a product whose core function is so basic that it can power a multitude of applications.  That is the promise and the reason for the excitement.

For example, Plastastic is a game for toy collectors that is built on the Thingd database (by Thingd).  Because it pulls structured data from Thingd, the site enables extremely granular browsing: you can browse for toys that are exactly 5.5″ tall.  More importantly, you signal your purchase interest by clicking “Have it” or “Want it” (similar to Like).  By “Wanting” lots of Handpainted Resin toys, Plastastic could show you other toys that people with similar tastes Like.  So, what’s exciting is not necessarily the collaborative filtering, but Thingd’s structured data (as long as it is of high quality).  Assuming an API is released, anyone could leverage Thingd’s structured data to build completely new kinds of web services.
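To see why structured data matters, consider what attribute-level browsing looks like in code.  This is a hypothetical sketch (the records and the `browse` helper are invented for illustration; Thingd’s actual schema and API are not public):

```python
# Hypothetical sketch of attribute-level browsing over structured object
# data.  The records and the `browse` helper are invented; Thingd's
# actual schema and API are not public.

toys = [
    {"name": "Dunny",  "material": "vinyl",             "height_in": 3.0},
    {"name": "Labbit", "material": "plush",             "height_in": 5.5},
    {"name": "Mongo",  "material": "handpainted resin", "height_in": 5.5},
]

def browse(objects, **filters):
    """Return objects whose attributes exactly match every filter given."""
    return [o for o in objects
            if all(o.get(k) == v for k, v in filters.items())]

# Granular browsing only works because height and material are structured
# fields, not free text buried in a product description.
five_and_a_half = browse(toys, height_in=5.5)
resin_only = browse(toys, material="handpainted resin")
```

The point is that none of this is possible with unstructured product listings; the quality of the attribute data is everything.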

Since Thingd’s database will theoretically include every object, it could empower anyone to very easily become a buyer and (passive) seller via its marketplace.  One could also imagine extending Thingd to other services: consider how Thingd could be integrated with Facebook profiles.  It would be far more useful than Facebook Marketplace.  Using image recognition, the database could also be used to power affiliate services on sites like Pinterest and Aprizi (similar to how Pixazza works).  While an affiliate business model is a first thought, other more interesting models could emerge.

Thingd also launched a platform for mobile developers called productids.org, which provides access to more than 100 million UPC barcodes tied to Thingd’s database.  So, if you’re shopping for a bike at a store, imagine scanning the bike’s barcode with your smartphone and then browsing for similar bikes based on the specific attributes you care about (style, number of speeds, material, color, etc.).  You could check prices and availability online and at physical stores (via Milo).  An integration with Google Goggles would be even cooler: take a picture instead of scanning the barcode.
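The scan-then-browse flow can be sketched in a few lines.  Everything here is illustrative: the UPCs, the catalog, and the lookup shape are made up, and productids.org’s real response format may differ:

```python
# Illustrative sketch of the scan-then-browse flow: resolve a UPC to a
# structured record, then find items sharing the attributes you care
# about.  The UPCs and catalog are invented; productids.org's real
# response format is an assumption here.

catalog = {
    "036000291452": {"name": "City Cruiser", "type": "bike", "speeds": 3,  "material": "steel"},
    "036000291469": {"name": "Road Flyer",   "type": "bike", "speeds": 21, "material": "carbon"},
    "036000291476": {"name": "Town Rider",   "type": "bike", "speeds": 3,  "material": "steel"},
}

def lookup_upc(upc):
    """Stand-in for a barcode-to-object lookup service."""
    return catalog.get(upc)

def similar(item, objects, *attrs):
    """Objects of the same type sharing the listed attributes, excluding item."""
    return [o for o in objects.values()
            if o is not item
            and o["type"] == item["type"]
            and all(o[a] == item[a] for a in attrs)]

scanned = lookup_upc("036000291452")   # the bike in front of you
matches = similar(scanned, catalog, "speeds", "material")
```

An app would do the same thing over the network: one call to resolve the barcode, one query filtered on the attributes the shopper cares about.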

In addition to ecommerce business models, advertisers and publishers might also be interested in connecting to people based on users’ prospective (Want It) and historical (Have It) purchase habits.  As long as the structured data is consistent and clean (very difficult to achieve if attributes are crowdsourced), there is a lot a developer could do with a Thingd API.

The immediate challenge for Thingd is to continue improving the user experience while building and refining its database.  While the experience on Thingd.com isn’t seamless yet, the company recently launched Fancy, which is a lot more user-friendly than Thingd.com in allowing you to tag images and discover new stuff.  Fancy will make it easier for Thingd to collect data on more objects and there’s no doubt the company will be adding features as it rolls out (would be nice to have a bookmarklet for easier off-site tagging; small design tweaks like tiled images and infinite scrolling).

It’s clear that there is no shortage of options for Thingd, so the question will be product focus and execution.  I’m really excited to see how Thingd develops.  It’s working on a truly massive idea.

Deconstructing The Economist

The Economist is my favorite newspaper.  It’s also one of the few profitable news publishers.  Why is it successful while the rest of the industry is gasping for air?  It’s not because they’re extremely innovative in how they charge for or deliver content.  They don’t have a splashy online presence, although they have slowly waded into crafting online-only content that takes advantage of the web’s interactivity.  They don’t have a fancy iPhone or iPad app yet.  The simple reason for their success is the content itself.

There is no close substitute for The Economist’s product and so people are willing to pay for it.  They cover business and political news with a breadth and depth that is unequaled, and because they are a weekly, they can chew a bit more on their stories before publishing.  Most importantly, they are transparent about their views (socially liberal, fiscally conservative).  They take a side on every issue but deliver arguments in an even-handed way, enabling the reader to agree or disagree with the facts and data presented.  They don’t even publish bylines, making it all about what is written and not who has written it.  Since I like their style, and would love to read more news written this way, I thought I’d briefly deconstruct a typical article to see how their editors tick.

Each article’s headline is either brief and opaque (e.g. “Race to the bottom” or “An empire built on sand“) or a non sequitur (e.g. “No, these are special puppies” or “Cue the fish“).  Sometimes the headlines read like the Old Spice guy wrote them (“I’m on a horse”), but that’s part of dressing up what could otherwise be dull subject matter.  The non sequiturs force you to read on just so you can understand what they’re talking about.

The subheader is all business.  There’s no fat; subheaders read like well-crafted tweets.  For an article on genetic testing, one reads, “The personal genetic-testing industry is under fire, but happier days lie ahead”.  The subheader is usually one or two sentences and plainly states the paper’s view on the issue (“Google has joined Verizon in lobbying to erode net neutrality“).  I really like this method because at bottom all journalists have an opinion, so why not be transparent and lay it bare?  Laying out the conclusion in a couple sentences upfront also lets the reader get something even if she’s just paging through.

Many times, the lede introduces the article with a story or quote.  In the article about personal genetic testing, it starts with a quote from Ozzy Osbourne, “By all accounts, I’m a medical miracle”.  It gets the reader interested and humanizes what would otherwise be a purely technical subject.

The next paragraph or two include a strong argument in support of the subheader.  In the genetic testing article, the author writes about how Osbourne “is not alone in wondering what mysteries genetic testing might unlock”.  It goes on to say that Google is invested in a genetic testing company and that Warren Buffett himself has been tested.  It discusses the promise of people understanding their risk for getting a disease and how that information holds great promise for treatment.  At this point, the reader may be wondering how anyone could oppose genetic testing.

Not so fast.  In typical Economist fashion, the next paragraph eviscerates the thesis of the article.  While the subheader supported genetic testing, the next paragraph cites a damning report from a credible source that found many test results were “misleading and of little or no practical use to consumers”.  Game over.  At this point, the reader is scratching her head thinking, “I thought testing was supposed to be a good thing?!”  By so willingly presenting a counterargument, The Economist strengthens the credibility of the view it supports in the subheader.  It’s an effective writing tactic, and their use of it has made me a more critical reader, forcing me to always search for the other side of the story.

The bulk of the article then presents a series of facts from either side, filtering each one through the views presented in the subheader.  The facts are laid out, but placed in context.  The reader can decide which facts are convincing and which might be discarded.

The final paragraph is almost never definitive.  It summarizes the main arguments and concludes with an open question of whether or not The Economist’s view will come to pass.  It’s not that they waffle; they’re just practical.  And they don’t play the prediction games that all the cable news talking heads play.  I think it’s a refreshing approach that lets readers decide on the future for themselves.

Coke’s API and the Outsourcing of Innovation

While he may not have realized it at the time, Asa Griggs Candler helped pioneer the “platform” business model used by thousands of web companies today.  But Candler wasn’t a tech startup guy; he was the founder of The Coca-Cola Company.

Coke became a platform company almost by accident.  Beginning in 1886, Coke was principally sold to soda fountains, and Candler saw little demand for bottled Coke.  Couldn’t you just get a Coke at your local drugstore?  Besides, creating a bottling and distribution operation was expensive and lower-margin.  The Coca-Cola Company had great margins.  They didn’t even make the Coke that people drank.  They manufactured concentrated Coke syrup and marketed the final product.  It was the pharmacy soda fountains that added carbonated water to Coke’s concentrate to make the drink.

In the late 1890s, Candler was approached by two Chattanooga businessmen who proposed creating a Coke bottling operation.  Candler signed a contract with the two men, giving them control of Coke bottling for one dollar (Candler never actually collected the dollar).  Whether Candler didn’t see the potential for bottled Coke or was nervous about the risks and costs associated with building a bottling operation in-house, the decision was incredibly beneficial to Coke over the long-term.  Coke was able to focus on its two core competencies while the technology and manufacturing processes required to bottle mass quantities of soda developed in parallel.  Coke benefited massively from the greater volume and distribution that bottlers enabled.

APIs provide a similar function in the programming world.  Platform companies such as Twitter have stuck to developing their core capabilities (their syrup and marketing) while enabling others to innovate around the Twitter APIs.  Basically, it’s a licensing strategy.  As of this spring, Twitter was generating about 75% of its traffic through third-party clients (its bottlers) built on the Twitter APIs.

Apple’s app strategy for iOS is another example.  Apple developed a few core applications (iCal, Safari, Mail, etc.) for the iPhone platform and then enabled the creation of hundreds of thousands of applications via the iOS SDK.  This has enabled Apple to create enormous value for its platform without shouldering the costs of building thousands of applications in-house (which it couldn’t do with the same efficiency and creativity of its developer community anyway).

There are of course benefits and dangers to innovating on top of another company’s platform.  Primarily, there is a hold-up problem with this type of innovation.  For example, Coke could hold its bottlers hostage and force the bottlers to pay higher fees for concentrate.  On the other hand, bottlers could threaten to choke off Coke’s distribution.  The way to solve these hold-up problems and to improve manufacturer-supplier coordination is through vertical integration, and that’s exactly what we have seen from both Coke and Twitter.

Vertical integration allows a company to remove coordination problems and reduce costs by bringing the relevant supplier(s) or partner(s) in-house.  Coke did this by acquiring its North American bottling partner in February 2010.  Twitter did this by acquiring complementary functions such as search (Summize) and mobile (Tweetie).  These acquisitions reduced coordination problems for Twitter, enabling them to accelerate development of their roadmap.   Most importantly, both companies were able to maintain focus while enabling the ancillary innovation that would become critical to their long-term growth.

On a much broader level, the trend of outsourcing corporate R&D to venture-funded companies seems to be accelerating.  Steven Kaplan and Josh Lerner wrote a great paper explaining this trend, noting that venture-backed firms are three times as efficient in generating innovations as corporate research.  Incumbent (and upstart) technology companies can take advantage of this trend by providing entrepreneurs with the tools that accelerate innovation: APIs, open source software, and greater access to data/information.

What a 100-Year-Old Race to the South Pole Teaches Us About Design

Design informs most of what we come in contact with whether it be architecture, mobile devices, cars, software and web services, or a school’s curriculum.  Sentences are designed, edited down so they convey meaning with efficient elegance.  “Good” design delights with its simplicity, its flexibility and ease of use.

Design was on my mind while walking through the Museum of Natural History’s excellent exhibit on the race between Robert Falcon Scott and Roald Amundsen to first reach the South Pole (1911-1912).  Perhaps it’s a strange place to be thinking about design, but expeditions, especially those attempting to first reach the South Pole, are amazing crucibles for design.  Each team had to carefully select its route and take nearly everything with them: fuel, clothing, plenty of food for themselves and their animals, shelter, transportation, etc.  It was critical to design the expeditions so that they would be flexible enough to meet changing conditions.  In fact, Scott didn’t realize he was in a race until receiving a surprise telegram from Amundsen: “BEG LEAVE INFORM YOU PROCEEDING ANTARCTIC — AMUNDSEN”.

It is through this lens that we can see how each team’s preparation, experience, and design choices impacted their efforts.  Ultimately, Scott reached the pole only to find that Amundsen had beaten him to it.  Freezing cold, frostbitten, and running short of supplies, Scott and his team lost their lives on the return.  Tragically, the remaining polar party was found only 11 miles short of the relative safety of their main depot.  On the other hand, Amundsen’s team reached the pole before Scott without any loss of life.  How did this happen?  What can design teach us about these outcomes?  How can these lessons be applied to the less lethal, but similar, challenges of building teams and operating companies?

Competing Goals vs. Singularity of Purpose
Scott’s Terra Nova Expedition had competing goals.  Not only was it seeking to reach the South Pole first, but it also had various scientific goals requiring additional manpower and equipment.  Scott’s expedition was well-publicized and he knew that its success would hang on whether he reached the pole.  While the scientific work was important, it was ultimately a distraction.  Scott set up camp at Cape Evans since it was a better area for the scientific work they planned to complete.  However, it was 60 miles further from the pole than Amundsen’s camp on the Ross Ice Shelf.  Scott had already disadvantaged his team before the journey began.

On the other hand, the Amundsen Expedition designed itself around one clear goal: reach the South Pole first.  The route, equipment, team members’ skills, mode of transportation, food supply – everything – was selected for the sole purpose of reaching the pole first.  Amundsen fielded a small, agile team of only nine men, some with Arctic experience and others who were completely green.  But they were built for speed and brought 52 dogs with them.  In contrast, Scott had 65 men (including the ship team) when only five would make the final trek to the pole.  In a showing of Amundsen’s focus, he took only two pictures the entire expedition, while Scott’s team extensively documented their efforts and brought 35,000 cigars with them.

Perhaps even more important was that everyone on Amundsen’s expedition understood that there was only one goal.  This likely freed expedition members to make informed decisions without having to weigh any choice in the context of competing goals.  Tellingly, on the return from the pole and nearing exhaustion, Scott’s team added 30 pounds of geological specimens to their sledges.

Start Simple and Iterate
Scott’s team also had a complex transportation plan that involved ponies, dogs, three motorized sledges, and “man-hauling” (like it sounds: hauling your own supplies).  The motorized sledges cost 7x what the dogs and ponies cost combined, yet three-quarters of the distance was completed by man-hauling.  The ponies were only used for the first 25% of the trip, as they were not suited to travel up the Beardmore Glacier.  In an inauspicious beginning for Scott, one of the motorized sledges fell through the ice while being unloaded from the ship, and the remaining two were abandoned due to mechanical failures.

Amundsen’s team kept things simple.  They relied exclusively on dogs for transportation, calculating correctly that dogs would be able to make it over any terrain they would encounter.  Despite their affection for the dogs, Amundsen’s expedition relied on the weaker dogs for food, both for the dog team and for themselves.  Scott was reluctant to use dogs in this way, although he didn’t shy from using the ponies for food.  Scott also ignored the expert advice of Fridtjof Nansen, the famous Norwegian explorer, who told him to bring “dogs, dogs, and more dogs”.  Scott received this advice while trialing his new motor sledges in Norway and, likely feeling the momentum of the sledges’ expense and the effort involved in developing them, decided to continue using them.  While Amundsen fed his dogs with seals and penguins, Scott was forced to bring the ponies’ food from England and carry the extra weight during the expedition.

Upon reaching Antarctica, Amundsen’s lead skier, Olav Bjaaland, redesigned the sledges, tents and footwear.  While Scott’s team used their sledges as delivered, Bjaaland shaved the Norwegians’ sledges down, reducing each sledge’s weight from 165 pounds to 48 pounds.  Further, the boxes hauled on the sledges were designed so that their contents could be accessed without unloading them.  The Norwegians also soldered their fuel cans closed to eliminate evaporation.  Scott knew of the evaporation issue from his experience on an earlier expedition with Shackleton, but his expedition used cork plugs anyway, and was dismayed to find that significant amounts of fuel had evaporated by the time the team reached the depots.  Lastly, Amundsen outfitted his men with loose-fitting fur clothing that kept them warm and dry, a technique he picked up from his experience with the Inuit.  Scott selected closer-fitting windproof materials that trapped perspiration, leaving his team wetter and colder.  Amundsen let the innate talents of his small team run while drawing on his own experience and the advice of others.  These were all fairly small design choices that, in combination, had a very positive impact on Amundsen’s chances.

A Grand Vision and Practical Steps to Achieve It
The grand vision was to achieve the pole first, but each expedition sent teams ahead to lay necessary route markers and set up depots.  The markers made it easier to navigate their respective routes and the depots provided food, fuel, and equipment in the field.  During critical stretches, Amundsen’s team methodically laid markers every mile, using pre-painted black food containers to show the way.  Closer to the pole, he erected 6-foot cairns every three miles which included a note indicating the cairn’s location, the direction to the next cairn, and the distance to the next supply depot.  These cairns acted as effective milestones for the team, aiding navigation and providing much-needed signals of progress.

Scott’s depots were laid out less regularly and were marked with one flag each.  Walls built to protect the ponies during lunch and night stops doubled as markers, so there was no regular spacing to help with navigation.  Unlike Amundsen’s markers, Scott’s were laid further apart, making it impossible to travel on inclement days with poor visibility.  With a simpler and more structured design for route-marking, Scott’s team could have traveled in most weather, and might have been saved.

Building a Team
Building an effective expedition team meant finding the right balance of skills and personalities.  In preparation for the expedition, Scott hired an engineer, Reginald Skelton, to create the motorized sledges.  However, when it came time to choose the expedition members, Scott bowed to the demands of his second-in-command, “Teddy” Evans, who objected to Skelton’s selection.  Evans took issue with the fact that Skelton out-ranked him in the British Navy – Evans did not want a more senior officer to overshadow his position.  Allowing this issue to become politicized was a clear failure of leadership on Scott’s part; he should have found a place for Skelton and dealt with Evans’ concerns.  Without Skelton’s skills, two of the three motorized sledges had to be abandoned after running into mechanical issues that Skelton likely could have fixed.

The Norwegians were also accomplished skiers and were able to keep up with the dogs pulling the sledges.  The Norwegians knew how to care for their dogs as well, keeping track of mileage and being sure not to overwork them.  Amundsen even brought Bjaaland, a champion skier, to pace his team.  While Scott also brought a Norwegian skier to train the rest of the men, Scott didn’t require them to train.  This became a major hindrance: most of his British teammates had little or no experience on skis, and they awkwardly learned while on the expedition, hauling their supplies all the while.  Scott’s lack of leadership and seeming willingness to let politics impact his selection of individuals with the appropriate skills put his whole expedition at a disadvantage.

Making Your Own Luck
These two expeditions captured my imagination with the details we have from Scott’s diary and Amundsen’s own account.  It’s a fascinating piece of history that offers some interesting lessons on how to design and lead teams in conditions harsher than most of us will ever experience.  In his book The South Pole, Amundsen concludes:

I may say that this is the greatest factor – the way in which the expedition is equipped – the way in which every difficulty is foreseen, and precautions taken for meeting or avoiding it. Victory awaits him who has everything in order – luck, people call it. Defeat is certain for him who has neglected to take the necessary precautions in time; this is called bad luck.

— Roald Amundsen

The defining principle I take away from Amundsen’s success is that if something can be done simply, it’s almost always preferable to a complex solution.  So, the next time someone offers you three motorized sledges (and nobody to fix them) for your polar expedition or 52 dogs, take the dogs.

“Should I Get an MBA?” and Why It’s Not about the (any) Degree

There has been a lot of discussion about the best educational background for founders and for those who want to join startups.  Earlier this month, TechCrunch ran an article from Vivek Wadhwa that argued in favor of an MBA education.  Similarly, Vinicius Vacanti (a former banker) wrote a great post, “5 Reasons Why I-Bankers Make Bad Tech Entrepreneurs“.  Steve Blank wrote a very practical and balanced post about whether folks should get an MBA or an Engineering Management degree.  Guy Kawasaki has pegged the value of an MBA to a post-college entrepreneur at negative $250k.

There’s clearly a lot of conflicting wisdom out there.  After reading some of these articles, my initial concern was that some of these perspectives could lead would-be entrepreneurs to question their ability to start a company based on whether they had the right background.  But the concern is misplaced because anyone who could be dissuaded from starting a company based on one person’s opinion probably isn’t ready anyway.

The bottom line is that if you are looking to start a company, formal education matters little: it is about what you can do.  And what you can do – if you’re talking about founding a tech startup – is all about your vision and passion for the product and your ability to execute.  For this reason, if you are deciding where to focus your formal education, I strongly suggest engineering.  As my dad told me, it’s difficult to learn “hard sciences” once you’re out of an undergraduate program (unless you’re building on a background in hard sciences).  While you might pick up a history book for leisure after graduating, you’re probably not going to relax by doing calculus or learning Cocoa.  This is not to say it’s impossible to gain these hard skills once you’re out of college (I’ve had to do a bit of this), it’s just easier to learn these things in an academic setting when you probably have more time and fewer real-life distractions.

I wouldn’t be overly concerned about gaining accounting and finance skills at first – these are very valuable, but again, the core value of a startup is not finance and accounting, it’s the product.  You’ll acquire business skills with experience and if your startup gains traction, you will be in a good position to raise money and attract the talented sales, BD, and finance folks needed to build the product into a moneymaking enterprise.

Assuming you have the knowledge, passion and ability required to start a company, get started on it.  You don’t need to get an expensive graduate degree.  And, if you have an expensive graduate degree, that doesn’t make you any less “able” as a founder or startup exec.  Everyone has innate abilities regardless of educational background and, if you’re building a founding team, it’s your job to find people who complement your abilities.  And if you’re a startup looking for exceptional people, it’s your job to recognize talent independent of a candidate’s pedigree or formal training.  Finding the right people is complicated: use too coarse a filter (Ivy-only, college-grad only, no MBAs, MBA-only) at your own risk.

Open Graph is Facebook’s Beacon Pivot

Facebook Beacon was the company’s much-maligned initiative that captured off-Facebook browsing activity and broadcast it to one’s Facebook friends.  Signing up for a service, purchasing a product, adding an item to a wish list – all of these actions were automatically shared.  It worked by having the affiliate site call a JavaScript snippet from Facebook that sent the user’s IP address, the addresses of the pages browsed, and any actions taken on the partner site back to Facebook (deleting FB cookies wouldn’t stop this tracking).  The snippet even tracked those without a Facebook account, assigning them a unique identifier.

Beacon was pulled due to privacy concerns.  The goal was to create an ad/affiliate network that would let Facebook make specific recommendations based on users’ actions off-Facebook, broadcasting those implicit preferences to users’ friends.  The problem Beacon hoped to solve was one of “intent”: people go to Facebook to see what their friends are doing, not to research with the “intent” to make purchases (as when searching on Google).  Without “intent”, Facebook users don’t click on many ads and CPCs are low.  Alas, Beacon’s attempt to remedy the intent problem failed, but that may have been the best thing to happen to Facebook.

Enter Open Graph
The failure of Beacon forced Facebook to rethink how to solve its “intent” problem, and Open Graph is a brilliant, if worrisome, solution.  Basically, Open Graph allows publishers to put “Like” buttons next to articles, products, blog posts, etc.  If you’re signed into Facebook, you can “Like” something on the publisher site and that gets posted on your Facebook page with a link back to the publisher.  Users will also see their friends’ actions on that publisher site.  Unlike the soon-to-be-retired Facebook Connect, your Likes will not only be displayed in your Activity Stream, but also persistently stored against your Facebook profile.  Since Open Graph supports semantic markup of objects using RDFa, Facebook will know that what you Like is a book, song, band, etc. and not just a web page (as of today, the API doesn’t support multiple objects per page).  So, the idea is that Facebook learns and stores what you and your friends Like across the entire web.  The Open Graph API not only writes this information to your Facebook profile, but also allows a publisher to read your profile’s Likes in order to customize your experience on the publisher’s site.  As CEO Mark Zuckerberg explains, this would be pretty useful to a concert site looking to tap into the data FB stores against its users:

“…if you like a band on Pandora, that information can become part of the graph so that later if you visit a concert site, the site can tell you when the band you like is coming to your area. The power of the open graph is that it helps to create a smarter, personalized web that gets better with every action taken.”
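
The mechanics behind this are simple: a publisher annotates its pages with Open Graph meta tags (og:title, og:type, etc.), and any consumer can read them back.  As a rough illustration (the sample page and parser below are hypothetical, not Facebook’s actual crawler), here is how those tags can be extracted:

```python
# Open Graph works because pages describe themselves with machine-readable
# <meta> tags (og:title, og:type, etc.), so a Like can be attached to a
# "band" or "book" rather than a bare URL.  The sample page and parser
# below are illustrative, not Facebook's actual crawler.

from html.parser import HTMLParser

class OGParser(HTMLParser):
    """Collect og:* properties from a page's <meta> tags."""

    def __init__(self):
        super().__init__()
        self.properties = {}

    def handle_starttag(self, tag, attrs):
        if tag != "meta":
            return
        attrs = dict(attrs)
        prop = attrs.get("property", "")
        if prop.startswith("og:"):
            self.properties[prop] = attrs.get("content")

page = """
<html><head>
  <meta property="og:title" content="IMDb" />
  <meta property="og:type" content="website" />
</head><body>...</body></html>
"""

parser = OGParser()
parser.feed(page)
# parser.properties now maps og:title and og:type to their values
```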

Why Open Graph will Succeed Where Beacon Failed
The implications of Open Graph are extremely important.  Through user-generated “Likes”, Facebook will become the central repository for your and your friends’ preferences and that information will be used by FB and its partners to make recommendations (sell things) to you on- and, more importantly, off-Facebook.  Like Beacon, Open Graph attempts to leverage users’ off-Facebook actions so that FB can be there when the user has the “intent” to buy that concert ticket.  But Facebook learned its lessons from Beacon’s failed attempt to (some might say) surreptitiously track and broadcast users’ actions.  Unlike Beacon, Open Graph will succeed by giving “control” to users.  Namely, the Like button will get users to voluntarily share their Likes with friends.  Facebook will then use this information off-Facebook at the concert site when the user’s intent is to purchase tickets.  There is no lack of cunning in this Beacon pivot.

It seems Open Graph has all the ingredients for success.  Publishers will implement it to generate more traffic and improve monetization.  Users will enjoy seeing what their friends Like and will generally appreciate a more customized browsing experience.  Furthermore, it’s easy.  Users have been trained to click Like buttons all over the web and since these buttons are the lowest-common-denominator contribution (vs. rating, tweet, comment, review, picture, video, blog post, etc.), the barriers to participating are low.

Implications for the Taste Graph
Open Graph may also have implications for sites such as Hunch that provide recommendations (on- and off-Hunch) based on what other people like you enjoy (what Hunch calls the “taste graph“).  While less sophisticated (and less fun), Facebook’s Like button is similar to the “Teach Hunch About You” questions that give Hunch the data it requires to make recommendations.  Facebook’s clear advantage over Hunch is its massive installed base of 500 million users, which will attract publishers to implement Open Graph.  Nonetheless, publishers should consider the long-term implications of implementing Open Graph for a couple of reasons.  First, supporting alternatives to Open Graph preserves competition and will help drive continued innovation.  Second, Facebook doesn’t have much experience building collaborative filtering systems, and it’s not clear whether a simple “Like” system can generate the type of data necessary to deliver effective recommendations (that drive conversions, etc.).
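To make the comparison concrete, here is a bare-bones sketch of taste-graph-style filtering over binary Likes.  The users and Likes are invented, and real systems (Hunch’s included) are far more sophisticated; this only shows why even simple Like data carries recommendation signal:

```python
# Bare-bones taste-graph sketch: treat each user as the set of things
# they Like, score similarity with Jaccard overlap, and recommend what
# the most similar other user Likes.  Users and Likes are invented.

likes = {
    "ann": {"radiohead", "wilco", "the national"},
    "bob": {"radiohead", "wilco", "spoon"},
    "cal": {"kenny g"},
}

def jaccard(a, b):
    """Overlap of two sets: size of intersection over size of union."""
    return len(a & b) / len(a | b) if a | b else 0.0

def recommend(user, likes):
    """Items Liked by the most similar other user that `user` lacks."""
    mine = likes[user]
    score, best = max(
        ((jaccard(mine, theirs), theirs)
         for name, theirs in likes.items() if name != user),
        key=lambda pair: pair[0],
    )
    return best - mine
```

With a real dataset behind it, this same shape of computation is what would let a publisher customize a page around a visitor’s Likes.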

Regardless, Facebook is positioned extremely well.  They provide an increasingly compelling product to a huge and rapidly growing user base.  While it’s not clear whether Open Graph will be widely adopted, more thought and resources should be directed towards initiatives such as OpenLike and XAuth that could counter-balance Facebook’s awesome success.

One Way to Join a Startup and “Try Before You Buy” Hiring Practices

Earlier this week Bijan Sabet at Spark Capital wrote a post, “An inspiring way to join a startup”.  The post describes how a couple of companies in Spark’s portfolio had individuals offer to work for free in exchange for the possibility of a full-time position.  I don’t know what the exact circumstances were for each candidate, but if the candidates were switching industries and this was their first startup, I’m a little surprised that this path is considered unique.  Having lived this process as both a newbie and an employer, below is the story of how I got into the startup world and some thoughts on “try before you buy” hiring based on my experience.

In August 2001, after I had worked for a couple years at JPMorgan (my first job out of undergraduate), I finally decided to quit and join a startup.  I had known since before day one that I didn’t want to make a career out of banking.  To me banking was a two-year graduate school program that would provide much-needed skills in finance and accounting that I didn’t get while studying economics and Russian in college.  By the time I left, I thought I had the wind at my back.  I had been offered a promotion to Associate and naively figured it would be easy to get a startup job.  Wrong.

During my two years at JPMorgan I had been reading everything I could about the NY startup scene and keeping a spreadsheet of all the NYC-area startups.  I had even gone on a few interviews to test the waters.  About a month after I left, I was out in San Francisco visiting a friend and woke up early on September 11, 2001 to catch my flight back to New York.  I won’t recount that terrible day here, but needless to say I was in San Francisco for another week.  Returning to New York many days later, I arrived in a different city than I had left.

This was not a good time to be looking for a startup job.  New York was reeling from 9/11 and the internet bubble had burst.  Then I tore my ACL playing soccer at Chelsea Piers.  Awesome.  The day it happened, I had 24 hours left to send in my COBRA before it expired.  I had the surgery and then focused my efforts on getting a job at Colloquis (then called ActiveBuddy), a developer of natural language software systems.  I had already been following the company’s progress for some time.  To me, Colloquis’s technology was indistinguishable from magic.  I totally fell in love with it and decided I had to work there.  My initial application (through the company’s website) was rejected.  Then I had a stroke of luck and was introduced to the CEO, Steve Klein, through a mutual friend.  Finally, I got a meeting with him.  I remember it well.  Steve was firing questions at me, and at one point he asked, “how much did you make at JPMorgan?”.  I told him and he laughed.  “That’s ridiculous!”, he said.  “You’re two years out of school.  You have no real experience.  Nobody is going to pay you that outside of finance.”  I told him I knew that and that I wasn’t expecting a similar salary.  Of course, the other Colloquis-specific factor at play here was that the company was in major trouble.  Management had been recently replaced and the company was scrambling to close necessary financing.

Regardless, Steve was absolutely right.  Sidebar: there’s no chance in hell that a startup is going to pay someone with no operating experience a six-figure salary.  In fact, if you find a startup willing to pay you a six-figure salary when you have no experience, you should run in the other direction.  They’ll be toast in a year.

Despite my lack of experience, Steve saw how fired up I was and he gave me a shot.  I asked that we formalize the arrangement in a letter.  We worked out an offer letter outlining what my responsibilities would be, success metrics, etc.  While I agreed to work for free he offered to pay my expenses: cell, internet, etc.  It was April 2002 and the company was still trying to close its next financing round.  A few months after I began working, Steve unfortunately had to lay off 75% of the New York office.  It was terrible and those were the worst days, but the short story is that the remaining team turned the company around, built it back up, and we successfully sold Colloquis to Microsoft in 2006.  I started earning a salary around the time Steve pared down the company to five in NY (I was the only person left on the business side plus Steve), and by the time of the sale I was a member of the management team.  It was an amazing experience to have worked with such incredible people and to have lived through all the ups and downs with them.  If there is any lesson to be learned from this it is that sometimes you need to realize that you know nothing – you must take a few steps back before you can start climbing again.

Years later, and having had the experience of hiring as an employer, I am a big fan of “try before you buy” hiring practices, particularly for job seekers switching industries.  It’s very difficult to know how someone will perform until they’re actually working.  A period of purgatory gives a company the opportunity to get to know the individual, and vice versa.  These trials can take many forms and don’t necessarily mean working for free.  One path might be to give candidates a discrete paid (or unpaid) project to complete.  In any case, timing and objectives should be laid out in writing beforehand so that both parties go into the evaluation period with the same expectations.  While “try before you buy” is not possible (or even preferable) in many circumstances, it can be an effective tool to ensure that you get the right people on the bus.

Thoughts on Hiring a VP Sales and Why Rolodexes are Usually BS

Hiring a VP of Sales at a startup is challenging.  While others have written eloquently about when to hire a VP Sales, I will focus on what type of person to hire.  Having run a sales team, worked under a VP Sales and Business Development, hired a VP Sales, and served as an adviser to a VP Sales, I am beginning to see some patterns for success (but I don’t pretend to have all the answers).  In one case, I worked with a company that went through five VP Sales plus a handful of interim “consultant” types in a period of four years.  While I wasn’t able to get directly involved in the hiring decision until the last hire, I learned a ton in the process.  Here are some thoughts on why hiring a VP Sales can be so difficult, and some ideas around how to minimize the hiring risks.

Find a Missionary, Not a Mercenary
Larry Cheng wrote a great post contrasting missionary CEOs with mercenary CEOs.  It is obvious that at the CEO level a company requires a missionary leader.  Building a company is far too difficult and uncertain a process for a mercenary attitude to sustain one’s motivation.  While a missionary CEO is an obvious choice, finding a missionary VP Sales is perhaps more important, because the internal expectation at most companies is that the VP Sales will be a mercenary with little concern for product or people outside of sales.  As a result, a VP Sales who loves the people (see Cultural Fit below), product, and market is great for company morale, particularly when the going (inevitably) gets tough.

Many will argue that having a singularly-motivated mercenary VP Sales is the right choice.  I have been involved with companies where VC board members have pounded the table in favor of hiring a senior “sales animal” without regard for other qualities.  I’m sure there are companies that have successful sales teams built around this type of person, but I just haven’t seen this strategy succeed.  Of course a company needs a smart and aggressive person to build the team and systems required to scale revenue, but while those ingredients are necessary they are not sufficient to create an authentic, resilient and (most importantly) productive sales practice.  To make the dough rise, your VP Sales must also be passionate about the people, technology, the market, and the company’s value proposition to customers.  This is especially true for companies that are still tweaking their product and value proposition.  In these cases, it’s important to have someone who can listen to customers and translate that feedback to the product, marketing, and development teams; in these cases, the sales team is the company’s ears.  In earlier-stage companies, this person doesn’t have to be a “VP Sales” – in fact hiring a VP Sales too early is an unfortunately common kiss of death.

Cultural Fit
In many cases, the CEO/Founder has been leading the business development and sales efforts up until the point at which the VP Sales is hired.  The VP Sales is brought in to scale the sales operations (personnel, comp plans, forecasting, systems, etc.), which might already include one or two account managers and someone running business development.  Taking the step to scale up sales can lead to a lot of internal strife if the cultural match is off.  If the VP Sales’ style is too different from that of the CEO/Founder, the culture can easily splinter into dysfunctional fiefdoms.  One thing I’ve found that seems to quickly divide cultures is curiosity, or the lack of it.  Founding teams are by nature curious people who enjoy solving problems.  It makes sense to hire for this trait in the sales function, too, particularly when your efforts are just taking shape.  At that stage you will want more of a BD person with technical and market knowledge who can execute a handful of deals with early partners/customers.  However, if the BD person is also the person you intend to have scale the sales team, that person must also have experience managing a team and taking a data-driven approach (with Salesforce or similar) to build and manage a pipeline, determine conversion rates, etc.

Rolodex and Domain Expertise
If a candidate sells you primarily on their “Rolodex”, chances are they are full of it.  The best VCs have senior management or board-level contacts at prospective customers and partners, and these VCs can be very helpful in opening doors for you.  However, salespeople – even very senior salespeople – generally cannot.  Unlike someone focused solely on sales, VCs have multi-dimensional relationships with senior managers and board members.  In many cases, VCs are providing valuable information or other intangible assistance to these operators – there is an exchange of value.  In most cases, senior salespeople do not have the same type of multi-dimensional relationships and thus have fewer levers when calling in favors.  While a few career VP Sales have strong relationships that translate into sales, I find that those who boast “extensive Rolodexes” very rarely generate results.  Just because a prospect bought something from a VP Sales at her last startup doesn’t mean that the prospect will buy again on relationship alone.  Most VP Sales have one or maybe two relationships that can be counted on to generate sales.  So, while relationships certainly can be helpful, they should not be relied on – your VP Sales has got to have the leadership, energy and aggression required to open up new markets and secure new clients.

Trust and Judgment
This should be of obvious importance, but will your prospects, customers, and salespeople trust and respect this hire?  Will you have faith in his or her judgment and forecasts?  Let scuttlebutt inform your due diligence but trust your gut and ask for at least five references to confirm your impressions.

Career Track
I think the best VP Sales are the ones who aspire to be CEOs.  They tend to have a better appreciation for how the sales function fits into marketing, development, professional services, and support.  The challenge is to identify these folks earlier in their careers and that’s difficult to do.  In many cases, you’ll need to take a risk on someone who may not have “done it before”, but will crush it if you give them a chance.  Finding “stage-specialists”, for example those who have repeatedly taken companies from $3-5 million in revenues to $25 million, is difficult although these pros do exist.  Basically, you do best by hiring someone who has an ambition to achieve something larger (beyond simply a larger paycheck) and this ties back into finding someone who is a missionary and not a mercenary.  This last principle applies across every role in the organization – you want missionaries.

And if You Make a Mistake, Fix it Quickly
Lastly, it’s critical to correct bad decisions quickly.  If you have done everything to make a good hiring choice and your VP Sales is not working out, you need to part ways as quickly as possible.  This is true for any hire, but given the expense and impact of building a sales organization, it is particularly important.  Great sales teams energize and motivate an organization and bad ones poison the pool.


The Data Storage Conundrum & Oscar Wilde

The Economist is a great “newspaper”, my favorite.  A couple weeks ago they did a special report on “The Data Deluge” which explored the recent and rapid expansion of data, and how to handle it.  There were two parts of the report that caught my eye because they seemed contradictory.  The first was an 1894 quote from Oscar Wilde:

It is a very sad thing that nowadays there is so little useless information.

To which The Economist added, “He did not know the half of it”.

The other part was a chart showing the growth of information created rapidly outpacing the growth of available storage.

The question I have is, if there’s “so little useless information”, why doesn’t data storage more closely track data creation?  Wouldn’t one want to store and analyze all of this data if it were truly useful? It’s a pretty obvious but important question because the answers could tell us a lot about what we as a society think is valuable.  So what are some potential answers?

We Don’t Value the Data We Create Enough, So We Don’t Store It
Maybe we already store all the data we deem important enough to save.   Everything else is expendable. It’s not free to store data, so one has to weigh costs and benefits about what is kept and what is not.   It’s possible that this excess data doesn’t have any value, but I doubt it.

We Know the Data We Create is Somehow Valuable, But We Don’t Know How to Make it Valuable
This scenario argues for more statisticians or better tools to extract insights from large data sets.   Most data is unstructured and it takes specific expertise to organize and analyze it.  Generally speaking, it’s big companies that have the in-house skills needed to glean real value from these data.  Smaller- and medium-sized businesses generate plenty of data, too, but they may lack the resources and personnel required to make their data useful and actionable.  When a business decides what data gets stored and what sublimates, it will only spend money storing what is required to run the business and nothing more.
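One crude way to frame that storage cost-benefit decision is as a break-even calculation.  The price per terabyte below is a hypothetical placeholder, not an actual storage quote:

```python
# Back-of-envelope: is a dataset worth keeping? All numbers are
# hypothetical placeholders, not actual storage prices.

COST_PER_TB_YEAR = 150.0  # assumed fully loaded $/TB/year (disks, power, ops)

def worth_storing(size_tb, est_annual_value, replication=2):
    """Keep data only if its estimated annual value covers the cost of
    storing it, including replicated copies."""
    annual_cost = size_tb * replication * COST_PER_TB_YEAR
    return est_annual_value >= annual_cost

print(worth_storing(10, 5000))  # 10 TB mirrored costs $3,000/yr -> True
print(worth_storing(10, 1000))  # estimated value below cost -> False
```

The hard part, of course, is the `est_annual_value` input: a business that cannot estimate what its data is worth will default to storing only what it needs to operate, which is exactly the gap the chart suggests.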

We Throw Out Old Data, So Available Storage Capacity Lags New Data Creation
Perhaps older data is perceived as less valuable (rightly or wrongly) and is discarded as it expires.  This “hole” in the bottom of the proverbial cup would account for the flatter growth in available storage vs. information created.

We Can’t Make Data Storage Capabilities Fast Enough to Store the Data
This would be a great problem for companies such as EMC to have, but it’s just not the case.  It’s becoming less expensive to store more data. Kenneth Cukier points out that companies such as Wal-Mart store more than 2.5 petabytes (the equivalent of 167 times the books in the Library of Congress) of data, fed by more than 1 million customer transactions per hour.  Facebook stores over 40 billion photos.  Guaranteed that Facebook’s “available storage” curve closely hugs its “information created” curve because obviously Facebook sees economic value in storing its users’ data.  It’s a safe bet that Facebook’s “available storage” curve is actually above its “information created” curve since FB probably has at least two mirrors for each piece of data.
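As a quick sanity check on those figures, the Library of Congress comparison implies roughly 15 TB of books, and Facebook’s photo archive can be estimated from an assumed average photo size (the ~100 KB per photo and the two-mirror replication factor are assumptions, not reported numbers):

```python
# Sanity-checking the storage figures cited above (decimal units throughout).

walmart_pb = 2.5
loc_tb = walmart_pb * 1000 / 167   # ~15 TB per "Library of Congress" of books

photos = 40e9                      # Facebook: 40 billion photos
avg_kb = 100                       # assumed average stored size per photo
mirrors = 2                        # assumed replication factor, as argued above
raw_pb = photos * avg_kb / 1e12    # KB -> PB
print(round(loc_tb, 1), round(raw_pb, 1), round(raw_pb * mirrors, 1))
# -> 15.0 4.0 8.0
```

Even with conservative assumptions, a single consumer photo archive lands in the same petabyte range as Wal-Mart’s transaction data, which supports the point that Facebook’s “available storage” curve must track (or exceed) its “information created” curve.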

There’s no doubt that data is becoming a more valuable commodity, even to businesses that have traditionally been less data-intensive than the Facebooks of the world.   The bottom line is that it’s relatively expensive to store data (vs. discard it), so we need to have a good reason to store it.  Perhaps the solution is to create better tools that can make data more useful for people who lack interest or training in statistics and data mining.  This may be another aspect of The Facebook Imperative that Marc Benioff recently wrote about.  Companies such as Oracle, SAS, SAP, Salesforce, and Tibco already offer software tools to help make data more useful, so there’s got to be something else pulling down the growth in data storage.  Maybe there’s just a lack of will to implement and use these tools?  What do you think?


Will Apple Bet the Farm on Quattro Wireless?

Apple‘s recent purchase of mobile ad network Quattro Wireless may signal a far more significant shift in Apple’s business model than it first appears.

So, why did Apple buy Quattro (Apple earlier sought to buy market-leader AdMob, but ceded the purchase to Google once the price became too rich)?  Apple has always focused on providing well-designed, tightly-integrated software and hardware to customers willing to pay a premium for these qualities.  The focus has always been profits over an indeterminate quest for market share and that strategy has proved very durable.  With that in mind, my guess is that Apple will initially use Quattro to better monetize the large number of free apps (perhaps 9:1, free:paid) in the App Store.   While free apps help iPhone/iPodTouch sales by making the devices more useful, Apple’s 30% take on free app sales is still $0.  Beyond iPhone/Touch, being able to monetize content via an ad-supported model will become more important as publishers begin to distribute content on the iPad.  While the iPhone/Touch/Pad SDK enables app developers to charge for incremental purchases within apps, Apple will need an ad platform to satisfy the needs of various publishers, particularly on the iPad.   These are all practical tactics that make a lot of sense in the context of Apple’s strategy to monetize “closed” platforms that benefit from tightly integrated software and hardware.

Apple’s critics have faulted the company for not being more “open” with the iPhone/Touch/iPad OS (“open”, meaning device agnostic, with no app approval process).  Critics say Apple is making the same mistakes today as it did during the OS wars.  The battle then was Apple’s “closed” model that exclusively paired Apple software to Apple hardware, versus Microsoft’s decision to allow its software to run on any hardware device (with certain controls).  We all know the result: Microsoft has something like >85% of the OS market vs. ~10% for Apple.   The argument is that Google’s “open” Android platform will eat Apple’s lunch just as Microsoft did, and Apple will be relegated to distant second place in mobile.

Others argue (using Clayton Christensen’s theory) that Apple does not need to open up, since customers will continue to value higher-performance mobile devices over lower-priced commodity ones for the next decade or so.  It’s hard to argue against this, but it is difficult to time innovation to anticipate customers’ needs, especially when you’re targeting global markets, each with unique demand.  More importantly, the competitive attribute may not be device performance but app costs (i.e. look at the substitutes: free turn-by-turn GPS on Android vs. $59.99 for TomTom‘s US GPS app on iPhone).  Couple this with the fact that the iPhone’s gross margins are decreasing and that iPhones account for more than 30% (!) of Apple’s ’09 Net Sales, and Apple may be in a tighter spot sooner rather than later.

Before Quattro, Apple’s mobile business model could not compete with Google’s ad-based model because Apple’s incentives to sell more iPhones and paid apps simply did not enable it to do so.  Google benefits from an open platform because it gives Google broader reach to sell more ads.  More importantly, as Bill Gurley points out, Android offers a “less than free” business model to carriers that want to license Android.  Carriers that license Android split ad revenues with Google, so instead of carriers paying to license an OS, carriers are getting paid to use Android.  And with traditionally expensive apps such as turn-by-turn navigation becoming free on Android (ad supported), it will be difficult for Apple to continue making money off of app sales commissions.

While Apple has remained “closed”, the Quattro purchase will enable the company to pursue a more open strategy, one in which Apple benefits financially (via advertising) from ubiquity.  A more “open” system would allow the iPhone/Touch/Pad OS to run on non-Apple hardware (managing this ecosystem would be more complex) and enable developers to launch apps more easily.  Of course, this would go entirely against Apple’s long history of tightly integrating its hardware with its software, but Apple has done 180s before, perhaps in a calculated way (Jobs once said something like, “nobody will ever want to watch video on a small screen” before Apple launched the video iPod).  The decision to open up would look highly unlikely in the context of Apple’s recent decision to remove certain adult-themed applications from the App Store.  Nonetheless, while Apple is rightfully focusing on getting its phones onto more carriers worldwide, Quattro Wireless could be the genesis of a more “open” (but bet-the-farm) strategy at Apple.
