Deconstructing The Economist

[Image: Ozzy Osbourne in 2010, via Wikipedia]

The Economist is my favorite newspaper.  It's also one of the few profitable news publishers.  Why is it successful while the rest of the industry is gasping for air?  It's not because they're extremely innovative in how they charge for or deliver content.  They don't have a splashy online presence, although they have slowly waded into crafting online-only content that takes advantage of the web's interactivity.  They don't have a fancy iPhone or iPad app yet.  The simple reason for their success is the content itself.

There is no close substitute for The Economist's product, and so people are willing to pay for it.  They cover business and political news with a breadth and depth that is unequaled, and because they are a weekly, they can chew a bit more on their stories before publishing.  Most importantly, they are transparent about their views (socially liberal, fiscally conservative).  They take a side on every issue but deliver arguments in an even-handed way, enabling the reader to agree or disagree with the facts and data presented.  They don't even publish bylines, making it all about what is written and not who has written it.  Since I like their style, and would love to read more news written this way, I thought I'd briefly deconstruct a typical article to see how their editors tick.

Each article's headline is either brief and opaque (e.g. "Race to the bottom" or "An empire built on sand") or a non sequitur (e.g. "No, these are special puppies" or "Cue the fish").  Sometimes the headlines read like the Old Spice guy wrote them ("I'm on a horse"), but that's part of dressing up what could otherwise be dull subject matter.  The non sequiturs force you to read on just so you can understand what they're talking about.

The subheader is all business.  There's no fat; it reads like a well-crafted tweet.  For an article on genetic testing, one reads, "The personal genetic-testing industry is under fire, but happier days lie ahead".  The subheader is usually one or two sentences and plainly states the paper's view on the issue ("Google has joined Verizon in lobbying to erode net neutrality").  I really like this method because at bottom all journalists have an opinion, so why not be transparent and lay it bare?  Laying out the conclusion in a couple sentences upfront also lets the reader take something away even if she's just paging through.

Many times, the lede introduces the article with a story or quote.  The article about personal genetic testing opens with a quote from Ozzy Osbourne: "By all accounts, I'm a medical miracle".  It gets the reader interested and humanizes what would otherwise be a purely technical subject.

The next paragraph or two make a strong argument in support of the subheader.  In the genetic testing article, the author writes that Osbourne "is not alone in wondering what mysteries genetic testing might unlock".  It goes on to say that Google has invested in a genetic testing company and that Warren Buffett himself has been tested.  It discusses the promise of people understanding their risk of getting a disease and how that information holds great promise for treatment.  At this point, the reader may be wondering how anyone could oppose genetic testing.

Not so fast.  In typical Economist fashion, the next paragraph eviscerates the thesis of the article.  While the subheader supported genetic testing, the next paragraph cites a damning report from a credible source that found many test results were "misleading and of little or no practical use to consumers".  Game over.  At this point, the reader is scratching her head thinking, "I thought testing was supposed to be a good thing?!"  By so willingly presenting a counterargument, The Economist strengthens the credibility of the view it supports in the subheader.  It's an effective writing tactic, and their use of it has made me a more critical reader, forcing me to always search for the other side of the story.

The bulk of the article then presents a series of facts from either side, filtering each one through the views presented in the subheader.  The facts are laid out, but placed in context.  The reader can decide which facts are convincing and which might be discarded.

The final paragraph is almost never definitive.  It summarizes the main arguments and concludes with an open question of whether or not The Economist's view will come to pass.  It's not that they waffle; they're just practical.  And they don't play the prediction games that all the cable news talking heads play.  I think it's a refreshing approach that lets readers decide on the future for themselves.


The Data Storage Conundrum & Oscar Wilde

The Economist is a great “newspaper”, my favorite.  A couple weeks ago they did a special report on “The Data Deluge” which explored the recent and rapid expansion of data, and how to handle it.  There were two parts of the report that caught my eye because they seemed contradictory.  The first was an 1894 quote from Oscar Wilde:

It is a very sad thing that nowadays there is so little useless information.

To which The Economist added, “He did not know the half of it”.

The other part was a chart comparing the amount of information created each year with the available storage capacity.

The question I have is: if there's "so little useless information", why doesn't data storage more closely track data creation?  Wouldn't one want to store and analyze all of this data if it were truly useful?  It's an obvious but important question, because the answers could tell us a lot about what we as a society think is valuable.  So what are some potential answers?

We Don’t Value the Data We Create Enough, So We Don’t Store It
Maybe we already store all the data we deem important enough to save.  Everything else is expendable.  It's not free to store data, so one has to weigh the costs and benefits of what is kept and what is not.  It's possible that this excess data doesn't have any value, but I doubt it.

We Know the Data We Create is Somehow Valuable, But We Don’t Know How to Make it Valuable
This scenario argues for more statisticians, or better tools to extract insights from large data sets.  Most data is unstructured, and it takes specific expertise to organize and analyze it.  Generally speaking, it's big companies that have the in-house skills needed to glean real value from these data.  Small- and medium-sized businesses generate plenty of data, too, but they may lack the resources and personnel required to make their data useful and actionable.  When a business decides what data gets stored and what gets discarded, it will only spend money storing what is required to run the business and nothing more.

We Throw Out Old Data, So Available Storage Capacity Lags New Data Creation
Perhaps older data is perceived as less valuable (rightly or wrongly) and is discarded as it expires.  This “hole” in the bottom of the proverbial cup would account for the flatter growth in available storage vs. information created.

We Can't Build Data Storage Capacity Fast Enough to Store the Data
This would be a great problem for companies such as EMC to have, but it's just not the case.  It's becoming less expensive to store more data.  Kenneth Cukier points out that companies such as Wal-Mart store more than 2.5 petabytes of data (the equivalent of 167 times the books in the Library of Congress), fed by more than 1 million customer transactions per hour.  Facebook stores over 40 billion photos.  It's a safe bet that Facebook's "available storage" curve actually sits above its "information created" curve, since Facebook clearly sees economic value in storing its users' data and probably keeps at least two mirrored copies of each piece.

There's no doubt that data is becoming a more valuable commodity, even to businesses that have traditionally been less data-intensive than the Facebooks of the world.  The bottom line is that it's relatively expensive to store data (vs. discarding it), so we need a good reason to keep it.  Perhaps the solution is to create better tools that make data more useful to people who lack interest or training in statistics and data mining.  This may be another aspect of The Facebook Imperative that Marc Benioff recently wrote about.  Companies such as Oracle, SAS, SAP, Salesforce, and Tibco already offer software tools to help make data more useful, so there's got to be something else pulling down the growth in data storage.  Maybe there's just a lack of will to implement and use these tools?  What do you think?
