Why I Dumped my iPhone for Google Android


I bought an iPhone as soon as Steve Jobs started selling them in June 2007. It was obviously the phone of the future. I sent my first one back because it struggled to manage email or calendars hosted by Exchange -- something my Blackberry did effortlessly. Within three months, however, the iPhone was ready for prime time and I was hooked. I have owned nearly every iPhone since and naturally grew to regard Android phones as inferior wannabes.

The iPhone is the single most revolutionary piece of technology that I have ever owned. My 6+ was my constant companion: beautiful, fast, and long-lasting. Over the years iPhones have improved and I have grown accustomed to their idiosyncrasies. Until recently, I thought it was a stable marriage.

So it surprised me a bit that yesterday, I broke things off. A few days with Google's new Nexus 6P made me realize how much I was missing. The Nexus 6P has fewer irritating habits and is noticeably smarter and more considerate than my iPhone ever was. It may not strut a polished Apple body, but the camera, screen, and battery are solid and practical. The hardware has no shortcomings that I can find. With the Nexus 6P, Android has surpassed the iPhone because Google's latest mobile OS, dubbed Marshmallow, is the smarter operating system. So I lobotomized my trusty companion and listed it for sale on eBay.

In many respects there isn't much to choose between them. Apple and Google smartphones are similar: both sport outstanding cameras. Both take full advantage of advanced battery-extension technology, predictive features (including suggesting the apps you are likely to use next), fast and accurate fingerprint readers, and an extensive permissions architecture. The Nexus uses USB Type-C rapid-charging technology, but Apple is introducing it as well. Apple has Force Touch, which Google or others might copy if it turns out to matter. The companies copy each other's UI swipes, reveals, and taps; at the moment, Android's UI is newer and quite nice. For example, you can shut the phone ringer off for two hours while you watch a movie and it will come back on by itself (I could never do that on an iPhone, but Apple could decide to fix that tomorrow). Location-based security means that if you are home, the device stops nagging you for PINs. Again, replicable by Apple. The Android OLED screen is more pixel-dense and works outdoors -- but you have to master a few tricks to keep the battery from draining too quickly.

Other differences reflect Apple's control preferences, some related to security. Apple severely limits the ability of third-party apps to access other apps. Android doesn't, so apps like Dashlane or LastPass can unlock applications as well as websites -- a huge advantage if you hate logging in all the time. Similarly, app linking means that if you click on a New York Times link, you go to the app, not to a web page that requires a login. Android has always had a user-accessible file system, which makes some operations vastly easier. Apple relies on iTunes for access to media files -- and Apple has tortured and twisted iTunes for so many years that it now leaves all but the most rabid fanboys cursing their screens.

More notoriously, Apple does not let you control your default apps, so you either jailbreak your phone or live with Apple's choices. On an iPhone if you click on a date, iOS takes you to Apple Calendar even if you dislike the app. Click a link and hello Safari -- even if Firefox is your browser of choice. Click an address, it's Apple Maps, which may not be as horrible as it used to be, but is still not nearly as complete as Google Maps or Waze. If you want to play a game, you are part of Game Center, like it or not. Apple is very unlikely to fix these things because of their commitment to a controlled user experience that drives you to their apps. Android tries to accomplish the same thing by making the apps work together so well that you don't want to leave the Google ecosystem -- but you can leave any app if you want to. 

In Nudge, a classic book on choice architecture, Richard Thaler and Cass Sunstein make the case for "libertarian paternalism" -- a philosophy of nudging users towards healthy choices while permitting any choice that does not harm others. Set the fresh fruit at eye level, but make the chocolate pie available for those willing to reach for it. Apple ignores the libertarian part of this principle -- it is all paternalism. You cannot even delete Apple's native applications from your device (something Apple seems likely to change, if only because there are now so many native apps). Marshmallow is just as paternalistic -- but at the end of the day, you can gorge on sweets if you are determined to do so.

This is not a small matter, as is obvious the minute you try Google Now on Tap. GNT is TNT -- a Google technology so powerful that the Wall Street Journal called it Google's "nuclear weapon" in its fight with Apple. Google Now on Tap leverages Google's massive advantage over Apple in data and predictive technologies. It means that in almost any situation, Android gets you more useful data faster and more accurately. As the Journal concluded, "it’s hard to imagine how Apple could match it."

The results can be shocking. Google tries to figure out what you need to know or do next: create appointments, reserve a restaurant table (if eating out was mentioned in the email), or find an example of an artist's work if one was mentioned. It is highly addictive. Reading about Donald Trump? Tap the home key and Google offers up his Twitter account, details on his kids, or forthcoming campaign appearances in your area. (Not saying that you were following the Donald -- but you get the idea.) Google Now on Tap also works in many third-party apps.

Can Apple match this? Nope -- they deliberately don't gather much data about you, so they don't know you very well. This badly limits Apple's capabilities. Google, on the other hand, gathers everything and knows you incredibly well. Apple knows this and has suddenly embraced privacy and ad-blocking -- turning a competitive weakness into a public virtue. (For the record, I commend their efforts -- and I am happy with the vastly more extensive privacy controls that Android offers.)

But iOS users can simply download Google apps -- Maps, Mail, Calendar, Photos, and Google Search (including Now) -- and get the same advantage, right? Not really. When the operating system can integrate these apps and apply predictive analytics across them, magic happens. This is one reason why Google's version of Siri (which needs a better name than "OK Google") is so vastly smarter and more accurate than Siri: Google simply knows you better. The second reason is that the OS understands language better, mainly because for the past decade, Google has hired more PhD linguists than any organization on the planet.

Will I miss Apple? A bit. I was sad to imagine saying goodbye to Apple Music, which for my money is simply brilliant. So I was stunned when Apple announced that it has made Apple Music available on Android. (In fairness, Google Play Music also seems quite good -- I have not tested it and now don't plan to.)

In short, I dumped my iPhone for the same reason guys dump and get dumped every day. I discovered a solution that is both smarter and less irritating. If you put both phones side by side, I am confident that you would make the same decision.


Republicans Have No Head, but Democrats Have No Body.


Last night's elections make clear that the ongoing comedy of a hapless GOP struggling to find a suitable House Speaker or Presidential candidate is a sideshow that continues to lull Democrats into complacency. As Obama takes his victory lap and Hillary does her on-deck stretches, Democrats would do well to consider:

So what? Well, as Matt Yglesias notes in a widely discussed post, elections have consequences. Which include restricting the rights of Democratic supporters. Voting rights face cutbacks in many states, unions face right-to-work laws in half of all states, and the right to abortion is under attack. As Sara Kliff has documented: "States passed a record 205 abortion restrictions between 2011 and 2013, more than the entire 30 years prior. As a result... 87 separate locations ceased to perform surgical abortions in 2013. These changes are a clear result of pro-life mobilization in the Obama era."

Well again, so what? The fortunes of our two political parties shift like tides. Political scientists have even called the pattern "thermostatic": the electorate acts like a political thermostat, pulling left when conservatives are in power and right when liberals rule. Recent evidence for this view suggests that the biggest changes come in "wave elections". But we see no signs of a Hillary wave.

Some (including me) worry that because state legislatures draw most legislative boundaries, conservatives will lock in permanent majorities by the dark art of gerrymandering. My argument -- that the ability of voters to choose their leaders is fundamentally compromised if leaders can first choose their voters -- suffers from one distressing flaw: a lack of evidence. The optics of gerrymandering appear worse than the results (which is not a defense of the practice). Redistricting does not, for example, explain how Democrats can win a majority of votes but a minority of seats -- as has happened in recent elections. Many political scientists estimate that it produced at most ten Congressional district victories in 2012 -- not a powerful explanation for the GOP's 33-seat majority in the House.

What is to be done? Revitalizing the Democratic Party at the state and local level is challenging for several reasons (Tom Schaller makes the strong form of this argument here, although I don't find most of it convincing). 

Above all, Democrats should avoid focusing on the comedy that is the GOP leadership struggles or the drama of the Hillary and Bernie show. When the house fills with smoke, it is time to look up from the TV. 


Subsidize Motherhood, not Apple Pie


Last night's Democratic candidate debate was more elevated but less entertaining than the GOP debates. Everyone loves a clown and, although he gave it his best shot, Lincoln Chafee is just not in Donald Trump's league as a buffoon. 

The question of paid family leave got a moment in the sun, with Bernie Sanders citing Denmark as his model and Hillary, ever the smart kid who does her debate prep, insisting that "we are not Denmark" before proceeding to recall her trials as the working mother of a sick infant and endorse everything Denmark does. With apple pie no longer debatable due to its high sugar content, this raises the question of whether motherhood should be a political issue for 2016.

[Chart: paid parental leave entitlements by country]

It should -- because we need more mothers, and raising babies ain't ping pong, as anyone who has tried knows. Since babies are a pure positive externality, and apple pie promotes diabetes, let's cut the sugar lobby out and subsidize mothers, not apple pie.

In 1993, during the reign of Clinton I, I worked in the Labor Department as we fought for and eventually passed the Family and Medical Leave Act (FMLA). The FMLA guarantees at least 12 weeks maternity leave to new mothers in companies with 50 or more employees. Half of all states have supplemented the FMLA by lowering the firm-size threshold to as few as 10 employees (14 states) or allowing longer absences (7 states). 

This is protected leave -- time for which an employer must accommodate unpaid absence. California, New Jersey and Washington have enacted paid leave programs. Three other states and the District of Columbia guarantee mothers paid maternity leave through disability insurance provisions.

Compared with other modern countries, the US is pretty hostile to mothers, as Bernie Sanders pointed out. The rest of the world welcomes new babies by giving their mothers some paid time away from work. Only the US doesn't.

Conservative economists have two worries about family leave programs -- and one looks valid. Notice that the graph segments into countries that allow about a year off and those that allow two or more years. Most of the countries allowing more than 150 weeks of leave suffer from very low female labor participation rates. Women, like anyone else, will choose not to work if you pay them enough.

Some conservatives also worry that paid leave somehow undermines family formation. It's an odd argument, since most conservative economists think that incentives matter and it is not clear how a public incentive to form a family would end up wrecking families -- as our marriage tax deduction recognizes. Academic research into welfare support for mothers concludes pretty forcefully that it improves family formation by increasing both marriage and fertility, which are both declining in the US. Parental leave is also associated with increased divorce, although one can question whether public policy should attempt to hold a marriage together so weak that it dissolves in the face of maternal leave. 

Bottom line: Americans can and should do more to support mothers. Where should our policy fit on the above chart? I'd say right around Denmark.  


Microsoft: Signs of Life. Google Chrome: Under Attack.


Two decades ago, Steve Ballmer proudly showed his mother a copy of Windows 95, the new operating system that featured a "Start" button so prominent that Microsoft bought the Rolling Stones' "Start Me Up" for the launch. When Ballmer's mother asked him how to shut the program down, however, Ballmer could have sung "you make a grown man cry" as he confessed that to stop Windows, you press the Start button.

Microsoft has never understood the deep loathing it inspires in those forced to use its products. Its software is frequently ugly -- as Steve Jobs memorably put it, "not in a shallow way, but in a deep way". It is badly architected, contains embarrassing UI goofs, and assumes that users will adapt to it -- which we of course do, by the tens of millions.

Silicon Valley has gone from fearing Microsoft as a monopolist to ignoring it as a zombie. Microsoft famously was so preoccupied with its Windows monopoly that it missed the rise of the Internet. It then missed mobile. Ballmer, suddenly awakened, frantically purchased Nokia, evidently on the theory that two stones sink slower than one. He launched Windows Phone in the charming hope that what iOS and Android developers, handset makers, and users really needed was a third mobile platform. With Bill Gates focused on philanthropy ever since the stock peaked and Ballmer treated as the clown prince of software, many technologists forgot about Microsoft for the last decade.

Which is why recent mobile software from Microsoft is so interesting. Although its new CEO cannot revive a dead man to the extent that "Start Me Up" suggests, Microsoft is nonetheless showing signs of life. The iOS version of Office is not only excellent, it is free. Microsoft will require a subscription only for devices with screens over 10 inches, so laptops and pro tablets will pay, but phones and normal tablets are free. And since a five-person family license is only $9.99/month, Microsoft is setting prices at much more realistic levels.

On iOS, Office is now easier to use and better integrated with cloud storage than Apple's own Pages, Keynote, and Numbers. It has replaced the native Apple applications on my phone and tablet. Outlook, which releases a trial 2016 version for OS X and Windows tomorrow, will hopefully bring some of its mobile innovations to the desktop.


As Microsoft comes to life, Google is suddenly facing a real threat. Chrome, Google's browser, owns about two-thirds of all desktop traffic because it is free, stable, and fast -- and because Internet Explorer, which Microsoft gives away, was for many years plain goofy.

But desktop browsing is yesterday's market; today's game is mobile. We are approaching a time when every human on the planet will have a smartphone and billions of dollars will be made and lost in services and ads. Which is why a small feature of iOS 9, support for robust ad blockers, is a big deal. In my experience, Safari with ad-blocking enabled transforms the mobile web from endlessly irritating to quite usable. 

Apple did not enable ad blocking simply because they thought consumers would like it better -- although we do. They did it to stick a knife in the ribs of Google's only real source of revenue. They appreciate that Google will not happily enable ad blockers in Chrome. Indeed, Google has responded by disabling popular extensions that block ads, planning new subscription services for YouTube, and launching Google Contributor, which lets users pay Google $2, $5, or $10 monthly depending on how much ad removal they want.

Will this work? Will anyone use Chrome if it cannot block ads? Will Apple enable deep ad-blocking in Safari and host free user-generated videos to put a bullet in YouTube? Yeah, they will. I doubt that Apple will kill Google -- but I have no doubt that they will try and I have serious doubts that this week's campaign of moral pleading launched by Google will do anything but make them look pathetic.

A few years ago, Oracle CEO Larry Ellison called Google a "one trick pony", adding "but it's a hell of a trick". With Apple now exposing Google to a bit of full-contact capitalism, we will soon know how many new tricks Google can learn. 


Can Your City Be the Next Silicon Valley?


Cities around the world have declared their intent to become "the next Silicon Valley". New York's Silicon Alley, Austin's Silicon Hills, Portland's Silicon Forest, London's Silicon Roundabout, New Zealand's Silicon Welly, Louisiana's Silicon Bayou, Israel's Silicon Wadi, Scotland's Silicon Glen, and Kenya's Silicon Savannah testify to the power of this idea. Promoters have even resorted to puns or worse, e.g., Santiago's Chilecon Valley, Philadelphia's Philicon Valley, and (you cannot make this up) Cape Cod's Silicon Sandbar.

But why not? After all, every city knows the key ingredients. Why shouldn't an ambitious town simply round up a bunch of entrepreneurs and venture capitalists, stir in some startup lawyers, accountants, and angel investors, recruit a bunch of engineers who want lower cost housing, and build ties with the local university? How hard can it actually be?

The fundamental confusion is between emergent systems that are organic, unplanned, and uncontrolled and engineered systems that are linear and guided. In their book Competing on the Edge: Strategy as Structured Chaos, Shona Brown and Kathleen Eisenhardt offer a useful metaphor: rebuilding a prairie. We know all of the ingredients of a prairie. We understand precisely the dozens of plant and animal species that comprise the ecosystem that once stretched from the Rockies to the Mississippi. They point out however, that even with perfect knowledge, if you were to acquire land near O'Hare airport, prepare the ground, and introduce the appropriate plants and animals, what you end up with would not be a prairie. Indeed, it would potentially be nothing like a prairie. (And yes, we have a Silicon Prairie, somewhere in Nebraska I think).

It turns out to be very difficult to re-create an ecosystem, even when we know all the ingredients. To start with, emergent systems are grown, not assembled. And they are not grown from scratch. The actual starting point matters because emergent systems are highly path dependent: past choices shape and constrain future ones. That means that simply introducing seeds and prairie dogs into an acre of land is more likely to result in a patch of weeds than "amber waves of grain".

Worse, we usually don't quite know all of the ingredients of most organic systems. Some are highly contextual (meaning your required ingredients and mine may vary) and some are contingent (they only work some of the time, mainly because our understanding of them is imperfect). It also matters in what sequence you introduce the ingredients -- much like a soufflé that collapses unless the beaten egg whites are added last.

Technology regions and prairies are two examples of complex, emergent systems. There are many others, including companies and markets as well as governments and polities. As they grow, these systems increase exponentially both the number of their components and the number of interactions between those components. They become more complex, organic, and self-organizing -- which means you cannot predict how these systems will evolve, much less reproduce this evolution once it happens.

Can systems like this be led? They can be guided successfully only by leaders with a deep appreciation of unintended consequences. In any emergent system, the second and third order consequences of any decision are likely to overwhelm the intended first order effects. You can even look at your life as an emergent system, which is why, as Steve Jobs famously noted, you can connect the dots into a coherent post hoc narrative looking backwards, but you cannot "connect the dots going forwards", i.e., predict anything very meaningful about your life long before it happens.

Douglass North is a wonderful economist who understands the organic and emergent nature of economic systems better than most of his fellow practitioners. He shared the Nobel Prize in Economics in 1993 for documenting the confounding role that institutions, culture, and history play in economic outcomes. Shortly after receiving this award, he was asked whether, since institutions matter so much, he had any advice for Russia.

He thought for a moment and replied: "Get a new history". That is one starting point for any city or region looking to start the next Silicon Valley. The other is provided by Emily Dickinson, who gave a lot of thought to how "To make a prairie". Her famous verse is worth contemplation by the ambitious bees of the world:

To make a prairie it takes a clover and one bee,
One clover, and a bee.
And revery.
The revery alone will do,
If bees are few.


Is Detroit Moving to Silicon Valley?


As the value of an automobile moves to its electronics and cars become networked, self-driving computers, will the car industry move to the west coast? Consider:

Benedict Evans at Andreessen Horowitz has given some thought to how cars are evolving. He makes several useful points:

  1. The current focus is on building an autonomous car that can drive down the street and not hit anything. This is the Google skill — and maps are critically important. Maps, he points out, have moved from discretionary accessory to vital necessity. 
  2. As autonomous cars grow, they are likely to be deployed as a service by companies that can optimize a fleet of autonomous on-demand cars in a city on a real-time basis. This takes NetJet type optimization skills and seems well-suited to Uber or Google. It also radically reduces the number of cars (and parking spaces) the world will need. The industry will shrink.
  3. The buyer, and thus the car, is likely to change — and this will change the cars. Cars will become simpler, partly because they are increasingly electronic. At first they will still have a steering wheel. Eventually, however, they may not have brake pedals or windshield wipers. And if you are summoning a fleet car, cars will be purchased not for Tesla-elegant styling but by corporate fleet managers, much like PCs are today.

Even high tech cars will remain a large business. Indeed, that’s what attracts Apple and Google. There are simply not many industries left to revolutionize that can move the dial on companies with revenue measured in the hundreds of billions of dollars. Evans’ chart showing the place of the $1.2 trillion global car industry in the minds of technology CEOs is instructive. 

[Chart: Benedict Evans on the global car market]

As Evans notes however, this is today’s market — the one that looks to get smaller.

“Some analysts are talking about unit sales halving over time (with growing demand from China and other newer markets offsetting new technology). Meanwhile, moving to electric can reduce the price of a car, or of course (Apple’s preferred option) expand margins.”

He notes that even if Apple goes after the premium car segment, as seems likely,

“the bubble on the chart above shows Mercedes-Benz, BMW, Audi and Lexus, which combined sell 5-6m cars a year for $220bn in revenues (and so averaging $40,000 per car). That’s where Tesla is aiming now, and where one might expect autonomous cars to arrive first. For comparison, iPhone revenue in the last 12 months was $146.5bn….To look at that another way, if Apple created a car business as big as BMW and Mercedes combined, that business would generate less profit than the iPhone.” (my emphasis)

In short, yes — Silicon Valley will be the next Detroit. Seat belts advised — this will be a wild ride. 


Top Ten Books of 2014

My reading for 2014 was more focused for the same reason my blogging has been reduced -- I have a day job. Also, two of the books were real projects, entailing runs to the sources, blogosphere, and Wikipedia for background, context, and understanding. These books were demanding (long and in places technical) and singular -- representing a new peak for their genre. I enjoyed both Thomas Piketty's 696-page Capital in the Twenty-First Century and Andrew Roberts's 976-page Napoleon: A Life enough to work through them both.

Piketty is of course the French economist who appears to be famous everywhere except France (a problem he corrected this morning by refusing the Légion d'honneur, declaring that "it is not the job of government to determine who has honor". Spoken like a French historian). Piketty has wrestled more deeply and seriously with the problem of inequality than any previous economic historian. He has actually produced three books in one: an amazing history of inequality based on European census data rarely deployed for these purposes; an economic analysis of the causes of inequality; and a set of policy prescriptions. He argues that if r, the rate of return on assets, exceeds g, the overall rate of economic growth, family wealth will grow faster than the economy and can become gigantic. It is a widely derided formulation, but well argued and defended. The third book -- his recommendations for taxing wealth -- is largely laughable, as he all but concedes.

Roberts's book doubles as a history of 19th-century France and, for that matter, Europe. Napoleon led not only a remarkable and consequential life, but one that shaped a great deal of modern Europe. He wrote some 30,000 letters during his lifetime, and Roberts is the first scholar to have access to the full pile, lucky guy. The book is compelling and very well-written. If anything, it needed to be longer.
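The mechanics behind r > g are compact enough to sketch in a few lines. As a stylized illustration (my arithmetic, not Piketty's full model), suppose a family fortune W compounds at the return r while national income Y grows at g:

```latex
% Stylized r > g dynamics (illustrative sketch, not Piketty's full model)
% W_t: family wealth at time t, Y_t: national income at time t
\[
  \frac{W_t}{Y_t} \;=\; \frac{W_0\, e^{r t}}{Y_0\, e^{g t}}
                  \;=\; \frac{W_0}{Y_0}\, e^{(r - g)\, t}
\]
% Example: with r = 5\% and g = 1.5\%, over 50 years the ratio grows by
% a factor of e^{0.035 \times 50} = e^{1.75} \approx 5.75.
```

With a 3.5-point gap, the fortune's weight in the economy grows nearly sixfold in fifty years without any new saving -- which is why a small, persistent gap between r and g can, on this logic, produce the gigantic family wealth Piketty describes.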
Both books are worthwhile investments, even if both will occasionally leave you screaming -- Piketty for what he fails to understand about business and technology, Roberts because his subject can be so simultaneously brilliant and boneheaded. (Moscow? Really?)

I read from several other categories: books on startups, business and economic history, math and science books, and pulp detective fiction, especially noir. Here are the winners:

Startups

Entrepreneurship is in the air -- often comically. Books that fanned the startup flames this year included Brad Feld's Venture Deals: Be Smarter Than Your Lawyer and Venture Capitalist, the remarkable Ben Horowitz's The Hard Thing About Hard Things: Building a Business When There Are No Easy Answers, and Sean Ellis' Startup Growth Engines: Case Studies of How Today's Most Successful Startups Unlock Extraordinary Growth. All three of these writers are deeply experienced at the trials of early stage companies and all have useful lessons to teach. Feld offers a field guide to the legal aspects of starting a company, to which I would say simply, read his book, then hire a competent lawyer. Horowitz writes excellent columns, which are less compelling when assembled into a book. Sean Ellis has put together a useful set of lessons around specific startups he has worked with.

The top startup book of 2014 was Ash Maurya's Running Lean: Iterate from Plan A to a Plan That Works. This, finally, is the book that Steve Blank should have written in his groundbreaking The Four Steps to the Epiphany -- which instead emerged as one of the worst expressions of great thinking ever put to print. In most cases bad writing is the soft underbelly of bad thinking -- but Blank proves that sometimes you just need an editor. Eric Ries, who has built a franchise around "Lean Startups" by rewriting Blank and adding ideas like the minimum viable product, does not come close to producing the how-to manual for a startup team that Ash Maurya has written.
This really is the go-to book for early stage technology entrepreneurs. You won't get much practical advice from Zero to One: Notes on Startups, or How to Build the Future by the estimable Peter Thiel. Parts of this book will delight you, and some of it should outrage you. The book derives from Thiel's widely followed Stanford course, which was capably blogged by his coauthor, Blake Masters. It has moments of brilliance, however -- notably his description of how companies actually compete and his relentless search for companies that go from zero to one (bringing something altogether new into the world) as opposed to from one to many (scaling up derivative businesses). Marc Andreessen, who needs to write a book and is smart enough to write a very good one, likes to say that Peter Thiel is always half right. Seems correct -- and the half that is right is also highly (Zero to One) original.

Business History

Nominations include the aforementioned Dr. Piketty, Anita Raghavan's The Billionaire's Apprentice: The Rise of the Indian-American Elite and the Fall of the Galleon Hedge Fund, Bryce Hoffman's well-told American Icon: Alan Mulally and the Fight to Save Ford Motor Company, John Brooks's Business Adventures: Twelve Classic Tales from the World of Wall Street, Martin Wolf's The Shifts and the Shocks: What We've Learned -- and Have Still to Learn -- from the Financial Crisis, Robert Litan's Trillion Dollar Economists: How Economists and Their Ideas Have Transformed Business, William Rosen's The Most Powerful Idea in the World: A Story of Steam, Industry, and Invention, and Vaclav Smil's Making the Modern World: Materials and Dematerialization. Raghavan, Hoffman, and Brooks give us well-told tales. The disgusting and criminal behavior of my former McKinsey colleagues, the turnaround of Ford Motor, and the Bill Gates-endorsed collection of New Yorker business essays all make for great and educational reading.
Martin Wolf and Robert Litan wrote exceptional books for those interested in financial economics and another post mortem on recent financial crises -- although I had my fill of those in 2012-13. And I have an obvious soft spot for the sort of well-researched, highly readable economic and technology histories that Rosen and Smil have written.

In a competitive year, I pick Marc Levinson's The Great A&P and the Struggle for Small Business in America, Brad Stone's The Everything Store: Jeff Bezos and the Age of Amazon, and Steven Johnson's How We Got to Now: Six Innovations That Made the Modern World as the year's best books of business history. Levinson's book does not quite rise to the level of The Box, but is nonetheless a really well-told tale of retail innovation -- a story that Brad Stone picks up with the history of Amazon. Based on my own front row seat at some of the events he describes, Stone gets a lot of things right about Amazon's culture and history. It is an amazing company, even if not always an attractive one. Johnson exceeded my low expectations for a book made into a PBS show and structured around seemingly random innovations. But the book works and brings with it more delightful insights per page than any in recent memory. You cannot know in advance how the sack of Constantinople will lead directly to telescopes, but Johnson traces the path with confidence without inferring causality where none exists. Great read.

Math and Science

This year's contenders are Alan Lightman's The Accidental Universe: The World You Thought You Knew, John Brockman's Thinking: The New Science of Decision-Making, Jordan Ellenberg's How Not to Be Wrong: The Power of Mathematical Thinking, and Joshua D. Angrist's Mastering 'Metrics: The Path from Cause to Effect. Lightman is an unusual writer, as the first professor to receive appointments from MIT in both the sciences (he is a physicist) and the humanities.
He offers looks at the universe from several perspectives -- not all equally successful. His opening chapter, entitled "The Accidental Universe," is the strongest and by itself a remarkable read. Brockman is closely associated with the Edge, a foundation that brings together thinkers from a wide range of disciplines. His book touches on developments in neuroscience, decision theory, linguistics, problem solving, and more, but consists mainly of unedited transcripts of informal discussions or presentations, presumably at Edge conferences he has hosted. This gives the book a stream-of-consciousness feel, and leaves it vulnerable to rambling, repetition, and superficiality. Ellenberg writes well and not especially technically. Fine book overall but, like Lightman's, the first chapter (on lessons from Abraham Wald on the value of constantly looking for reasons why you could be wrong) has the strongest material. Angrist's book on econometrics was wonderfully organized and well written -- but the higher math got away from me. I liked it even if I did not understand it all. Best book in this category goes to Nate Silver for The Signal and the Noise: Why So Many Predictions Fail -- but Some Don't. The book is not good because Silver famously called the Presidential election correctly -- it is a genuinely good summary of the science of prediction, which is a vital part of science and business, not just politics. But prediction is difficult, "especially concerning the future," as Niels Bohr famously noted. We fall prey to cognitive biases and often mistake noise for signal. Silver does an excellent job of walking a general reader through the swamp.

Social Criticism

I am a sucker for books on causes or those written to expound a strong point of view.
This year's pile included Adam Minter's Junkyard Planet: Travels in the Billion-Dollar Trash Trade; two books on the economics of higher education -- Elizabeth Armstrong's Paying for the Party and Joel Best's The Student Loan Mess: How Good Intentions Created a Trillion Dollar Problem; Megan McArdle's The Up Side of Down: Why Failing Well Is the Key to Success; Michael Pollan's Second Nature: A Gardener's Education; Jonathan Safran Foer's Eating Animals; and Steve Levitt's Think Like a Freak: The Authors of Freakonomics Offer to Retrain Your Brain. Minter is a great writer, as any New Yorker reader knows. Trash matters -- but ultimately not enough to keep me interested. The higher ed books both fail to segment the problem. Some higher education is really valuable and essentially self-financing; much is not. Average data tells you little, except that the debt problem is unsustainably large. McArdle knows her Dylan: she knows there is no success like failure, but that failure is no success at all. She fails to deliver a book's worth of insights -- although she is a fine writer and terrific blogger. Pollan's book is great, period. Safran Foer somewhat crudely attempts to moralize factory farming -- a topic that others, notably Pollan, have addressed far more effectively as an omnivore's dilemma instead of a vegetarian manifesto. Levitt's book is Freakonomics II -- good, clean behaviorist fun, just like the first one.

The winners in this category are James Fallows's China Airborne and Tyler Cowen's Average Is Over: Powering America Beyond the Age of the Great Stagnation. Fallows is a great essayist and social thinker -- you can learn from almost anything he writes, and when he combines his love of China with his love of airplanes, run, don't walk to read this book.
Tyler Cowen, whose Marginal Revolution blog is indispensable to economic thinkers, has exposed the forces that underlie as much of the inequality problem as r>g -- that in many fields, a huge share of the income goes to the top talent, and that technology appears to make this problem worse, not better.

Pulp Fiction

Now to the fun stuff. I binge read police procedurals and detective fiction, especially noir. In earlier years, I pigged out on the complete Raymond Chandler, Lee Child, and Michael Connelly. This year I happily read the complete Barry Eisler, whose ten or so John Rain novels, mostly set in Tokyo, are terrific escapism and helpfully move noir fiction out of Los Angeles. Rain is an attractive character (evidently to be played by Keanu Reeves in forthcoming movies) who combines the mandatory brooding nature and love of scotch, jazz, and beautiful women with a wonderful introspection and deftness at the killer's craft. Eisler is, of all things, an accomplished Silicon Valley attorney with an obvious love of Japan and the genre. Highly recommended. (Note that Eisler decided, unhelpfully, to rename all of his novels, so they all have new pub dates and you will need to search a bit to figure out the sequence, which matters. Pro tip to Barry: next time, number the titles.)


On a City Bike, all Traffic Lights are Yellow

In both the US and Europe, the use of bicycles in cities has shot up. According to the League of American Bicyclists (which endorses none of what follows), bike use has gone up 39 percent nationally since 2001. In the seventy largest US cities, commuter bike use is up 63 percent. Leading the pack is San Francisco, where I bike to work most days; Chicago, New York, and Washington have also seen huge increases. European cities, which generally had a head start, have also seen an increase in bike commuting. The growing ubiquity of City Bikes (public rental bikes designed for short urban commutes; in London, "Boris Bikes," after the mayor who sponsored them) has accelerated this trend.

Cycling is safe. Mile for mile, your odds of dying while walking or cycling are essentially the same. Surprisingly, even with more cyclists on the road, fewer cyclists are getting killed by cars. From 1995 to 1997, an average of 804 cyclists in the United States died every year in motor-vehicle crashes. During an equivalent three-year period from 2008 to 2010, that average fell to 655. (The number rose again in 2011; it is not clear why.) The credit does not appear to belong to bike helmets, which continue to generate serious debate. On balance, they seem to prevent death from skull fractures but do little to prevent brain injury from concussion. Traffic laws that slow cars down make a big difference. According to the Economist, dying while cycling is three to five times more likely in America than in Denmark, Germany, or the Netherlands, mainly because cars travel at more than 30 mph. Europe frequently has traffic "calming" laws to slow cars down when bikes are nearby. This helps pedestrians too: a pedestrian hit by a car moving at 30 mph has a 45% chance of dying; at 40 mph, the chance of death is 85%, according to Britain's Department for Transport.
The British seem to gather better national data on cycling accidents than anyone else, although they appear to be far worse statisticians (they unhelpfully conclude, for example, that most bike accidents occur during those times when people ride bikes). Nonetheless, they document a finding that will surprise no experienced urban cyclist: "Almost two thirds of cyclists killed or seriously injured were involved in collisions at, or near, a road junction..." In other words, cars kill cyclists at intersections. Knowing this, the single largest safety priority of every urban cyclist must be to avoid cars where possible and yield to them where not. Making this your number one safety priority brings with it some surprising implications. In short, when biking in a city, all traffic lights are yellow. Avoiding cars means stopping on green when you must and going on red when you can.


In Blue Jasmine, Cate Blanchett delivers what may be her strongest performance yet. Her frenetic, scheming, recast Blanche DuBois tops even her jaw-dropping performance in I'm Not There -- the 2007 movie in which she plays a highly plausible Bob Dylan. Jasmine is married to Hal (Alec Baldwin), a sleaze-ball Wall Streeter who delivers the requisite mansion, Hamptons home, and yacht. When Hal is exposed as a Bernie Madoff-style crook, he goes to prison and Jasmine goes to live with her working-class sister in San Francisco. In a clear homage to Tennessee Williams's A Streetcar Named Desire, nothing goes especially well after that. The movie is in most respects brilliant. Allen is an active director, his camera everywhere, his cuts clean and well considered. It is hard to argue with the verdict of The New Yorker's David Denby, who pronounced Blue Jasmine "the strongest, most resonant movie Woody Allen has made in years." Sally Hawkins and Andrew Dice Clay deliver solid performances as Jasmine's adopted sister and her husband, who never forgives Hal for costing them the only money they ever had. At one level, Blue Jasmine is the latest of Allen's attempts to get out of Manhattan, which started with Midnight in Paris and continued with To Rome, With Love. I hope that Parisians and Romans did not cringe at the complete mess Allen made of their cities the way most Bay Area residents will at Blue Jasmine. Woody Allen appears not to have progressed in his view of California since Annie Hall in 1977, where he famously puts down Los Angeles by declaring to his best friend Marty that "I don't want to move to a city where the only cultural advantage is being able to make a right turn on a red light." The working-class heroes of Blue Jasmine are not San Franciscans -- they are from Jersey. In the Bay Area, car mechanics have foreign accents (mine is Yemeni, but you can just about pick your country).
Nobody here debates where the best clams are -- that happens in New York and Boston. Even in New York, State Department officials have not had $10 million homes since the 1930s (Dwight Westlake, played by Peter Sarsgaard, should have been a venture capitalist). Dr. Flicker would have been a Cal grad and second-generation Chinese. There is no Post Street entrance to the jeweler Shreve & Co. -- Dwight was walking into a wall (or was that the point?). The produce even in a working-class grocery store would have been ostentatiously organic -- although having an Indian owner was a nice touch. The custom Karl Lagerfeld-designed Chanel clothing would have been made to look even more out of place (in truth, Blanchett wore it magnificently). Allen would not have closed the film in South Park with a shot that could have been Central Park. Or maybe he would have. Maybe a bit of Manhattan has crept into San Francisco, and in some places more than a bit. But then why would Jasmine move west? She coulda stayed in Brooklyn. Or better, New Orleans.

Early Warning Signs


Which one is the applicant?


Business competition is always interesting, in part because smart companies figure out how to avoid competition by specializing and differentiating their product or service. When Sun Tzu admonished his generals against assaulting walled fortresses, he understood that head-to-head competition is a sure path to a headache. Many US universities have not read their Sun Tzu: they compete head-to-head for the same students. In normal markets, schools would specialize. Some would seek students with strong quantitative skills; others would focus on training people who are especially empathic. Some might cater to students who write well, or are poor, male, female, interested in fashion or language studies, or born in another country. This happens, of course (except for the male part: the US has no men's colleges left and only nine all-women's colleges), but most colleges recruit for the same student profile: high grades, high test scores, compelling outside activities. They are assaulting a walled fortress. What accounts for their failure to differentiate? One problem is that universities are lazy: they compete for those who need them least. No university seeks out or even really wants the people who most need education. They seek students who will be successful even if the university is not. I wrote earlier about selection effects: the tendency of elite universities to compete for students with traits that strongly predict future success regardless of education. When those young people proceed to be professionally and often economically successful, their alma mater is always there, hand outstretched, with a gentle reminder of its formative influence. Employers, desperate for a shorthand method of segmenting talent markets, reinforce these effects by preferentially hiring graduates of "good" colleges. Pretty soon, your college becomes a critical part of your personal brand.
It's a racket, and one in which I enthusiastically participate, benefit from, and perpetuate as a parent, student, employer, and advisor to university leaders. Selection effects lead to a second market failure: universities don't scale. What other business deliberately limits access to a compelling service? Can you imagine a law firm declaring that it would only accommodate the first 100 clients? Unthinkable -- it would grow to meet demand and maybe a bit more. For top universities, being selective is not a necessity; it's a choice. Most elite schools admit about the same number of students today as they did 100 years ago -- that's what makes them elite schools. As my younger son starts to think about college, I have begun to pay attention to how colleges are thinking about him. He will be a major catch (translation: they will compete for him because he shows every sign of being a kid who will do just fine in life with or without their help). So how do colleges compete for talent? In particular, how do colleges compete for the students that they all think they want? When competing for talented high school students, universities worry about either their selectivity or their yield. Selectivity is admissions/applicants. Yield is enrollment/admissions. To boost selectivity, you do more marketing to increase the number of applicants. But to boost yield, you actually have to improve your school. Boosting yield is really hard in a competitive market, which is why yield drives college rankings (which improve yield -- a longer story). US News, the FT, the Economist, and others that have jumped into the college ranking game realize that yield is a very strong market indicator of quality. After all, a school that could admit 1,000 students and then enroll all of them would have to be seen by every student as their very best choice. Except that most schools cheat.
To avoid head-to-head competition, most universities offer students the following deal: don't force us to compete and we will give you a leg up. They call it early decision, but they should call it yield improvement. They tell students that if they apply early to their school only, the school will lower the admissions bar. Careful research suggests that students who apply for early decision receive an advantage equal to an extra 150 points on their SAT score. (Universities deny this, but the numbers are unequivocal.) Colleges enforce severe penalties against students who are admitted early and do not enroll, colluding to blacklist the offending student -- a practice that should arguably be challenged in court. Early decision is marketed as a way to reduce the stress of applying to a dozen colleges -- and it does that. But it has a benefit that few seem to have noticed: it boosts the school's yield. Every student admitted under early decision programs will attend: that's the deal. The yield on early decision admissions is 100% -- small wonder that they are growing as a share of total admissions. Nor is there anything wrong with this: it allows applicants into better schools than they would, on average, get into if they applied later. Businesses do similar things all the time, ranging from no-shop agreements during M&A discussions to exclusive distribution deals in exchange for preferential pricing. Negotiated exclusivity is a battle-tested element of many walled fortresses. In the last decade, however, the most selective schools have started to rethink early decision. They have decided to compete on selectivity by removing exclusivity. They say to students: "apply early, get an early decision, and you are not bound by our offer." Today Harvard, Princeton, Yale, Chicago, Stanford, and MIT offer nonbinding "early action" programs, and a handful of other schools do as well.
These schools realized that they were the top choice for the overwhelming majority of those they admit -- they already had great yields. So they decided to increase their selectivity by signaling that they want every strong student to apply. With no reason not to take a shot at it and no obligation to attend, applications skyrocketed, and schools that were already preposterously selective became even more so. The result of these two strategies is exactly what you would expect: applications are up and admission rates are down. Acceptances went out today, and this year Yale accepted fewer than 7% of its applicants, the lowest acceptance rate in its history. It offered 1,991 seats to 29,610 applicants for an entering class of about 1,300. Harvard admitted 5.8%, Princeton 7.3%. There are actually three reasons that Ivy League applications are up. First is early action. Second is the Common Application, an online form that makes it easier for students who do not apply for early decision to apply to many more schools than they used to. When Harvard or Stanford say that they are twice as selective as they used to be, remember that each student they consider is also applying to twice as many schools. When I was seventeen, I applied to four schools and hand-typed each application. Few kids today who are serious about college apply to fewer than eight, and many apply to more. When the music stops, most kids who have prepared themselves for college still end up with a chair. The third reason that Ivies are attracting interest is more mundane: they pay better. Harvard, Yale, Princeton, and Columbia are "need blind" schools, where the ability of a student's family to pay isn't considered by the admissions office. Harvard has announced that it will boost its financial-aid budget to $182 million, a 5.8% increase. For many students, it actually costs much less to attend an expensive, elite school than it does a local state university. The trick, of course, is getting in.
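The two metrics defined earlier (selectivity = admissions/applicants, yield = enrollment/admissions) are easy to compute from Yale's numbers. A minimal sketch in Python; the 1,300 entering class is the approximate figure given above, so the yield is approximate:

```python
# Selectivity and yield, as defined in the post:
#   selectivity = admits / applicants
#   yield       = enrolled / admits
def selectivity(admits: int, applicants: int) -> float:
    return admits / applicants

def yield_rate(enrolled: int, admits: int) -> float:
    return enrolled / admits

# Yale's figures from the year described above
admits, applicants, enrolled = 1_991, 29_610, 1_300

print(f"selectivity: {selectivity(admits, applicants):.1%}")  # 6.7%
print(f"yield:       {yield_rate(enrolled, admits):.1%}")     # 65.3%
```

The arithmetic makes the two strategies concrete: binding early decision pushes the yield ratio toward 100% without touching selectivity, while nonbinding early action inflates the applicant denominator and drives the selectivity ratio down.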
Over the next decade, early decision will not solve the core problem facing most non-Ivy universities: they are a lousy investment. Universities operate medieval business models designed for a core purpose that has disappeared. As late as the 1970s, universities accounted for a huge share of knowledge creation, storage, and transmission. They earned the right to certify who was smart and talented because they held a near monopoly on skill and knowledge. This monopoly has now vanished as knowledge creation has diffused and fragmented, information storage has become free and ubiquitous, and skill transmission has taken many forms -- some teacherless. The main privilege that colleges cling to is certification -- and that too is increasingly under challenge. The assault comes not mainly from online education, which has been widely discussed and over-hyped, but from enterprises that treat education as a business opportunity. For example, the MBA with the fastest payback now comes not from a college but from Hult International, which was tiny five years ago and is now the largest producer of MBAs in the world. 2U now builds large-scale, fully credentialed online degrees that produce thousands of graduates each year in partnership with major universities. The Mozilla Open Badge Initiative and dozens of social startups want to supplant traditional degrees with more specific and timely credentials. There are literally hundreds of enterprises devoted to disrupting higher education -- whose walled fortresses will not withstand the siege for long. Education mattered to me, it matters more to my kids, and it will matter even more to my grandkids. But thanks to competition and innovation, it will cost less, deliver more, and signal actual capabilities much more precisely than today's universities do.


Religion seems to help a lot of people. It reinforces admirable values and provides community, kinship, charity, music, a moral compass, and stories. But to me, religious faith is stone soup: like the soldiers whose boiling rocks induced villagers to donate carrots, potatoes, and meat until, once the stones were removed, there was a plentiful stew. We could worship an old bicycle chain instead of an almighty deity if it brings us together to do useful work and help the least fortunate among us. Come to think of it, there are plenty of reasons to prefer the greasy chain over dead guys in white robes. My distrust of superstition has always meant that God and I are not on speaking terms. He presumably finds my lack of gratitude amusing. So we leave each other alone. I also think that despite its good works, the cost of religion usually exceeds its benefits. For millennia, religious intolerance has been a global scourge that has led millions to a life of hatred and violent death. Churches exploit the poor more often than they help them, just as they abuse the emotionally vulnerable more often than they comfort them. Religious leaders happily frighten children into believing that they will burn in hell forever if they don't submit to church authority and doctrine. This is designed to terrify the weak into compliance -- it is nothing like parents fibbing about Santa Claus. Most religions oppress women, perpetuate outrageous sexual myths, and demand sexual conformity, but if hell exists, it surely has an especially warm corner reserved for the modern Catholic church, which abuses children on a horrific scale. That thousands of priests swore an oath of chastity before sodomizing more than ten thousand Catholic boys and (20% of the time, anyway) raping Catholic girls is both horrifying and outrageous. As with all great crimes, we will never be able to fully count the victims.
The governing body of the church admits that more than 3,000 priests have been accused of sex abuse during the past 50 years. In the US, more than 3,000 Catholics decided to speak out, lawyer up, and file lawsuits as they fled the church. The church has paid out between $2 billion and $3 billion to victims to settle these claims, depending on who is keeping score. Eight dioceses have gone bankrupt due to an inability to pay these settlements. One widely cited count documents 6,115 priests who stand accused of sexually assaulting 16,324 minors. The actual count may be a fraction of this, or it may be a multiple. What we know for sure, and the Vatican concedes, is that child abuse is simply not reported in much of the third world. If it were, one doubts that the church would still be growing there (watch the Philippines, where people are finding their voice and beginning to accuse predatory priests; the spread of Catholicism there has been stopped cold). It was thus with a jaded eye that I watched the cardinals assemble this week at the Sistine Chapel. Like everyone else, I was surprised to hear of white smoke on the second day of the conclave. I was moved to see that the man who emerged wore not the traditional large ornate cross of gold, but a simple cross of wood. The cardinals had chosen a man who as cardinal had refused his palace and limo and who, upon his elevation, refused to ascend the papal throne. Instead, he greeted the cardinals standing up, as brothers. I was stunned to hear that Bergoglio chose the name Francis, after Assisi, who renounced his wealth, lived with the poor, founded the Franciscans, and spent a lot of time protesting outside the Vatican. Francis of Assisi was never ordained as a priest, much less a pope, and as Robert Francis Kennedy noted, his is a name associated with the quest for social justice, not the papacy. In his first talk, Francis spoke simply and generously. He asked that people pray for him, not to him, as most popes do.
My antipathy to the church aside, this had to be good news. The world was stunned that an Argentinian had ascended to the papacy. This is not really surprising, since half of all Catholics now live in Latin America. Shocking to me is that the cardinals chose a Jesuit, the order that has been a centuries-old pain in the Vatican's ass. The Jesuits are an intellectual order that values study and critical debate. They are big in the US and founded some of our best high schools and colleges, including Santa Clara, Boston College, and Georgetown. Jesuit priests take vows of poverty, which most gold-bedecked cardinals think is beneath them. They are famously disrespectful of authority, challenging the Vatican on contraception, abortion, gay marriage, the role of women, the need for political revolution, and all manner of causes (to be sure, not all Jesuits dissent on all of these issues; Francis appears to conform to Vatican thinking on all of them). Throughout the ages the Vatican has kept its distance from the Jesuits. Despite their size and influence, no order has been cast out further from the center of Vatican power. The cardinals of Rome are about as likely to name a Jesuit pope as the United Nations is to name a North Korean Secretary General. Indeed, the Vegas oddsmakers did not even have Bergoglio on their list. When in his first words Francis noted that the cardinals had "reached a very long way" to find a new pope, most people assumed he was referring to the distance from Rome to Buenos Aires. He could as easily have been describing the reach it took for his fellow cardinals to support a Jesuit pope. I confess to a soft spot in my heart for Jesuits, because so many of my best teachers were of that order. In high school and in college, every single teacher who challenged me to think deeply and critically and to live a moral life was either Jewish or a fallen Jesuit. Three were former priests who decided that a vow of chastity was nuts. Will Francis liberate the church?
Nope. He is theologically conservative, even if he modernized the Argentinian church. He made some dubious compromises with the military junta that will become public knowledge in the coming months. More fundamentally, he faces an unimaginable turnaround challenge rooted in both sex and money. The ongoing financial corruption of the Vatican's Curia is deep and entrenched. Organized pedophilia and its cover-up could kill the church, and arguably should. The average age of priests increases by about 10 months per year. Still, I wish Francis well, even if I would not miss his church if it vanished. I sympathize with those who have trouble kicking the Catholic habit, and for them, and for the children who are forced into religious life before they can make a choice, I truly hope that Pope Francis fulfills the promise of his first glorious hours.

Statistical Storytellers

We now collect extraordinary amounts of data from weblogs, wifi sessions, phone calls, sensors, and transactions of all kinds. The quantities are hard to imagine: we created about five exabytes of data in all of human history until 2003. We now create that much data every two days. Deriving insights from this mountain of data is a big challenge for most organizations. People who are good at this are highly valued and in desperately short supply (career tip: study statistics). But mining data for insights is just the beginning: explaining what you have learned to non-statisticians is often an even bigger challenge. Visualizing data and forming it into a coherent story is a completely separate skill, and one that is evolving quickly. For my money, the pioneers in the field are McKinsey's Gene Zelazny and the always impressive Edward Tufte. Modern masters include Hans Rosling and Garr Reynolds. It's not easy to visualize data effectively, and it is even harder to weave it into a compelling story. Statisticians make lousy novelists -- and vice versa (career tip: study fiction). It often takes a team to do a great job of analyzing and creatively presenting information. Done well, the results are artistic, for me anyway. Here are two good examples. The first is on wealth and inequality in America -- not an easy topic to portray. The second is Hans Rosling's classic TED Talk on global demographics.

Denial is Bigger than Amazon

Talk to a book author, publisher, or retailer about the future of their business and the denials begin. "Never before have there been so many good books to read." "Books are the backbone of civilization." "Life without books is unimaginable." As a cultural argument, this may be true, but as an economic one, it is a view only of the supply side. An alleged ancestor of mine once observed that "facts are stubborn things." Forget for a moment about our beloved books. Imagine a product you care little about: chemicals perhaps, motorcycles, or furniture. Imagine the industry that produces this product: the designers, manufacturers, marketers, distributors, and retailers. You can determine the health of this industry with a few key measures. We can apply these same vital signs to the publishing business to determine whether our loved one has long to live.

Sales. Not every industry with declining sales is in trouble, but most are. According to BookScan, adult nonfiction print unit book sales peaked in 2007 and have declined each year since. Retail bookstore sales peaked the same year and have also fallen each year, according to the U.S. Census Bureau. I sold an online book business I had started at about that time for a simple reason: I couldn't figure out how to keep growing it. E-books are growing fast but do not close the gap. Print sales dropped 17% from 2010 to 2011, and e-book sales grew 117% (I have borrowed liberally from a nice fact set gathered here). The result was a 5.8% decline in total book sales, according to the Association of American Publishers. Combined print and e-book sales of adult trade books fell by 14 million units in 2010, according to the Book Industry Study Group.

Unit economics. OK, but you can make good money in shrinking industries. You may sell fewer items but make more on each item you sell. So long as you add more value than cost, customers will happily pay you for your product, even in a shrinking market. So what is happening to the unit economics of books? Start with prices. Book prices have gone up every year for more than ten years -- a good sign, right? Not necessarily, because the number of books sold continues to shrink, as noted above, while the number of books published is exploding. Bowker reports that over three million books were published in the U.S. in 2010. Of these, 316,480 were new traditional titles -- meaning that publishers introduce 867 new books every day. But that is the traditional tip of the publishing iceberg: more than 2.7 million books -- 90% of those published in 2010 -- were "non-traditional" titles. These are mainly self-published books, reprints of books in the public domain, and resurrected out-of-print books. They vary enormously. Some become best-selling light porn for housewives. Others are spam created by software that pirates an existing title in a few hours (one professor wrote a program that has produced 800,000 specialized "books" for sale on Amazon). Others are highly specialized books not attractive to a publisher. When Baker & Taylor reports that book prices are going up, they are describing traditional, not "non-traditional," publishing. What about costs? The cost of bringing a traditional book to market is high and largely fixed, so the declining unit sales of the average book is a huge problem. According to BookScan, the best count we have, Americans bought only 263 million adult nonfiction books in 2011, meaning that the average U.S. nonfiction book now sells fewer than 250 copies per year (collectors of first editions take note: it is second editions that turn out to be rare!). Only a few titles become big sellers.
In a random sample of 1,000 business books released in 2009, only 62 sold more than 5,000 copies, according to the New York Times. So we have an industry that is shrinking and being divided among many more products and players, few of which can make money. Not a good sign.
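The arithmetic behind those unit-economics figures is worth making explicit. A back-of-the-envelope sketch (the title count is implied by the numbers above, not reported directly):

```python
# Back-of-the-envelope figures from the paragraphs above (BookScan, Bowker).
units_sold = 263_000_000        # adult nonfiction units sold in the US, 2011
copies_per_title = 250          # the claimed per-title annual average

implied_titles = units_sold / copies_per_title
print(f"Implied nonfiction titles competing for those sales: {implied_titles:,.0f}")

new_traditional_titles = 316_480   # new traditional titles published in 2010
print(f"New traditional titles per day: {new_traditional_titles / 365:.0f}")
```

A million-plus titles chasing 263 million unit sales is the arithmetic heart of the problem: the fixed costs of traditional publishing are spread over ever-smaller print runs.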

Marketing and distribution. Well, maybe better marketing and more rational distribution can target new customers and grow demand while reducing costs. It happens in hard goods and industrial businesses all the time. Can we fix the book market? Investments in marketing are very tough to justify. A publisher needs to acquire the book (pay an advance to the author), then develop (edit) it, design, name, print, launch, distribute, warehouse, sell, and handle returns (about a quarter of all books flow backwards from retailer to distributor or publisher -- a huge cost avoided by eBooks). After all of this, the average conventional book generates only $50,000 to $200,000 in sales, which radically limits how much publishers can invest in marketing. Increasingly, publishers operate like venture capitalists, putting small amounts of money to work to see what catches on and justifies additional investment. Only proven (or outrageous) authors attract large marketing budgets. Increasingly, book marketing is done by authors, not publishers. But how does an author market a book? The only way she can: to her friends and community. With too much to read, we read what our friends advise (women especially read with their friends. Publishers are intensely interested in what reading groups choose to read in an era where there is no general audience for nonfiction and fiction is highly segmented). Some products catch on and a few become blockbusters -- even some books that begin as unconventional titles without publishers. So marketing is tough to fix -- how about distribution? It’s a disaster. Retail is hopeless: your chances of finding any given book in a bookstore are less than 1%. For example, there are about 250,000 business titles in print. A small bookstore can carry 100 of these titles; a superstore perhaps 1,500. Stores are for best-sellers – if there are any. Online can carry every title, but really, who needs 250,000 business books? 
Online selling solves this problem, but enjoys such massive returns to scale that concentration is unavoidable. In the latest count, Amazon had a 27% share of all book sales (including, I estimate, about two thirds of all eBook sales -- meaning that they monopolize the only part of the book business that is growing).

Underlying demand. OK, fine -- the industry is broken. But music was busted too: retailers evaporated, product proliferated and went digital, the labels' value-added shrank, and piracy was a much bigger issue than in books. And despite it all, music is making a comeback: sales are up if you count concerts and ring tones. A few people make a living at it and a few become stars. Is this the future of books? It doesn't look that way, for reasons articulated by Steve Jobs: although people still listen to music, many people have simply stopped reading books. Speaking about the Amazon Kindle, he argued:

It doesn’t matter how good or bad the product is, the fact is that people don’t read anymore. Forty percent of the people in the U.S. read one book or less last year. The whole conception is flawed at the top because people don’t read anymore.

Ouch. Setting aside Jobs's known tendency to dismiss technologies that he later pursued, is demand for books actually dropping? Are we even reading the books we buy? The evidence is overwhelming that we read fewer books than we used to. As summarized nicely in the New Yorker, the National Endowment for the Arts has since 1982 teamed with the Census Bureau to survey thousands of Americans about our reading. When they began, 57% of Americans surveyed claimed to have read a work of creative literature (poems, plays, narrative fiction) in the previous twelve months. The share fell to 54% in 1992, and to 47% in 2002. Whether you look at men or women, kids, teenagers, young adults, or the middle-aged, we all read less literature, and far fewer books. This is not a small problem, nor is it confined to the book business. The N.E.A. found that active book readers are more likely to play sports, exercise, visit art museums, attend theatre, paint, go to music events, take photographs, volunteer, and vote. Neurologists have demonstrated that the book habit builds a wide range of cognitive abilities. Reading grows powerful and important neural pathways that not only make reading easier as we do more of it, but enable us to analyze, comprehend, and connect information. Yet for the first time in human history, people all over the world are reading fewer books than they used to. Faced with compelling media alternatives, humans everywhere are abandoning the book. We read, but we are losing the habit of reading deeply. Having conquered illiteracy, we are now threatened by aliteracy. Reviving the book industry is only possible if we can revive the book itself.

Economics, Media, Other classics, Technology, WoW

Are Men Overpaid?

More women than men now graduate from college, and they earn better grades. But at every level of educational attainment, men still earn more money, and the gap grows larger with time. Are we systematically underpaying women? If so, why would labor markets behave that way? To some, the question is laughably simple: greedy male capitalists exploit women. This is true, but greedy capitalists exploit men too. Greed, not fairness, should lead to some rough market price for equivalent skill and experience unless talented women are somehow different from anything else that is bought and sold in quantity. Asserting that bosses are greedy and biased doesn't explain much. After all, greedy bosses cheerfully bid up the price of copper. At the risk of commodifying half the population, it is useful to ask why they are not bidding up the price of talented women. There are, it turns out, many reasons that women are paid less than men. Most raise issues worth addressing, whether or not they conform neatly to the "glass ceiling" narrative suggested by the infographic on the right. The reasons for pay inequality include:

The scholars accounted for differences in grades, course choices, and previous experience. Their conclusion: kids kill careers. They found that the women’s pay deficit was almost entirely because women interrupted their careers more often and tended to work fewer hours. The rest was mostly explained by career choices: for instance, more women worked at nonprofits, which pay less. A subsequent study by scholars at CUNY, also published by NBER, largely confirmed this finding.

This explanation dodges the underlying question of why the financial penalties for taking time off are so high. After all, if between age 30 and 35 I take a year of maternity leave and then work four days a week for six or seven years, I might sacrifice two years of work experience between age 30 and 40 -- meaning that as a 40 year old woman, I have the same experience as a 38 year old man who contributed zero to raising his kids. Is there really something so magical about the fourth decade of life that missing some work justifies a permanent economic penalty? A Labor Department study completed in 1992 concluded that time off for career interruptions explains only about 12% of the gender gap (not counting part time work and experience effects and, unlike Goldin, et al., they did not focus only on MBAs).
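The arithmetic in that example is simple enough to spell out (my sketch, using the numbers in the paragraph above):

```python
# One year of maternity leave plus a four-day week for six to seven years.
leave_years = 1.0
part_time_years = 6.5           # "six or seven years"
fraction_worked = 4 / 5         # four days out of five

experience_lost = leave_years + part_time_years * (1 - fraction_worked)
print(f"Work experience sacrificed: about {experience_lost:.1f} years")
```

About 2.3 years of foregone experience over a decade -- hardly the stuff of a permanent career-long penalty.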

There remains of course, the threshold question of whether women should be the default caretaker and disproportionately bear the professional cost associated with raising children. In many households of course, they do not -- but this is still the exception.

Most studies do not ask why a profession earned less money to start with. After all, pay in many professions (including teaching) declined as they became more female and pay in some current professions (including law) appears to be going through something similar. Scholars who study these differences often have trouble sorting out historic patterns of gender discrimination from productivity or skill related pay differences.

In some cases, women also seem to choose firms within an industry that pay both men and women less (perhaps because they offer more flexible work arrangements). Janice Madden studied women stockbrokers, for example, whose pay is strictly performance-driven. She documented that women were assigned inferior accounts, and that they performed as well as men when they were not; even so, a relatively small share of the total pay gap was the result of this unequal treatment. Although the industry paid women quite well, women were more likely than men to work in smaller, less successful brokerages.

There is plenty of evidence that this example was not unusual, even though my response probably was. The problem begins with expectations: women expect to be paid less than men do. A 2012 survey of 5,730 students at 80 universities found that women expected starting salaries that were nearly $11,000 lower than those of their male classmates. Women veterinarians, who bill their own clients at rates they set, were found to set their prices lower than their male colleagues and to more frequently "relationship price," meaning not charge friends or clients for small amounts of work. A similar effect occurs in law firms, where a lucrative partnership often depends on billed hours. The most prominent scholarly work in this area is by Linda Babcock at Carnegie Mellon, whose book title captured her major finding: Women Don't Ask. Babcock realized the problem when she noticed that the plum teaching assistant positions at her university had gone to men who had bothered to ask about them, not to women, who expected them to be posted somewhere.

The effect on women of not negotiating is huge. According to Babcock, women are more pessimistic about how much is available when they negotiate, so they typically ask for and get less -- on average, 30 percent less than men. She cites evidence from Carnegie Mellon masters degree holders that eight times more men than women negotiated their starting salaries. These men were able to increase their starting salaries by an average of 7.4 percent, or about $4,000. In the same study, men's starting salaries were about $4,000 higher than the women's on average, suggesting that the gender gap in starting salaries might have been closed had more of the women negotiated. Over a professional lifetime, the cost to women of not negotiating was more than $1 million.
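A $4,000 starting gap compounds because later raises are typically a percentage of current salary. A hedged sketch (the 3% raise rate and 40-year career are my assumptions; Babcock's $1 million figure additionally assumes the foregone pay is invested):

```python
# A $4,000 starting-salary gap, carried forward by percentage raises.
gap = 4_000
annual_raise = 0.03             # assumed annual raise rate
career_years = 40               # assumed career length

lost = sum(gap * (1 + annual_raise) ** year for year in range(career_years))
print(f"Cumulative earnings lost: ${lost:,.0f}")
```

Even before counting investment returns, a single skipped negotiation costs roughly $300,000 under these assumptions -- which makes a seven-figure lifetime cost plausible once compounding returns are included.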

Fortunately, this is pretty easy to fix. Women can learn quickly that everything is negotiable. The Jamkid pointed me to a recent investigation by his teacher John List at the University of Chicago, showing that given an indication that bargaining is appropriate, women are just as willing as men to negotiate for more pay. List finds that men remain more likely than women to ask for more money when there is no explicit statement in a job description that wages are negotiable.

Although legislation and litigation will surely be useful to discourage and penalize employers who systematically discriminate against women at scale, as WalMart is alleged to have done, most of the forces that contribute to inappropriately low pay for women will not be remedied in court. Two policy remedies however, could make a large difference and are politically achievable.

California is the only state that currently requires paid parental leave. Initial evidence suggests that the act has more than doubled maternity leave, from three to seven weeks (barbaric by European standards), and raised the wages of new mothers by 6-9%. It's a start, but we should join the modern world, and perhaps follow Denmark, which last I checked required that husbands take equal time away from work on the birth of a child in order to minimize the long term impact on women's earnings. To those who worry that this subsidizes overpopulation on a capacity constrained planet, I would point to declining birth rates throughout Europe and the realization, which is slowly beginning to dawn on the world, that we face far more threats from low birth rates at the moment than we do from high ones.

It might also begin a deeper, more fact-based, discussion about the sources of economic inequality. And such a disclosure would quickly expose the most embarrassing economic fact of all: some men -- but relatively few women -- are shockingly overpaid.

Economics, Politics, WoW

Manufacturing Myths in Palo Alto and Pittsburgh

The transition from fields to factories always mixes agony with hope. Families abandon land and traditions that often go back generations, move to cities, and reset their lives from sunlight to time clocks. Mass production industries flourish for a few generations. Life is hardly a bed of roses, but it is nearly always better or people would return to the farm -- and nobody returns. Factory work means educated kids, savings, medical care, and consumer goods like refrigerators and cars. Manufacturing jobs may be dangerous or tedious, but they also deliver opportunity and hope to millions of people.

Eventually, of course, this all changes. Service industries like hospitality, health care, education, or banking grow faster than manufacturing. Consumers buy stuff made elsewhere. In the US at least, income disparity has increased and life experience has stratified until many people -- including many with low incomes -- have no understanding of manufacturing or factory work. Factories seem somehow dirty, Dickensian, and something to be avoided.

The result in the US is a schizophrenic attitude toward manufacturing. We are divided between those who see no future for factories and those who believe that manufacturing is vital to our economy. It's Palo Alto vs. Pittsburgh. Sunny Silicon Valley typically sees the future in online technologies, clean tech, or biotech and associates manufacturing with an economic time and place as far gone as the family farm. Pittsburgh's workers, managers, policymakers, and professors argue passionately that the decline of the middle class and the decline of manufacturing employment are inextricably linked, and urge government action to restore our competitive position. As both a confirmed Silicon Valley technologist and a former machinist, union man, and factory worker, I understand both world views. Perhaps more importantly, I have studied a recent report by my former colleagues at the McKinsey Global Institute that details the role of manufacturing in the US and global economies (click here to download the 170 page report or here for the summary. I highly recommend it and relied on it for most of the data and charts that follow). The punch line: Palo Alto and Pittsburgh both have it wrong, even when their prevailing myths contain elements of past truths. Manufacturing still matters, but for different reasons than either group believes. Let's start with Silicon Valley. Palo Alto sees America through a prism coated in software and web services, with an economic future built on service and information businesses. With the quaint and unprofitable exceptions of Tesla, the odd 3D printer, and notwithstanding the atonal musings of Andy Grove, we haven't made anything in Silicon Valley since we drove out disk drives and semiconductors a generation ago. We view manufacturing as a relic of the industrial age, not as an engine of innovation. This belief is held in place by several myths, including:

1. Companies that make things have a lot in common.

Manufacturing is not a sector: companies that make things vary enormously in the nature of their products, operations, and economics.

Some, like steel or aluminum plants, are incredibly energy intensive and heavy. Manufacturers need to be near water (for transport), raw materials, and cheap power (Alcoa chairman and former Treasury Secretary Paul O'Neill once described aluminum to me as "congealed electricity"). Labor costs are completely secondary.

Pharmaceuticals, in contrast, live or die on product development. They need access to capital, technology, and skilled researchers. A furniture maker needs semi-skilled workers and access to distribution.

It is hardly useful to talk about manufacturing as a single thing -- it really isn't. The McKinsey report segments manufacturers into five groups and describes the requirements and challenges of each, illustrated on the right. The scheme illustrates fundamental differences between manufacturing sectors, although there remains enormous variation even within segments.

These groups require vastly different skills and have fared quite differently in advanced countries, with the final group, the so-called labor-intensive tradables, not surprisingly accounting for the biggest share of job losses.

  2. Manufacturing is a commodity that contributes little to the US standard of living.

Nope: manufacturing matters, just not like it used to. McKinsey found that throughout the developed world, manufacturing is declining in its share of economic activity but contributes disproportionately to a nation's exports, productivity growth, R&D, and innovation.

As the chart on the right illustrates, manufacturing contributes to productivity growth (the basis for all increases in living standards) at about double the rate that it contributes to employment. It also produces spillover effects that are frequently not captured in data about manufacturing.

Manufacturing adds economic value, much of which is transferred to consumers in the form of lower prices (which are economically indistinguishable from a pay increase). On a value-added basis, manufacturing represents about 16% of global GDP, but accounted for 20% of the growth of global GDP in the first decade of this century.

Finally, manufacturing accounts for 77% of private sector R&D, which drives a huge share of technology innovation. It is far from clear that Silicon Valley would exist without it.

  3. Our future is in knowledge-intensive services, not manufacturing.

Once again, our traditional categories are not helpful. Manufacturing frequently is a knowledge intensive business. (It surprises many people to learn, for example, that there are more dollars of information than dollars of labor in a ton of US made steel.)

Manufacturing is increasingly data intensive. Big Data is revolutionizing manufacturing products and processes, no less than services. Data enables manufacturers to target products to very specific markets. The "Internet of Things" relies on sensors, social data, and intelligent devices to rapidly inform how products are designed, built, and used. Huge data sets have also enabled new ways for manufacturers to gather customer insights, optimize inventory, price accurately, and manage supply chains.

This is not your father's factory. Most US manufacturing jobs are not even in production. As the accompanying chart shows, they are service jobs linked to manufacturing or inside manufacturing companies.

  4. Manufacturing depends on low cost labor, which is why it has fled overseas. 

This particular conceit is endemic in Palo Alto. McKinsey documents one possible reason: no sector, not even textiles, has shifted production overseas as fast as computers and electronics. Indeed, as the chart at the right illustrates, some manufacturing sectors have actually added jobs during the past ten years.

There is a second dimension to this myth however: that manufacturing jobs are factory jobs. As illustrated above, many jobs in manufacturing companies are service like jobs, including R&D, procurement, distribution, sales and marketing, post sales service, back office support, and management. These jobs make up between 30 and 55 percent of manufacturing employment in the US. Much of the work of manufacturing does not involve direct product fabrication, assembly, warehousing, or transportation.

The final misunderstanding is that most factory jobs are unskilled or low paying. In fact, manufacturers world wide are currently experiencing chronic skill shortages. McKinsey projects a potential shortage of more than 40 million high skilled workers around the world by 2020 -- especially in China.

In short, the standard Silicon Valley view is much too narrow: manufacturing is and will remain a high value industry that contributes meaningfully to our standard of living. Manufacturing (some of it, anyway) is a competitive asset. Move east to Pittsburgh, and you will quickly discover that a completely different manufacturing mythology prevails, focused mainly on job creation. In these parts, the loss of manufacturing jobs is understandably considered a crisis for the US. Politicians pay homage to "good-paying manufacturing jobs" and blame the inability of a high school grad to get a factory job that supports a family, a home, and a motorboat on cheatin' Chinese and union-bustin' outsourcers. Dig a bit deeper, and you will discover that these beliefs are also grounded in economic myths, such as:

1. Manufacturing jobs pay more than service sector jobs.

This view often reflects the wishes of people with a history in "rust belt" manufacturing. In fact, manufacturing jobs pay very much like service jobs do -- except at the very low end, where manufacturing creates far fewer of the minimum wage jobs that are common in hospitality and retail.

Part of the reason for this of course, is that manufacturers can have low value work performed overseas -- not an option for McDonalds, Walmart, or others who deliver services face-to-face.

As shown on the right, manufacturing creates about the same number of jobs in each pay band as the service sector does, except that there are fewer low-paying jobs and a few more high paying ones. An important caveat is that manufacturing company jobs may be more likely to include benefits, which are excluded from this calculation.

That all said, it is no longer a given that manufacturing is a source of better paying jobs.

  2. We should look to manufacturing for the jobs we need.

OK, but at least manufacturing creates decent jobs. Why not promote manufacturing to create jobs -- even if they pay the same as service sector jobs?

The answer depends on your country's stage of development, on domestic demand for manufactured goods, and on how robust your service sector is. For the US, the case for public policies favoring manufacturing is weak.

McKinsey documents what many have observed: manufacturing jobs decline once a country reaches about $7-10,000 GDP/person, as illustrated on the right. This pattern holds both across and within countries. As a result, manufacturing jobs are declining everywhere, except in the very poorest countries (even China is losing manufacturing jobs).

But not all low cost labor countries enjoy equivalent manufacturing sectors. More important even than stage of development is the level of domestic demand for manufactured goods and the robustness of the domestic service sector. The US and the UK have such large service sectors that we derive a smaller share of our GDP from manufacturing, even though in absolute terms both countries have robust manufacturing sectors.

3. Low wage nations like China are stealing our manufacturing jobs.

There are typically two parts to the belief that US jobs are flowing overseas. First is the underlying view that jobs are a zero-sum asset to be fought over like territory. This idea has political salience, but is economic nonsense. Jobs are the complex result of many things including the availability of public or private capital, legal and regulatory systems, local demand conditions, and managerial competence. Cheap Chinese labor is typically the least of it.

The other idea however, is that we can somehow return to 1950 when unionized manufacturing jobs dominated the US economy. This is no more likely than a return to small family farming (and like those who romanticize what Marx aptly termed "the isolation of rural life", those who idealize factory work often have suspiciously clean fingernails).

As the accompanying chart shows, manufacturing as a share of economic activity is in long term secular decline in all high and middle income countries worldwide -- including China. It is only growing as a share of the economy in very poor countries. As the UN has pointed out, Haiti is in desperate need of sweatshops. Vietnam and Burma are growing manufacturing's share of economic output -- often at China's expense.

Manufacturing matters enormously, just as agriculture does. But it is not growing as a share of economic output. (McKinsey highlights one interesting exception to this rule. Sweden has maintained manufacturing as a share of its economy by targeting high growth sectors and especially by investing twice as much in training as other EU countries. Most importantly, however, it devalued the krona against the euro to make exports competitive -- effectively taxing imports.)

4. Companies build plants overseas in search of cheap labor

There was a time when labor costs were a determining factor in locating production facilities. This is much less true today, when location decisions are driven by many factors other than labor costs, as the chart on the right illustrates.

Depending on how a company competes and whether it is locating research, development, process development, or production facilities, its location criteria may or may not turn on factor costs such as labor. Proximity to consumers or to talent may matter more. In some cases taxes matter; in others, access to suppliers does.

The rising cost of commodity inputs and transportation during the past two decades has altered this calculation. Steel, for example, was about 8% iron ore cost and 81% production cost as recently as 1995. Today ore is more than 40% of the cost of a ton of steel and production costs are only 26%. Steel companies care much more about the cost of ore than the cost of labor.

Likewise transportation costs have skyrocketed with energy prices and infrastructure demands (the US grows highway use by about 3%/year and grows highways by about 1%/year. Anyone living here knows the result). Producers from P&G to Ikea and Emerson now are forced to locate plants near customers to minimize transportation costs. As a strategy for plant location decisions, labor arbitrage looks very 1980s.

5. If consumers would only buy local, we could restore our manufacturing base. 

Politicians and union leaders say this all the time and it is sheer idiocy. Most would not be caught dead in my German car, which was designed and made in Tennessee, but beam proudly at the sight of a Buick van imported from China.

High productivity manufacturing benefits consumers, as companies pass on savings to Americans in the form of lower product costs. As illustrated by the chart to the right, most consumer durables cost about what they did in the 1980s -- and quality is much higher. Economists have estimated that Walmart, Target, and Costco reduce retail prices by 1-3% each year because they pass to consumers savings extracted from manufacturers. (This, by the way, is a big reason that manufacturing continues to shrink as a share of our economy: we pay less for our stuff and more for services like education and health care.)
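A 1-3% annual price cut sounds small, but it compounds into a large transfer to consumers over time. A quick sketch (the 20-year horizon is my choice for illustration):

```python
# Compounding effect of annual retail price reductions of 1-3%.
for annual_cut in (0.01, 0.03):
    relative_price = (1 - annual_cut) ** 20
    print(f"{annual_cut:.0%}/year for 20 years -> {relative_price:.0%} of the starting price")
```

At the high end of that range, two decades of compounding cuts real prices nearly in half -- economically indistinguishable from a substantial raise.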

Americans say we believe in "Made in USA" campaigns, but as consumers, we are famously delusional. When surveyed, we profess to favor locally produced merchandise. But our wallets don't lie: we buy high quality, low cost stuff regardless of where it comes from.

So how do we grow US manufacturing? Same as always: by creating innovative materials, processes, and products. McKinsey sees "a robust pipeline of technological innovations that suggest that this trend will continue to fuel productivity and growth in the coming decades". Of course, most innovations are hard to foresee. One reliable source of innovation turns out to be anything that reduces weight, such as nanomaterials, some biotech, lightweight steels, aluminum, and carbon fiber. It turns out that although we buy more stuff each year, the total weight of our purchases actually declines, because nearly everything we buy, including cars and airplanes, weighs less than it used to.

Manufacturers have come to appreciate the power and the necessity of innovation. During the Clinton administration debates over CAFE standards, car company engineers soberly advised us that the theoretical limit of internal combustion engines was a 10-15% improvement over the then-average of 17 miles per gallon. Today these companies have already doubled that efficiency and speak openly about doubling it again, even as they invest in non-combustion solutions that are even more efficient.

In short, manufacturing matters for different reasons than it used to. It used to be a plentiful source of unskilled jobs; today its value is as a driver of innovation, productivity improvement, and consumer value. It's an exciting part of the economy, even if it cannot solve every problem we face related to job creation and economic growth.
Economics, Media, Politics, Technology, WoW

The iPhone Gets Nuanced

I got the iPhone 5 because it was free: the loathsome ATT charged me $300 and I sold the old iPhone 4 on eBay for that amount. I hate ATT, but their 4G LTE is really fast in the Bay Area, and factory unlocking a phone under contract to enable overseas SIMs and free tethering is trivially easy. The new phone is like the old one but bigger, faster, and thinner -- reinforcing my view that post-Steve Apple is an incremental innovator, not a disruptive one. Most of the changes are the result of a new operating system, not new hardware. But one feature is blowing me away -- totally changing how I use my phone. The new feature is keyboard dictation, which appears on all iOS 6 keyboards, whether you have the new iPhone or not. By dictation, I emphatically do not mean Siri. Siri is a dog that performs a few well-chosen show tricks and inspired at least one hysterical advertising spoof. Siri is very useful for directions, reminders, OpenTable reservations, and a good laugh. Siri entertains -- but dictation delights. Dictation has been around for a decade, and on iOS since the 4S and third generation iPad, but it was always more trouble than it was worth. Suddenly, dictation not only works, it works shockingly well. For text messages, emails, tweets, and even first drafts of longer documents, it is massively faster to dictate than to type (unfortunately I still need to type blog posts the old way. Maybe that explains the 60 day hiatus...). I have a hard time understanding why Apple is not using its ad dollars to promote dictation rather than Siri -- unless the processing costs are huge and they are losing money on the feature. What changed? In a word, Nuance, plus a massive investment in cloud infrastructure. Nuance Communications is the public company behind Dragon Dictate, which has been the market leader in desktop speech recognition for at least 15 years (the company was founded in 1992 out of SRI as Visioneer, known mainly for early OCR software).
Neither Apple nor Nuance talks about it, but it looks to many people like Apple has licensed its dictation software, including Siri's front end interpreter, from Nuance. One sign: before Apple bought Siri, Siri used to carry a "speech recognition by Dragon" label (earlier, Siri had used Vlingo, which apparently did not work as well). Not only that, but Nuance has built several speech recognition apps for the iPhone and iPad that work exactly like the speech recognition built into the iPad and iPhone 5. This is interesting in part because Apple never licenses critical technology for long. It insists on controlling its core technology from soup to nuts, so many people assume that Apple has considered buying Nuance. The problem is that Nuance holds licenses with many Apple competitors who would disappear if Apple bought the company. Apple would need to massively overpay for the asset -- something it never does. More likely, Apple will hire talented speech recognition people and build its own proprietary competing product, just as it did with maps when it declared independence from Google. In that case, figure that dictation will regress for a year or two, just as maps have done, because real time, accurate speech recognition makes maps look simple. Plus, Nuance protects its patents aggressively, and these patents are, according to some writers, not easy to avoid. Google, however, is avoiding them nicely; Android speech recognition is also outstanding. How do they do it? The Google way: throw talent at it. Google hired more PhD linguists than any other company and then hired Mike Cohen. Cohen is an original co-founder of Nuance, and if anyone can build voice recognition without tripping on the Nuance patents, he can. Apple appears likely to pursue a similar course.
Mobile dictation works by capturing your words, compressing them into an audio file, sending it to a cloud server, processing it using Nuance software, converting it to text, and sending it back to your device, where it appears on your screen. Like all good advanced technology, it passes Arthur C. Clarke's third law: it is indistinguishable from magic.

The tricky bit is the software processing, which has to apply a rich set of rules based on context. The software decides on the meaning of each word based not only on the sound pattern, but on the words it heard before and after the word it is deciding upon. This is highly recursive logic and nontrivial to execute in real time. Try saying "I went to the capital to see the Capitol", "I picked a flower and bought some flour", or "I wore new clothes as I closed the door" and you begin to understand the problem that vexes not only software, but English learners everywhere. Apple dictation handles these ambiguities perfectly -- meaning that it either gets the answer right, or it realizes that there are multiple possible answers, takes a guess, and hovers the alternative so that you can correct it with a quick touch.

It takes a little bit of practice to use dictation well. It helps to enunciate like a fifth grade English teacher and to learn how to embed punctuation. The iOS 6 User Guide has a list of available commands. Four are all you need: "comma", "period", "question mark", and "new paragraph" (or "next paragraph"). You can also insert emoticons: "smiley" :-), "frowny" :-(, and "winky" ;-). For anything else, speaking the punctuation usually works: "exclamation point", "all caps", "no caps", "dash", "semicolon", "dollar sign", "copyright sign", "quote", etc.

Overall, the experience of accurate mobile dictation is a magic moment -- like the first time you use a word processor or a spreadsheet (for those who recall typewriters and calculators), or the first browser or email (yeah, we didn't used to have those, either).
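The flower/flour problem can be illustrated with a toy sketch. To be clear, this is emphatically not Apple's or Nuance's algorithm -- the context sets and the scoring rule below are invented for illustration -- but it shows the basic idea: when the recognizer hears a sound that matches several words, it scores each candidate against the neighboring words and keeps the best fit.

```python
# Toy homophone disambiguation. The CONTEXT sets and scoring are invented
# for this sketch; real recognizers use statistical language models trained
# on enormous text corpora.

# Words that typically appear near each candidate (hypothetical).
CONTEXT = {
    "flour":  {"baked", "bought", "cup", "dough", "bread"},
    "flower": {"picked", "garden", "bloom", "smelled"},
}

# Words that sound alike map to the same candidate list.
HOMOPHONES = {
    "flour": ["flour", "flower"],
    "flower": ["flour", "flower"],
}

def disambiguate(words, i, window=3):
    """Pick the homophone whose typical context best matches nearby words."""
    candidates = HOMOPHONES.get(words[i].lower())
    if not candidates:
        return words[i]  # unambiguous word: pass it through unchanged
    neighbors = {w.lower() for w in words[max(0, i - window):i + window + 1]}
    # Score each candidate by how many of its context words appear nearby.
    return max(candidates, key=lambda c: len(CONTEXT[c] & neighbors))

heard = "I picked a flour in the garden".split()  # recognizer misheard "flower"
print(" ".join(disambiguate(heard, i) for i in range(len(heard))))
# → I picked a flower in the garden
```

The real problem is vastly harder -- the recognizer is choosing among sound hypotheses, not spellings, and doing it recursively across the whole sentence -- but the principle of letting surrounding words vote is the same.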
Give it a try. Apple has done something amazing and for once, actually under-hyped it. 
Technology, WoW

George McGovern 1922-2012

George McGovern. All photos (c) NYT

When George McGovern ran for president, I was the age the JamKid is now. He was then, and remained, a remarkable and vastly underappreciated American. He was a decorated war hero who had seen gruesome combat and calmly led a massive crusade against the Vietnam war. He was a professor with a PhD in History who would never have dreamed of calling himself "Dr. McGovern" (unlike, say, Germany, where most politicians are eager to run as Herr Doktor). He was a democratic Democrat, whose commission reset the party rules and stripped the insiders of much of their power. More than anyone, McGovern closed the smoke-filled rooms (and frankly, made the Party more difficult to govern and more dependent on large donors). He directed the Food for Peace program and believed profoundly in helping the poor and desperate, even in the face of evidence that foreign aid did not promote economic self-sufficiency. He was an early proponent of dietary guidelines and as early as 1973 warned of the growing amount of sugar in the US diet.

I met McGovern a few times and had dinner with him once. He was a modest, self-effacing guy who knew a surprising amount about labor history. I learned that his dissertation was on the 1913 Colorado coal strikes. He also knew a lot about farming, not simply because he was from South Dakota, but because he had a lifelong aversion to hunger after seeing Italians starving during his wartime service. I learned that he was probably the last person ever to speak to Bobby Kennedy -- whose assassination shook him, and me, even more deeply than the loss of JFK.

Fighting sugar in 1975

McGovern was widely reviled. During his run for the presidency in 1972, the New York Post referred to him as "George S. (for surrender) McGovern" in virtually everything it wrote. He was not a great campaigner, although he brought hundreds of people into politics and many of them stayed -- including Bill Clinton. His hastily-considered choice of Missouri Senator Tom Eagleton as his running mate ranks with McCain's choice of Sarah Palin among textbook examples of disastrously poor vetting. Despite an Obama-like grassroots campaign led by campaign manager and future Senator Gary Hart, McGovern lost 49 states to Richard Nixon, the worst landslide in modern US history. Although he later joked that "for many years, I wanted to run for the Presidency in the worst possible way and last year, I did", it had to hurt to lose an election to a man he knew to be deeply dishonest and corrupt.

With campaign staffer, Bill Clinton

In later years, the former minister, professor, Congressman, global food program director, Senator, and presidential candidate ran a 150-bed inn in Stratford, Connecticut. After the business went bankrupt, he reflected often and publicly on the role of government regulations and lawsuits in constraining small business. At one point, he surprised conservatives when he wrote in the Wall Street Journal that "I ... wish that during the years I was in public office I had had this firsthand experience about the difficulties business people face every day. That knowledge would have made me a better U.S. senator and a more understanding presidential contender."

Part of why politics in the US works is that people as courageous and talented as George McGovern are drawn to public service. I worry that this is becoming less true. In part due to reforms McGovern championed, parties are weaker, and the path to public office is now less dependent on political parties and more dependent on large financial backers than ever before. We are at risk of drawing more heat than light to the national stage. It will be ironic and unfortunate if the result of George McGovern's wonderful career is that we see fewer like him in the future.
Politics, WoW

Store Closing: the Death of Brick and Mortar Retail

In 1997, I had an idea: if I could aggregate millions of used, rare, and out-of-print books from around the world on a single website, I could enable people to find and buy books that were otherwise impossible to locate. Like hundreds of others with similar ideas for selling things online, I started an ecommerce company. As Atlantic writer Derek Thompson points out, that was the year that the US enjoyed an odd service sector convergence: 14 million Americans worked in retail, 14 million in health and education, and 14 million in professional & business services.

Fifteen years later, the landscape has changed. "Books You Thought You'd Never Find" is a silly idea. Book retailers are dying. The company I founded has made an impressive effort to transition from retail to services. The employment picture reflects these changes. Health care jobs have grown by almost 50%, professional/business services grew almost 30%, but, as the chart below illustrates, retail grew less than 3%, adding only 26,000 jobs a year. There is mounting evidence that retail employment is now about to decline sharply. Fifteen years from now, these may be the good old days for brick and mortar stores.

Retail revolutions are nothing new. Boutiques challenged general stores throughout the nineteenth century. Department stores arose starting with Wanamaker's in 1896 and challenged boutiques. Starting in the 1920s, car-friendly strip malls challenged main streets. In 1962, Walmart, Target, Kmart, and Kohls each opened their first store and initiated the era of big box retail. In 1995, Jeff Bezos incorporated Cadabra -- but changed the name to Amazon at the last minute, in part because it started with an "A" and most internet search results were alphabetical. Today, e-commerce is not just killing some stores -- it is killing almost all stores. There are very few successful brick and mortar retailers left. Consider the obvious losers in recent years -- none much lamented.
Demographic changes are also putting pressure on stores. Urbanization hurts strip malls. Baby Boomers no longer have kids at home. Their kids are marrying later and delaying having their own children, meaning fewer are buying houses that need to be updated and furnished. As these Millennials hit their peak spending years, they are completely accustomed to shopping online. For many Millennials, shopping malls were a teenage social venue -- not a place to buy stuff. It is no accident that shopping malls have yet to emerge from the recent recession.

There is good reason to expect this change to accelerate. Physical retailers are typically very highly leveraged and operate on narrow profit margins. Material declines in their top lines make them quickly unprofitable. As stores close or reduce selection, more customers become accustomed to shopping online, which accelerates the trend. E-commerce maven turned VC Jeff Jordan recently cited the example of Circuit City, which was "preceded by just six quarters of declining comp store sales. They essentially broke even in their fiscal year ending in February 2007; they declared bankruptcy in November 2008 and started liquidating in January 2009." Nor, Jordan notes, did the bankruptcy of Circuit City help out Best Buy any more than the loss of Borders helped Barnes & Noble. "Not even the elimination of the largest competitor provides material reprieve from brutal market headwinds."

There are a few bright spots in the wasteland of retail stores. The need for fresh food and last-minute purchases means that 7-11, Trader Joe's, and the corner produce grocer have enduring customer demand.
Customers may want to touch some high-ticket items before buying them, which gives high-margin retailers like Williams Sonoma and Apple an opportunity to offer a fun, informative, hands-on retail experience (although both of these companies do a growing share of their business online, and Apple lets you pay for items under $50 on your phone and walk out of the store without ever talking to a salesperson or cashier). Stores like Home Depot that do a significant share of business with contractors, who are time-sensitive rather than price-sensitive and need a large number of items quickly, have sustainable value propositions.

Online commerce enjoys enormous advantages, from vastly larger selection, much lower fixed costs and debt, to a more customized shopping experience and 24/7 operations. Small wonder that revenue per employee at Amazon is nearing a million dollars, whereas at Wal-Mart -- once a paragon of retailing efficiency -- it is under $200,000. Hundreds of websites, not simply Amazon, have benefitted from the explosion of online retail -- and tens of thousands of small retailers use Amazon's or eBay's commerce infrastructure to power specialized businesses. Of course UPS and FedEx benefit as well, in part because they make money on both the initial sale and on the subsequent return of wrong sizes and unwanted gifts.

Since the dawn of commerce in the Nile delta, humans have purchased goods in physical markets. No doubt we will continue to purchase, or at least preview, stuff in stores. But if e-commerce achieves a fraction of the opportunity it currently has in front of it, retail stores as we currently know them will become a thing of the past. It is hard to imagine that we will miss them for long.
Economics, Other classics, Technology, WoW

Memo to the New Chancellor: Saving UC Berkeley

Dear Newly Appointed Berkeley Chancellor:

Congratulations! Even though as of this writing you have not yet been named, you take over the leadership of UC Berkeley at a critical time. At the end of your tenure, the world's premier public university will either have found a sustainable path forward or will have entered a period of long-term decline. Do us a favor -- do not screw this up.

You arrive at a moment when higher education is in wonderful and overdue ferment. Online education is challenging your traditional business model and unsustainable tuition increases. Badges and other alternative credentials threaten your historic right to certify talent. Most of all, Berkeley, like other public universities that serve as engines of knowledge-creation and social mobility, is under unprecedented financial pressure.

Berkeley, in particular, has a lot at stake. It is, as I noted here, an amazing public institution, despite its bottomless capacity for self-parody (as you know, my wife is a dean at Cal). 48 out of 52 Berkeley doctoral programs rank in the top 10 of their fields nationally -- the highest share of any university in the world. By any measure -- NSF Graduate Research Fellowships (#1), National Academy of Sciences members on the faculty (#2 behind Harvard), members of the National Academy of Engineering (#2 behind MIT), membership in the American Philosophical Society, the American Academy of Arts and Sciences, or winners of the National Medal of Science -- Berkeley excels. It is by a considerable distance earth's finest public university.

And it serves a public mission. Berkeley's single proudest claim, ahead even of its 24 national rugby championships, is that it enrolls more students on Pell Grants than all of the Ivy League schools put together. A Pell Grant is a scholarship based on financial need. By serving academically qualified students on Pell Grants, Berkeley ensures that smart, hard-working kids from low-income families have access to a top-flight education.
You may regret the flow of private funds into a public university, but you cannot and should not try to prevent it. Actually, you will devote a great deal of time to encouraging private donations so that Berkeley can remain accessible to middle-income students who are not eligible for Pell Grants. This requires building organizational muscles that atrophied when Berkeley, like most public universities, avoided the intellectually distasteful but indispensable work of raising private funds. Berkeley is still building the endowment required to sustain these efforts. The endowment matters because over time, money buys quality -- just ask Stanford. It is no accident that although many state universities undertake serious research and offer outstanding educations, only three of the "Public Ivies" -- Texas, Michigan, and California -- make the list of America's best-endowed universities.

You realize, of course, that the endowment data shown here are misleading in one important respect: the University of California is less a university than a federation of ten highly autonomous campuses, ranging from prestigious broad-spectrum research institutions like Berkeley, UCLA, and San Francisco, to campuses with pockets of excellence like San Diego, Irvine, and Davis, to schools like Riverside and Merced that are not easily distinguished from the State University. These data also illustrate why the Regents hired your boss. Of the great public universities, only the University of Texas took endowment-building seriously, making former UT President Mark Yudof irresistible as the current president of UC. But Berkeley, along with San Francisco and UCLA, has begun to focus on endowment building. It is no surprise that, taken together, these three campuses now hold three-quarters of all UC endowment funds.
Faced with the choice of compromising academic excellence, raising tuition to levels that reduce access to higher education for many students, or undertaking a covert privatization to maintain the finances of their institution, all three of these schools have raised tuition and quietly sought private funds. Your job is to continue this course.

Soft privatization is not without its management challenges, however, especially at Berkeley. First, Cal is a state-owned enterprise. The barely functional California government retains full and largely unwelcome control over your budget and governance, even though it contributes less each year to your operating revenue. Second, your boss happily taxes richer campuses like yours to support poorer ones, so to raise a dollar of endowment, you will often have to attract more than a dollar of donations. Third, strong faculty governance provisions, while occasionally improving decision quality, mostly serve to protect the comfortably and correctly tenured and prevent needed program rationalization.

Your biggest risk is not privatization -- it is paralysis. To get Berkeley fully upright and sailing, you need to mend both a broken income statement (you lose money every year and must stop the bleeding, even if the state makes good on a new round of cuts) and a broken balance sheet (your endowment may be larger than other UC campuses', but it is still pathetic. Look at the data.) Nothing else you do will matter unless you set audacious goals to fix your core economics. I respectfully suggest two:
Granted, $150 million each year does not plug your entire budget shortfall -- but it is a serious start that would be noticed by alumni and other donors. You will need to continue to rationalize campus operations and consolidate weaker units or athletic programs (remember, Berkeley had a Mining program until a brave chancellor decided to face reality). Along with everyone else in California, I wish you the very best of luck in your new position. Just don't mess with the rugby team. Some cows really are sacred.
Economics, Media, Politics, WoW

