Two decades ago, Steve Ballmer proudly showed his mother a copy of Windows 95, the new operating system with a "start" button so prominent that Microsoft bought the rights to the Rolling Stones' "Start Me Up" for the launch. When Ballmer's mother asked him how to shut the program down, however, Ballmer could have sung "you make a grown man cry" as he confessed that to stop Windows, you press the start button.
Microsoft has never understood the deep loathing it inspires in those forced to use it. Its software is frequently ugly -- as Steve Jobs memorably put it, "not in a shallow way, but in a deep way". It is badly architected, contains embarrassing UI goofs, and assumes that users will adapt to it -- which we of course do, by the tens of millions.
Silicon Valley has gone from fearing Microsoft as a monopolist to ignoring it as a zombie. Microsoft famously was so preoccupied with its Windows monopoly that it missed the rise of the Internet. It then missed mobile. Ballmer, suddenly awakened, frantically purchased Nokia, evidently on the theory that two stones sink slower than one. He launched Windows Phone in the charming hope that what iOS and Android developers, handset makers, and users really needed was a third mobile platform. With Bill Gates focused on philanthropy ever since the stock peaked and Ballmer treated as the clown prince of software, many technologists simply forgot about Microsoft over the last decade.
Which is why recent mobile software from Microsoft is so interesting. Although its new CEO cannot revive a dead man to the extent that "Start Me Up" suggests, Microsoft is nonetheless showing signs of life. The iOS version of Office is not only excellent, it is free. Microsoft will require a subscription only for devices with screens over 10 inches, so laptops and pro tablets will pay, but phones and normal tablets are free. And since a five-license family subscription costs only $9.99/month, Microsoft is setting prices at much more realistic levels.
On iOS, Office is now easier to use and better integrated with cloud storage than Apple's own Pages, Keynote, and Numbers. It has replaced the native Apple applications on my phone and tablet. Outlook, which releases a trial 2016 version for OS X and Windows tomorrow, will hopefully bring some of its mobile innovations to the desktop.
As Microsoft comes to life, Google is suddenly facing a real threat. Chrome, Google's browser, owns about two-thirds of all desktop traffic because it is free, stable, and fast -- and because Internet Explorer, which Microsoft gives away, was for many years plain goofy.
But desktop browsing is yesterday's market; today's game is mobile. We are approaching a time when every human on the planet will have a smartphone and billions of dollars will be made and lost in services and ads. Which is why a small feature of iOS 9, support for robust ad blockers, is a big deal. In my experience, Safari with ad-blocking enabled transforms the mobile web from endlessly irritating to quite usable.
Apple did not enable ad blocking simply because they thought consumers would like it better -- although we do. They did it to stick a knife in the ribs of Google's only real source of revenue. They appreciate that Google cannot happily enable ad blockers in Chrome. Indeed, Google has responded by disabling popular extensions that block ads, planning new subscription services for YouTube, and launching Google Contributor, which lets users pay Google $2, $5, or $10 a month, depending on how much ad blocking they want.
Will this work? Will anyone use Chrome if it cannot block ads? Will Apple enable deep ad-blocking in Safari and host free user-generated videos to put a bullet in YouTube? Yeah, they will. I doubt that Apple will kill Google -- but I have no doubt that they will try and I have serious doubts that this week's campaign of moral pleading launched by Google will do anything but make them look pathetic.
A few years ago, a reporter noted Google's dependence on advertising revenue and asked then-CEO Eric Schmidt whether Google was a "one-trick pony". "Yes," Schmidt responded, "but it's a very good trick". With Apple now exposing Google to a bit of full-contact capitalism, we will soon know whether or not Google can learn new tricks.
Cities around the world have declared their intent to become "the next Silicon Valley". New York's Silicon Alley, Austin's Silicon Hills, Portland's Silicon Forest, London's Silicon Roundabout, New Zealand's Silicon Welly, Louisiana's Silicon Bayou, Israel's Silicon Wadi, Scotland's Silicon Glen, and Kenya's Silicon Savannah testify to the power of this idea. Promoters have even resorted to puns or worse, e.g., Santiago's Chilecon Valley, Philadelphia's Philicon Valley, and (you cannot make this up) Cape Cod's Silicon Sandbar.
But why not? After all, every city knows the key ingredients. Why shouldn't an ambitious town simply round up a bunch of entrepreneurs and venture capitalists, stir in some startup lawyers, accountants, and angel investors, recruit a bunch of engineers who want lower cost housing, and build ties with the local university? How hard can it actually be?
The fundamental confusion is between emergent systems, which are organic, unplanned, and uncontrolled, and engineered systems, which are linear and guided. In their book Competing on the Edge: Strategy as Structured Chaos, Shona Brown and Kathleen Eisenhardt offer a useful metaphor: rebuilding a prairie. We know all of the ingredients of a prairie. We understand precisely the dozens of plant and animal species that comprise the ecosystem that once stretched from the Rockies to the Mississippi. They point out, however, that even with perfect knowledge, if you were to acquire land near O'Hare airport, prepare the ground, and introduce the appropriate plants and animals, what you ended up with would not be a prairie. Indeed, it might be nothing like a prairie. (And yes, we have a Silicon Prairie, somewhere in Nebraska I think.)
It turns out to be very difficult to re-create an ecosystem, even when we know all the ingredients. To start with, emergent systems are grown, not assembled. And they are not grown from scratch. The actual starting point matters because emergent systems are highly path dependent: past choices shape and constrain future ones. That means that simply introducing seeds and prairie dogs into an acre of land is more likely to result in a patch of weeds than "amber waves of grain".
Worse, we usually don't quite know all of the ingredients of most organic systems. Some are highly contextual (meaning your required ingredients and mine may vary) and some are contingent (they work only some of the time, mainly because our understanding of them is imperfect). The sequence in which you introduce the ingredients also matters -- much like a soufflé, which collapses unless the beaten egg whites are added last.
Technology regions and prairies are two examples of complex, emergent systems. There are many others, including companies and markets as well as governments and polities. As these organizations grow, the number of components rises and the number of possible interactions between those components explodes combinatorially. They become more complex, organic, and self-organizing -- which means you cannot predict how these systems will evolve, much less reproduce that evolution once it happens.
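The growth in interactions can be made concrete with a minimal sketch (my illustration, not from the book): among n components there are n(n-1)/2 possible pairwise interactions, so a tenfold increase in components produces roughly a hundredfold increase in possible interactions -- and that counts only pairs, not higher-order combinations.

```python
def pairwise_interactions(n: int) -> int:
    """Number of possible pairwise interactions among n components."""
    return n * (n - 1) // 2

# Components grow linearly; possible interactions grow roughly with n^2.
for n in (10, 100, 1000):
    print(n, pairwise_interactions(n))  # 10 -> 45, 100 -> 4950, 1000 -> 499500
```

This is why no planner can hold a mature ecosystem in their head: the interactions, not the components, dominate.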
Can systems like this be led? They can be guided successfully only by leaders with a deep appreciation of unintended consequences. In any emergent system, the second and third order consequences of any decision are likely to overwhelm the intended first order effects. You can even look at your life as an emergent system, which is why, as Steve Jobs famously noted, you can connect the dots into a coherent post hoc narrative looking backwards, but you cannot "connect the dots going forwards", i.e., predict anything very meaningful about your life long before it happens.
Douglass North is a wonderful economist who understands the organic and emergent nature of economic systems better than most of his fellow practitioners. He shared the Nobel Prize in Economics in 1993 for documenting the confounding role that institutions, culture, and history play in economic outcomes. Shortly after receiving this award, he was asked whether, since institutions matter so much, he had any advice for Russia.
He thought for a moment and replied: "Get a new history". That is one starting point for any city or region looking to start the next Silicon Valley. The other is provided by Emily Dickinson, who gave a lot of thought to how "To make a prairie" (poem 1755). Her famous verse, worth contemplation by the ambitious bees of the world:
To make a prairie it takes a clover and one bee,
One clover, and a bee.
The revery alone will do,
If bees are few.
As the value of an automobile moves to its electronics and cars become networked, self-driving computers, will the car industry move to the west coast? Consider:
- Google now logs more than 10,000 miles weekly with self-driving cars — more than a million miles to date. Most of this is in Mountain View (and, more recently, Austin). Nobody else knows as much about navigating self-driving cars and building the underlying mapping technology as Google.
- San Francisco-based Uber is widely expected to begin testing driverless car services. California’s recent move to classify drivers as employees is likely to accelerate this effort. Few can match Uber’s expertise at quickly matching drivers with riders in hundreds of markets worldwide.
- Electric car pioneer Tesla, based in the former NUMMI plant in Fremont, is widely expected to offer driving services that compete with Uber.
- Apple is unlikely to sit this one out — although the extent to which it aspires to build self-driving cars is unclear. The Wall Street Journal reported this week that Apple plans to ship a car in 2019 and has been hiring industry veterans for the secret project (code name Titan, which sounds slightly Titanic).
Ben Evans at Andreessen Horowitz has given some thought to how cars are evolving. He makes several useful points:
- The current focus is on building an autonomous car that can drive down the street and not hit anything. This is the Google skill — and maps are critically important. Maps, he points out, have moved from discretionary accessory to vital necessity.
- As autonomous cars grow, they are likely to be deployed as a service by companies that can optimize a fleet of autonomous on-demand cars in a city on a real-time basis. This takes NetJets-style optimization skills and seems well suited to Uber or Google. It also radically reduces the number of cars (and parking spaces) the world will need. The industry will shrink.
- The buyer, and thus the car, is likely to change — and this will change the cars. Cars will become simpler, partly because they are increasingly electric. At first they will still have a steering wheel. Eventually, however, they may not have brake pedals or windshield wipers. And if you are summoning a fleet car, cars will be purchased not for Tesla-elegant styling but by corporate fleet managers, much like PCs are today.
Even high tech cars will remain a large business. Indeed, that’s what attracts Apple and Google. There are simply not many industries left to revolutionize that can move the dial on companies with revenue measured in the hundreds of billions of dollars. Evans’ chart showing the place of the $1.2 trillion global car industry in the minds of technology CEOs is instructive.
As Evans notes however, this is today’s market — the one that looks to get smaller.
“Some analysts are talking about unit sales halving over time (with growing demand from China and other newer markets offsetting new technology). Meanwhile, moving to electric can reduce the price of a car, or of course (Apple’s preferred option) expand margins.”
He notes that even if Apple goes after the premium car segment, as seems likely,
“the bubble on the chart above shows Mercedes-Benz, BMW, Audi and Lexus, which combined sell 5-6m cars a year for $220bn in revenues (and so averaging $40,000 per car). That’s where Tesla is aiming now, and where one might expect autonomous cars to arrive first. For comparison, iPhone revenue in the last 12 months was $146.5bn….To look at that another way, if Apple created a car business as big as BMW and Mercedes combined, that business would generate less profit than the iPhone.” (my emphasis)
In short, yes — Silicon Valley will be the next Detroit. Seat belts advised — this will be a wild ride.
My reading for 2014 was more focused for the same reason my blogging has been reduced -- I have a day job. Also, two of the books were real projects, entailing runs to the sources, blogosphere, and Wikipedia for background, context, and understanding. These books were demanding (long and in places technical) and singular -- each representing a new peak for its genre. I enjoyed both Thomas Piketty's 696-page Capital in the Twenty-First Century and Andrew Roberts's 976-page Napoleon: A Life enough to work through them both.

Piketty is of course the French economist who appears to be famous everywhere except France (a problem he corrected this morning by refusing the Légion d'honneur, declaring that "it is not the job of government to determine who has honor". Spoken like a French historian). Piketty has wrestled more deeply and seriously with the problem of inequality than any previous economic historian. He has actually produced three books in one: an amazing history of inequality based on European census data rarely deployed for these purposes; an economic analysis of the causes of inequality, arguing that if r, the rate of return on assets, exceeds g, the overall rate of economic growth, family wealth will grow faster than the economy and can become gigantic (a widely derided formulation, but well argued and defended); and a set of recommendations for taxing wealth that are largely laughable -- as he all but concedes.

Roberts's book doubles as a history of nineteenth-century France and, for that matter, Europe. Napoleon led not only a remarkable and consequential life, but one that shaped a great deal of modern Europe. He wrote some 30,000 letters during his lifetime, and Roberts is the first scholar to have access to the full pile, lucky guy. The book is compelling and very well written. If anything, it needed to be longer.
Both books are worthwhile investments, even if both will occasionally leave you screaming -- Piketty for what he fails to understand about business and technology, Roberts because his subject can be so simultaneously brilliant and boneheaded. (Moscow? Really?) I read from several other categories: books on startups, business and economic history, math and science books, and pulp detective fiction, especially noir. Here are the winners:

Startups

Entrepreneurship is in the air -- often comically. Books that fanned the startup flames this year included Brad Feld's Venture Deals: Be Smarter Than Your Lawyer and Venture Capitalist, the remarkable Ben Horowitz's The Hard Thing About Hard Things: Building a Business When There Are No Easy Answers, and Sean Ellis's Startup Growth Engines: Case Studies of How Today's Most Successful Startups Unlock Extraordinary Growth. All three of these writers are deeply experienced in the trials of early-stage companies and all have useful lessons to teach. Feld offers a field guide to the legal aspects of starting a company, to which I would say simply: read his book, then hire a competent lawyer. Horowitz writes excellent columns, which are less compelling when assembled into a book. Sean Ellis has put together a useful set of lessons around specific startups he has worked with.

The top startup book of 2014 was Ash Maurya's Running Lean: Iterate from Plan A to a Plan That Works. This, finally, is the book that Steve Blank should have written instead of his groundbreaking The Four Steps to the Epiphany -- which emerged as one of the worst expressions of great thinking ever put to print. In most cases bad writing is the soft underbelly of bad thinking -- but Blank proves that sometimes you just need an editor. Eric Ries, who has built a franchise around "Lean Startups" by rewriting Blank and adding ideas like LAMP, does not come close to producing the how-to manual for a startup team that Ash Maurya has written.
This really is the go-to book for early-stage technology entrepreneurs. You won't get much practical advice from Zero to One: Notes on Startups, or How to Build the Future by the estimable Peter Thiel. Parts of this book will delight you, and some of it should outrage you. The book derives from Thiel's widely followed Stanford course, which was capably blogged by his coauthor, Blake Masters. It has moments of brilliance, however -- notably his description of how companies actually compete and his relentless search for companies that go from zero to one (bringing something altogether new into the world) as opposed to from one to many (scaling up derivative businesses). Marc Andreessen, who needs to write a book and is smart enough to write a very good one, likes to say that Peter Thiel is always half right. Seems correct -- and the half that is right is also highly (Zero to One) original.

Business History

Nominations include the aforementioned Dr. Piketty, Anita Raghavan's The Billionaire's Apprentice: The Rise of The Indian-American Elite and The Fall of The Galleon Hedge Fund, Bryce Hoffman's well-told American Icon: Alan Mulally and the Fight to Save Ford Motor Company, John Brooks's Business Adventures: Twelve Classic Tales from the World of Wall Street, Martin Wolf's The Shifts and the Shocks: What We've Learned -- and Have Still to Learn -- from the Financial Crisis, Robert Litan's Trillion Dollar Economists: How Economists and Their Ideas Have Transformed Business, William Rosen's The Most Powerful Idea in the World: A Story of Steam, Industry, and Invention, and Vaclav Smil's Making the Modern World: Materials and Dematerialization. Raghavan, Hoffman, and Brooks give us well-told tales. The disgusting and criminal behavior of my former McKinsey colleagues, the turnaround of Ford Motor, and the Bill Gates-endorsed collection of New Yorker business essays all make for great and educational reading.
Martin Wolf and Robert Litan wrote exceptional books for those interested in financial economics and another post mortem on recent financial crises -- although I had my fill of those in 2012-13. And I have an obvious soft spot for the sort of well-researched, highly readable economic and technology histories that Rosen and Smil have written. In a competitive year, I pick Marc Levinson's The Great A&P and the Struggle for Small Business in America, Brad Stone's The Everything Store: Jeff Bezos and the Age of Amazon, and Steven Johnson's How We Got to Now: Six Innovations That Made the Modern World as the year's best books of business history. Levinson's book does not quite rise to the level of The Box, but it is nonetheless a really well-told tale of retail innovation -- a story that Brad Stone picks up with the history of Amazon. Based on my own front-row seat at some of the events he describes, Stone gets a lot of things right about Amazon's culture and history. It is an amazing company, even if not always an attractive one. Steven Johnson exceeded my low expectations for a book made into a PBS show and structured around seemingly random innovations. But the book works and brings with it more delightful insights per page than any in recent memory. You cannot know in advance how the sack of Constantinople will lead directly to telescopes, but Johnson traces the path with confidence without inferring causality where none exists. Great read.

Math and Science

This year's contenders are Alan Lightman's The Accidental Universe: The World You Thought You Knew, John Brockman's Thinking: The New Science of Decision-Making, Jordan Ellenberg's How Not to Be Wrong: The Power of Mathematical Thinking, and Joshua D. Angrist's Mastering 'Metrics: The Path from Cause to Effect. Lightman is an unusual writer, the first professor to receive appointments at MIT in both the sciences (he is a physicist) and the humanities.
He offers looks at the universe from several perspectives -- not all equally successful. His opening chapter, entitled "The Accidental Universe", is the strongest and by itself a remarkable read. Brockman is closely associated with the Edge, a foundation that brings together thinkers from a wide range of disciplines. His book touches on developments in neuroscience, decision theory, linguistics, problem solving, and more, but consists mainly of unedited transcripts of informal discussions or presentations, presumably at Edge conferences he has hosted. This gives the book a stream-of-consciousness feel, and leaves it vulnerable to rambling, repetition, and superficiality. Ellenberg writes well and not especially technically. A fine book overall but, as with Lightman, the first chapter (on Abraham Wald's lessons about the value of constantly looking for reasons why you could be wrong) has the strongest material. Angrist's book on econometrics is wonderfully organized and well written -- but the higher math got away from me. I liked it even if I did not understand it all.

The best book in this category goes to Nate Silver for The Signal and the Noise: Why So Many Predictions Fail -- but Some Don't. The book is not good because Silver famously called the Presidential election correctly -- it is a genuinely good summary of the science of prediction, which is a vital part of science and business, not just politics. But prediction is difficult, "especially concerning the future", as Niels Bohr famously noted. We fall prey to cognitive biases and often mistake noise for signal. Silver does an excellent job of walking a general reader through the swamp.

Social Criticism

I am a sucker for books on causes or those written to expound a strong point of view.
This year's pile included Adam Minter's Junkyard Planet: Travels in the Billion-Dollar Trash Trade; two books on the economics of higher education -- Elizabeth Armstrong's Paying for the Party and Joel Best's The Student Loan Mess: How Good Intentions Created a Trillion Dollar Problem; Megan McArdle's The Up Side of Down: Why Failing Well Is the Key to Success; Michael Pollan's Second Nature: A Gardener's Education; Jonathan Safran Foer's Eating Animals; and Steve Levitt's Think Like a Freak: The Authors of Freakonomics Offer to Retrain Your Brain. Minter is a great writer, as any New Yorker reader knows. Trash matters -- but ultimately not enough to keep me interested. The higher-ed books both fail to segment the problem: some higher education is really valuable and essentially self-financing, and much is not. Average data tells you little, except that the debt problem is unsustainably large. McArdle knows her Dylan: "she knows there is no success like failure, but that failure is no success at all". She fails to deliver a book's worth of insights -- although she is a fine writer and terrific blogger. Pollan's book is great, period. Safran Foer somewhat crudely attempts to moralize factory farming -- a topic that others, notably Pollan, have addressed far more effectively as an omnivore's dilemma instead of a vegetarian manifesto. Levitt's book is Freakonomics II -- good, clean behaviorist fun, just like the first one.

The winners in this category are James Fallows's China Airborne and Tyler Cowen's Average Is Over: Powering America Beyond the Age of the Great Stagnation. Fallows is a great essayist and social thinker -- you can learn from almost anything he writes, and when he combines his love of China with his love of airplanes, run, don't walk, to read this book.
Tyler Cowen, whose Marginal Revolution blog is indispensable to economic thinkers, has exposed the forces that underlie as much of the inequality problem as r>g -- that in many fields, a huge share of the income goes to the top talent, and that technology appears to make this problem worse, not better.

Pulp Fiction

Now to the fun stuff. I binge-read police procedurals and detective fiction, especially noir. In earlier years, I pigged out on the complete Raymond Chandler, Lee Child, and Michael Connelly. This year I happily read the complete Barry Eisler, whose ten or so John Rain novels, mostly set in Tokyo, are terrific escapism and helpfully move noir fiction out of Los Angeles. Rain is an attractive character (evidently to be played by Keanu Reeves in forthcoming movies) who combines the mandatory brooding nature and love of scotch, jazz, and beautiful women with a wonderful introspection and deftness at the killer's craft. Eisler is, of all things, an accomplished Silicon Valley attorney with an obvious love of Japan and the genre. Highly recommended. (Note that Eisler decided, unhelpfully, to rename all of his novels, so they all have new pub dates and you will need to search a bit to figure out the sequence, which matters. Pro tip to Barry: next time, number the titles.)

In both the US and in Europe, the use of bicycles in cities has shot up. According to the League of American Bicyclists (which endorses none of what follows), bike use has gone up 39 percent nationally since 2001. In the seventy largest US cities, commuter bike use is up 63 percent. Leading the pack is San Francisco, where I bike to work most days; Chicago, New York, and Washington have also seen huge increases. European cities, which generally had a head start, have also seen an increase in bike commuting. The growing ubiquity of city bikes (public rental bikes designed for short urban commutes -- in London, "Boris Bikes", after the mayor who sponsored them) has accelerated this trend.

Cycling is safe.
Mile for mile, your odds of dying while walking or cycling are essentially the same. Surprisingly, even with more cyclists on the road, fewer cyclists are getting killed by cars. From 1995 to 1997, an average of 804 cyclists in the United States died every year in motor-vehicle crashes. During an equivalent three-year period from 2008 to 2010, that average fell to 655. (The number rose again in 2011; it is not clear why.) The credit does not appear to belong to bike helmets, which continue to generate serious debate. On balance, they seem to prevent death from skull fractures but do little to prevent brain injury from concussion. Traffic laws that slow cars down make a big difference. According to the Economist, dying while cycling is three to five times more likely in America than in Denmark, Germany, or the Netherlands, mainly because American cars travel at more than 30 mph. Europe frequently has traffic "calming" laws to slow cars down when bikes are nearby. Slowing traffic helps pedestrians too: a pedestrian hit by a car moving at 30 mph has a 45% chance of dying; at 40 mph, the chance of death is 85%, according to Britain's Department for Transport. The British seem to gather better national data on cycling accidents than anyone else, although they appear to be far worse statisticians (they unhelpfully conclude that most bike accidents occur during those times when people ride bikes, for example). Nonetheless, they document a finding that will surprise no experienced urban cyclist: "Almost two thirds of cyclists killed or seriously injured were involved in collisions at, or near, a road junction..." In other words, cars kill cyclists at intersections. Knowing this, the single largest safety priority of every urban cyclist must be to avoid cars where possible and yield to them where not. Making this your number-one safety priority brings with it some surprising implications.
- Distrust green lights. More cyclists die at green lights than red ones because more cyclists enter intersections on green lights. Too often they believe they are safe, but at a green light, a car motoring beside you will suddenly turn right, or an oncoming car will turn left without signal or notice. Pedestrians will jaywalk your green light. Green lights are dangerous, partly because they appear safe. A smart cyclist treats a green light like a yellow one: slow down and prepare to stop.
- Proceed on red if the intersection is empty. Remember, your top priority is to avoid cars, which at red lights are politely standing still. If it is a normal city intersection that is empty and you can see that nobody is approaching from any direction, then go. Ride through the intersection and enjoy 100-200 yards of traffic-free riding. You are very unlikely to die at an empty intersection. Note that this rule emphatically does not apply to high-speed intersections, which cyclists should simply avoid altogether. Nor is this a strategy for speeding up your trip: like drivers, cyclists in a hurry are a menace.
Talk to a book author, publisher, or retailer about the future of their business and the denials begin. "Never before have there been so many good books to read". "Books are the backbone of civilization". "Life without books is unimaginable". As a cultural argument, this may be true, but as an economic one, this is a view only of the supply side. An alleged ancestor of mine once observed that "facts are stubborn things". Forget for a moment about our beloved books. Imagine a product you care little about: chemicals perhaps, motorcycles, or furniture. Imagine the industry that produces this product: the designers, manufacturers, marketers, distributors, and retailers. You can determine the health of this industry with a few key measures. We can apply these same vital signs to the publishing business to determine whether our loved one has long to live.
Sales. Not every industry with declining sales is in trouble, but most are. According to BookScan, adult nonfiction print unit sales peaked in 2007 and have declined each year since. Retail bookstore sales peaked the same year and have also fallen each year, according to the U.S. Census Bureau. I sold an online book business I had started at about that time for a simple reason: I couldn't figure out how to keep growing it. E-books are growing fast but do not close the gap. Print sales dropped 17% from 2010 to 2011, while e-book sales grew 117% (I have borrowed liberally from a nice fact set gathered here). The net result was a 5.8% decline in total book sales, according to the Association of American Publishers. Combined print and e-book sales of adult trade books fell by 14 million units in 2010, according to the Book Industry Study Group.
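As a back-of-envelope check (my arithmetic, not a figure reported by the AAP), those three percentages are mutually consistent only if print still accounted for roughly 92% of sales going into the period -- which is exactly why triple-digit e-book growth cannot close the gap:

```python
# If print fell 17% and e-books grew 117%, yet total sales fell 5.8%,
# the implied starting print share s satisfies:
#   s * (-0.17) + (1 - s) * 1.17 = -0.058
# Solving for s:
print_share = (1.17 + 0.058) / (1.17 + 0.17)
print(round(print_share, 3))  # ~0.916: print was still over 90% of sales
```

A segment growing 117% from an 8% base adds less than it sounds; the 92% base shrinking 17% dominates the total.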
Unit economics. OK, but you can make good money in shrinking industries. You may sell fewer items, but make more on each item you sell. So long as you add more value than cost, customers will happily pay you for your product, even in a shrinking market. So what is happening to the unit economics of books? Start with prices. Book prices have gone up every year for more than ten years -- a good sign, right? Not necessarily, because the number of books sold continues to shrink, as noted above, while the number of books published is exploding. Bowker reports that over three million books were published in the U.S. in 2010. Of these, 316,480 were new traditional titles -- meaning that publishers introduce 867 new books every day. But that is just the traditional tip of the publishing iceberg: more than 2.7 million books, over 90% of those published in 2010, were "non-traditional" titles. These are mainly self-published books, reprints of books in the public domain, and resurrected out-of-print books. They vary enormously. Some become best-selling light porn for housewives. Others are spam created by software that pirates an existing title in a few hours (one professor wrote a program that has produced 800,000 specialized "books" for sale on Amazon). Others are highly specialized books not attractive to a publisher. When Baker & Taylor reports that book prices are going up, it is describing traditional, not "non-traditional", publishing. What about costs? The cost of bringing a traditional book to market is high and largely fixed, so the declining unit sales of the average book are a huge problem. According to BookScan, the best count we have, Americans bought only 263 million adult nonfiction books in 2011, meaning that the average U.S. nonfiction title now sells fewer than 250 copies per year (collectors of first editions take note: it is second editions that turn out to be rare!). Only a few titles become big sellers.
In an analysis of a random sample of 1,000 business books released in 2009, only 62 sold more than 5,000 copies, according to the New York Times. So we have an industry that is shrinking and being divided among many more products and players, few of which can make money. Not a good sign.
Marketing and distribution. Well, maybe better marketing and more rational distribution can target new customers and grow demand while reducing costs. It happens in hard goods and industrial businesses all the time. Can we fix the book market? Investments in marketing are very tough to justify. A publisher needs to acquire the book (pay an advance to the author), then develop (edit) it, design, name, print, launch, distribute, warehouse, sell, and handle returns (about a quarter of all books flow backwards from retailer to distributor or publisher -- a huge cost avoided by e-books). After all of this, the average conventional book generates only $50,000 to $200,000 in sales, which radically limits how much publishers can invest in marketing. Increasingly, publishers operate like venture capitalists, putting small amounts of money to work to see what catches on and justifies additional investment. Only proven (or outrageous) authors attract large marketing budgets. Increasingly, book marketing is done by authors, not publishers. But how does an author market a book? The only way she can: to her friends and community. With too much to read, we read what our friends advise (women especially read with their friends; publishers are intensely interested in what reading groups choose to read in an era when there is no general audience for nonfiction and fiction is highly segmented). Some products catch on and a few become blockbusters -- even some books that begin as unconventional titles without publishers. So marketing is tough to fix -- how about distribution? It’s a disaster. Retail is hopeless: your chances of finding any given book in a bookstore are less than 1%. For example, there are about 250,000 business titles in print. A small bookstore can carry 100 of these titles; a superstore perhaps 1,500. Stores are for best-sellers -- if there are any. Online retailers can carry every title, but really, who needs 250,000 business books?
Online selling solves this problem, but enjoys such massive returns to scale that concentration is unavoidable. In the latest count, Amazon had a 27% share of all book sales (including, I estimate, about two thirds of all e-book sales -- meaning that it dominates the only part of the book business that is growing).

Underlying demand. OK, fine -- the industry is broken. But music was busted too: retailers evaporated, product proliferated and went digital, the labels' value added shrank, and piracy was a much bigger issue than in books. And despite it all, music is making a comeback: sales are up if you count concerts and ringtones. A few people make a living at it and a few become stars. Is this the future of books? It doesn't look that way, for reasons articulated by Steve Jobs: although people still listen to music, many people have simply stopped reading books. Speaking about the Amazon Kindle, he argued:
It doesn’t matter how good or bad the product is, the fact is that people don’t read anymore. Forty percent of the people in the U.S. read one book or less last year. The whole conception is flawed at the top because people don’t read anymore.
Ouch. Setting aside Jobs's known tendency to dismiss technologies that he later pursued, is demand for books actually dropping? Are we even reading the books we buy? The evidence is overwhelming that we read fewer books than we used to. As summarized nicely in the New Yorker, the National Endowment for the Arts has since 1982 teamed with the Census Bureau to survey thousands of Americans about our reading. When they began, 57% of Americans surveyed claimed to have read a work of creative literature (poems, plays, narrative fiction) in the previous twelve months. The share fell to 54% in 1992, and to 47% in 2002. Whether you look at men or women, kids, teenagers, young adults, or the middle-aged, we all read less literature and far fewer books. This is not a small problem, nor is it confined to the book business. The N.E.A. found that active book readers are more likely to play sports, exercise, visit art museums, attend the theater, paint, go to music events, take photographs, volunteer, and vote. Neurologists have demonstrated that the book habit builds a wide range of cognitive abilities. Reading grows powerful and important neural pathways that not only make reading easier as we do more of it, but enable us to analyze, comprehend, and connect information. But for the first time in human history, people all over the world are reading fewer books than they used to. Faced with compelling media alternatives, humans everywhere are abandoning the book. We read, but we are losing the habit of reading deeply. Having conquered illiteracy, we are now threatened by aliteracy. Reviving the book industry is only possible if we can revive the book itself.
- Deliberate discrimination. Don't kid yourself -- this happens a lot. Goodyear Tire famously paid women less than men who did the same work and took pains to conceal the policy. The Lilly Ledbetter Equal Pay Act resulted when Ledbetter discovered the facts and sued Goodyear. It is interesting to ask how the story would have been different if the fact of discriminatory pay had been known all along. Would women have demanded raises? Or would they have changed jobs and, if they did, would the result have been higher or lower aggregate pay for women?
- Motherhood-induced time off and reductions in work hours. Part of the reason that women earn less is that they take more time off to raise kids. When women first graduate, pay differences are at their smallest, for reasons we will touch on later. By age 30, however, significant differences emerge. Harvard's Larry Katz and Claudia Goldin joined with Chicago's Marianne Bertrand to follow nearly 3,000 M.B.A.s over 15 years. The women started off making 88 percent as much as men, but 10 to 15 years later, they made only 55 cents for every dollar of men's pay.
The scholars accounted for differences in grades, course choices, and previous experience. Their conclusion: kids kill careers. They found that the women’s pay deficit was almost entirely because women interrupted their careers more often and tended to work fewer hours. The rest was mostly explained by career choices: for instance, more women worked at nonprofits, which pay less. A subsequent study by scholars at CUNY, also published by NBER, largely confirmed this finding.
This explanation dodges the underlying question of why the financial penalties for taking time off are so high. After all, if between age 30 and 35 I take a year of maternity leave and then work four days a week for six or seven years, I might sacrifice two years of work experience between age 30 and 40 -- meaning that as a 40-year-old woman, I have the same experience as a 38-year-old man who contributed zero to raising his kids. Is there really something so magical about the fourth decade of life that missing some work justifies a permanent economic penalty? A Labor Department study completed in 1992 concluded that time off for career interruptions explains only about 12% of the gender gap (not counting part-time work and experience effects and, unlike Goldin et al., without focusing only on MBAs).
There remains, of course, the threshold question of whether women should be the default caretakers and disproportionately bear the professional cost associated with raising children. In many households, of course, they do not -- but this is still the exception.
- Experience gaps. People with more experience are paid more when experience translates into superior productivity. Women who leave the workforce to have children pay twice: once because they work fewer hours, and a second time because they accumulate less experience. This is reflected in their earnings when they return. The gap should close once a woman has been back at work for a while; research indicates that it takes about four years if she returns to work full-time, which many do not.
- Occupational differences. To the extent that some occupations are more heavily male and higher paid while others are more heavily female and lower paid, earnings gaps may indicate a problem of occupational mix. As women enter traditionally male occupations, as has occurred very widely during the past 40 years, these effects lessen. Some scholars figure that most of the reduction of the gender gap during the last four decades is due to women entering traditionally male occupations; others conclude that only a third of the gap was closed this way.
Most studies do not ask why a profession earned less money to start with. After all, pay in many professions (including teaching) declined as they became more female and pay in some current professions (including law) appears to be going through something similar. Scholars who study these differences often have trouble sorting out historic patterns of gender discrimination from productivity or skill related pay differences.
- Attraction to less successful industries or firms. Some industries and some companies pay both men and women less than others do. If women choose these industries or companies disproportionately, their average earnings will be lower than men's average earnings, even if they receive equal pay for equal work. Lower productivity industries, notably service-intensive retail, education, and some health care, pay both women and men less than higher productivity industries do. The problem, of course, is that pay gaps persist within industries, not simply between them -- but it is the averages that make for enticing infographics. A Labor Department study completed in 1992 concluded that 22% of the gap between men's and women's earnings could be explained by variation in industries.
In some cases, women also seem to choose firms within an industry that pay both men and women less (perhaps because they offer more flexible work arrangements). Janice Madden studied women stockbrokers, for example, whose pay is strictly performance-driven. She documented that although women were assigned inferior accounts (and performed as well as men when they were not), a relatively small share of the total pay gap was the result of this unequal treatment. Although the industry paid women quite well, women were more likely than men to work in smaller, less successful brokerages.
- Lower expectations and inferior bargaining. Most women do not bargain for their salaries as aggressively as do most men. I once had to explain to a VP I had hired that I had expected her to counter my initial compensation offer, not simply to accept it. As a result, I was underpaying her relative to her colleagues and industry norms. I was mildly annoyed as I explained that I would pay her more than she agreed to but expected her to take a more aggressive view of her economic value in the future.
There is plenty of evidence that this example was not unusual, even though my response probably was. The problem begins with expectations: women expect to be paid less than men do. A 2012 survey of 5,730 students at 80 universities found that women expected starting salaries nearly $11,000 lower than their male classmates did. Women veterinarians, who bill their own clients at rates they set, were found to set their prices lower than their male colleagues and to "relationship price" more frequently, meaning they do not charge friends or clients for small amounts of work. A similar effect occurs in law firms, where a lucrative partnership often depends on billed hours. The most prominent scholarly work in this area is by Linda Babcock at Carnegie Mellon, whose book title captured her major finding: Women Don't Ask. Babcock realized the problem when she noticed that the plum teaching assistant positions at her university had gone to men, who had bothered to ask about them, not to women, who expected them to be posted somewhere.
The effect on women of not negotiating is huge. According to Babcock, women are more pessimistic about how much is available when they do negotiate, so they typically ask for, and get, less -- on average, 30 percent less than men. She cites evidence from Carnegie Mellon master's degree holders that eight times more men than women negotiated their starting salaries. These men were able to increase their starting salaries by an average of 7.4 percent, or about $4,000. In the same study, men's starting salaries were about $4,000 higher than the women's on average, suggesting that the gender gap in starting salaries might have been closed had more of the women negotiated. Over a professional lifetime, the cost to women of not negotiating was more than $1 million.
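A starting-salary gap compounds because raises are usually a percentage of current pay. The sketch below is my own illustration of that mechanism, with an assumed raise rate and career length rather than Babcock's actual model (which also counts repeated non-negotiation at job changes and forgone investment returns on the difference):

```python
def lifetime_cost(initial_gap=4000.0, annual_raise=0.03, years=38):
    """Cumulative earnings lost when a starting-salary gap is carried
    forward by identical percentage raises over a full career."""
    gap, total = initial_gap, 0.0
    for _ in range(years):
        total += gap             # this year's shortfall
        gap *= 1 + annual_raise  # percentage raises widen the absolute gap
    return total

print(f"${lifetime_cost():,.0f}")  # roughly $277,000 in nominal dollars
```

Layer on the effects Babcock includes and the seven-figure lifetime estimate becomes plausible.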
Fortunately, this is pretty easy to fix. Women can learn quickly that everything is negotiable. The Jamkid pointed me to a recent investigation by his teacher John List at the University of Chicago showing that, given an indication that bargaining is appropriate, women are just as willing as men to negotiate for more pay. List finds that men remain more likely than women to ask for more money when there is no explicit statement in a job description that wages are negotiable.

Although legislation and litigation will surely be useful to discourage and penalize employers who systematically discriminate against women at scale, as Walmart is alleged to have done, most of the forces that contribute to inappropriately low pay for women will not be remedied in court. Two policy remedies, however, could make a large difference and are politically achievable.
- Paid parental leave. Paid leave for new parents is essentially a tax on non-parents (we already tax non-parents by allowing parents to deduct children as dependents). This makes sense -- we want to reduce the economic penalty associated with having children. Every modern country except for the United States grants mothers and fathers the right to take time off work with pay following the birth or adoption of a child. European countries provide a period of at least 14 to 20 weeks of parental leave, with 70 to 100% of wages replaced. These countries also provide paid parental leave subsequent to this, although the duration of the job-protection and leave payment differs substantially across nations. The total duration of paid leave exceeds nine months in the majority of advanced countries. Canada, for example, currently provides at least one year of paid leave, with around 55% of wages replaced, up to a ceiling.
California is the only state that currently requires paid parental leave. Initial evidence suggests that the act has roughly doubled maternity leave, from three to seven weeks (barbaric by European standards), and raised the wages of new mothers by 6-9%. It's a start, but we should join the modern world, and perhaps follow Denmark, which last I checked required that husbands take equal time away from work on the birth of a child in order to minimize the long-term impact on women's earnings. To those who worry that this subsidizes overpopulation on a capacity-constrained planet, I would point to declining birth rates throughout Europe and the realization, which is slowly beginning to dawn on the world, that at the moment we face far more threats from low birth rates than from high ones.
- Structured disclosure. The second step we could take is to force employers to disclose wage disparities among people who do the same work. This requirement needs a careful touch and FASB/SEC rulemaking, but if a public company had to disclose the difference in pay between men and women by occupational category, it would quickly become a benchmark that everyone from board members to managers would need to look at and live with. Scholars and consultants would quickly compare businesses. Websites like Glassdoor would post comparisons. Companies would feel pressure either to justify disparities with additional data (showing, for example, that men had more experience or more training within the same occupational category) or to face demands to explain themselves. Women would be emboldened by data to ask for their due. These metrics would not be confounded by industry or occupation differences, nor would they disclose salaries -- only the percentage difference between men's pay and women's.
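Mechanically, the disclosure is simple. The sketch below is a toy illustration with invented records (not any proposed FASB/SEC format): it computes the percentage gap by occupational category without revealing any individual salary:

```python
from collections import defaultdict

def pay_gap_by_category(records):
    """records: iterable of (category, gender, salary) tuples.
    Returns {category: percent by which men's mean pay exceeds women's}."""
    sums = defaultdict(lambda: {"M": [0.0, 0], "F": [0.0, 0]})
    for category, gender, salary in records:
        cell = sums[category][gender]
        cell[0] += salary
        cell[1] += 1
    gaps = {}
    for category, cells in sums.items():
        men_mean = cells["M"][0] / cells["M"][1]
        women_mean = cells["F"][0] / cells["F"][1]
        gaps[category] = 100.0 * (men_mean - women_mean) / women_mean
    return gaps

# Invented example data, purely for illustration.
records = [("engineer", "M", 110_000), ("engineer", "M", 100_000),
           ("engineer", "F", 100_000), ("engineer", "F", 90_000),
           ("analyst", "M", 80_000), ("analyst", "F", 80_000)]
print(pay_gap_by_category(records))  # engineer ~10.5%, analyst 0.0%
```

Real rulemaking would also have to handle categories with few employees of one gender, but the disclosed number itself is just this ratio.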
It might also begin a deeper, more fact-based, discussion about the sources of economic inequality. And such a disclosure would quickly expose the most embarrassing economic fact of all: some men -- but relatively few women -- are shockingly overpaid.
Eventually, of course, this all changes. Service industries like hospitality, health care, education, and banking grow faster than manufacturing. Consumers buy stuff made elsewhere. In the US at least, income disparity has increased and life experience has stratified until many people -- including many with low incomes -- have no understanding of manufacturing or factory work. Factories seem somehow dirty, Dickensian, and something to be avoided.

The result in the US is a schizophrenic attitude towards manufacturing. We are divided between those who see no future for factories and those who believe that manufacturing is vital to our economy. It's Palo Alto vs. Pittsburgh. Sunny Silicon Valley typically sees the future in online technologies, clean tech, or biotech and associates manufacturing with an economic time and place as far gone as the family farm. Pittsburgh's workers, managers, policymakers, and professors argue passionately that the decline of the middle class and the decline of manufacturing employment are inexorably linked, and they urge government action to restore our competitive position. As both a confirmed Silicon Valley technologist and a former machinist, union man, and factory worker, I understand both world views. Perhaps more importantly, I have studied a recent report by my former colleagues at the McKinsey Global Institute that details the role of manufacturing in the US and global economies (click here to download the 170-page report or here for the summary; I highly recommend it and relied on it for most of the data and charts that follow). The punch line: Palo Alto and Pittsburgh both have it wrong, even when their prevailing myths contain elements of past truths. Manufacturing still matters, but for different reasons than either group believes.

Let's start with Silicon Valley. Palo Alto sees America through a prism coated in software and web services, with an economic future built on service and information businesses.
With the quaint and unprofitable exceptions of Tesla and the odd 3D printer, and notwithstanding the atonal musings of Andy Grove, we haven't made anything in Silicon Valley since we drove out disk drives and semiconductors a generation ago. We view manufacturing as a relic of the industrial age, not as an engine of innovation. This belief is held in place by several myths, including:

1. Companies that make things have a lot in common.
Manufacturing is not a sector: companies that make things vary enormously in the nature of their products, operations, and economics.
Some, like steel and aluminum plants, are incredibly energy-intensive and heavy. These manufacturers need to be near water (for transport), raw materials, and cheap power (Alcoa chairman and former Treasury Secretary Paul O'Neill once described aluminum to me as "congealed electricity"). Labor costs are completely secondary.
Pharmaceuticals, in contrast, live or die on product development. They need access to capital, technology, and skilled researchers. A furniture maker needs semi-skilled workers and access to distribution.
It is hardly useful to talk about manufacturing as a single thing -- it really isn't. The McKinsey report segments manufacturers into five groups and describes the requirements and challenges of each, illustrated on the right. The scheme illustrates fundamental differences between manufacturing sectors, although enormous variation remains even within segments.
These groups require vastly different skills and have fared quite differently in advanced countries, with the final group, the so-called labor-intensive tradables, not surprisingly accounting for the biggest share of job losses.

2. Manufacturing is a commodity that contributes little to the US standard of living.
Nope: manufacturing matters, just not like it used to. McKinsey found that throughout the developed world, manufacturing is declining in its share of economic activity but contributes disproportionately to a nation's exports, productivity growth, R&D, and innovation.
As the chart on the right illustrates, manufacturing contributes to productivity growth (the basis for all increases in living standards) at about double the rate that it contributes to employment. It also produces spillover effects that are frequently not captured in data about manufacturing.
Manufacturing adds economic value, much of which is transferred to consumers in the form of lower prices (which are economically indistinguishable from a pay increase). On a value-added basis, manufacturing represents about 16% of global GDP, but accounted for 20% of the growth of global GDP in the first decade of this century.
Finally, manufacturing accounts for 77% of private sector R&D, which drives a huge share of technology innovation. It is far from clear that Silicon Valley would exist without it.

3. Our future is in knowledge-intensive services, not manufacturing.
Once again, our traditional categories are not helpful. Manufacturing frequently is a knowledge-intensive business. (It surprises many people to learn, for example, that there are more dollars of information than dollars of labor in a ton of US-made steel.)
Manufacturing is increasingly data intensive. Big Data is revolutionizing manufacturing products and processes, no less than services. Data enables manufacturers to target products to very specific markets. The "Internet of Things" relies on sensors, social data, and intelligent devices to rapidly inform how products are designed, built, and used. Huge data sets have also enabled new ways for manufacturers to gather customer insights, optimize inventory, price accurately, and manage supply chains.
This is not your father's factory. Most US manufacturing jobs are not even in production. As the accompanying chart shows, they are service jobs linked to manufacturing or inside manufacturing companies.

4. Manufacturing depends on low cost labor, which is why it has fled overseas.
This particular conceit is endemic in Palo Alto. McKinsey documents one possible reason: no sector, not even textiles, has shifted production overseas as fast as computers and electronics. Indeed, as the chart at the right illustrates, some manufacturing sectors have actually added jobs during the past ten years.
There is a second dimension to this myth, however: that manufacturing jobs are factory jobs. As illustrated above, many jobs in manufacturing companies are service-like jobs, including R&D, procurement, distribution, sales and marketing, post-sales service, back-office support, and management. These jobs make up between 30 and 55 percent of manufacturing employment in the US. Much of the work of manufacturing does not involve direct product fabrication, assembly, warehousing, or transportation.
The final misunderstanding is that most factory jobs are unskilled or low paying. In fact, manufacturers worldwide are currently experiencing chronic skill shortages. McKinsey projects a potential shortage of more than 40 million high-skilled workers around the world by 2020 -- especially in China.

In short, the standard Silicon Valley view is much too narrow: manufacturing is and will remain a high value industry that contributes meaningfully to our standard of living. Manufacturing (some of it, anyway) is a competitive asset.

Move east to Pittsburgh, and you will quickly discover that a completely different manufacturing mythology prevails, focused mainly on job creation. In these parts, the loss of manufacturing jobs is understandably considered a crisis for the US. Politicians pay homage to "good-paying manufacturing jobs" and blame the inability of a high school grad to get a factory job that supports a family, a home, and a motorboat on cheatin' Chinese and union-bustin' outsourcers. Dig a bit deeper, and you will discover that these beliefs are also grounded in economic myths, such as:

1. Manufacturing jobs pay more than service sector jobs.
This view often reflects the wishes of people with a history in "rust belt" manufacturing. In fact, manufacturing jobs pay very much like service jobs do -- except at the very low end, where manufacturing creates far fewer of the minimum-wage jobs that are common in hospitality and retail.
Part of the reason for this, of course, is that manufacturers can have low-value work performed overseas -- not an option for McDonald's, Walmart, or others who deliver services face-to-face.
As shown on the right, manufacturing creates about the same number of jobs in each pay band as do service sector jobs, except that there are fewer low-paying jobs and a few more high paying ones. An important caveat is that manufacturing company jobs may be more likely to include benefits, which are excluded from this calculation.
That all said, it is no longer a given that manufacturing is a source of better paying jobs.

2. We should look to manufacturing for the jobs we need.
OK, but at least manufacturing creates decent jobs. Why not promote manufacturing to create jobs -- even if they pay the same as service sector jobs?
The answer depends on your country's stage of development, on domestic demand for manufactured goods, and on how robust your service sector is. For the US, the case for public policies favoring manufacturing is weak.
McKinsey documents what many have observed: manufacturing employment declines once a country reaches about $7,000-10,000 of GDP per person, as illustrated on the right. This pattern holds both across and within countries. As a result, manufacturing jobs are declining everywhere except in the very poorest countries (even China is losing manufacturing jobs).
But not all low cost labor countries enjoy equivalent manufacturing sectors. More important even than stage of development is the level of domestic demand for manufactured goods and the robustness of the domestic service sector. The US and the UK have such large service sectors that we derive a smaller share of our GDP from manufacturing, even though in absolute terms both countries have robust manufacturing sectors.

3. Low wage nations like China are stealing our manufacturing jobs.
There are typically two parts to the belief that US jobs are flowing overseas. First is the underlying view that jobs are a zero-sum asset to be fought over like territory. This idea has political salience, but is economic nonsense. Jobs are the complex result of many things including the availability of public or private capital, legal and regulatory systems, local demand conditions, and managerial competence. Cheap Chinese labor is typically the least of it.
The other idea however, is that we can somehow return to 1950 when unionized manufacturing jobs dominated the US economy. This is no more likely than a return to small family farming (and like those who romanticize what Marx aptly termed "the isolation of rural life", those who idealize factory work often have suspiciously clean fingernails).
As the accompanying chart shows, manufacturing as a share of economic activity is in long term secular decline in all high and middle income countries worldwide -- including China. It is only growing as a share of the economy in very poor countries. As the UN has pointed out, Haiti is in desperate need of sweatshops. Vietnam and Burma are growing manufacturing's share of economic output -- often at China's expense.
Manufacturing matters enormously, just as agriculture does. But it is not growing as a share of economic output. (McKinsey highlights one interesting exception to this rule: Sweden has maintained manufacturing as a share of its economy by targeting high growth sectors and especially by investing twice as much in training as other EU countries. Most importantly, however, Sweden devalued the krona against the euro to make exports competitive -- effectively taxing imports.)

4. Companies build plants overseas in search of cheap labor.
There was a time when labor costs were a determining factor in locating production facilities. This is much less true today, when location decisions are driven by many factors other than labor costs, as the chart on the right illustrates.
Depending on how a company competes, and on whether it is locating research, development, process development, or production facilities, its location criteria may or may not turn on factor costs such as labor. Proximity to consumers or to talent may matter more. In some cases taxes matter. In other cases access to suppliers matters.
The rising cost of commodity inputs and transportation during the past two decades has altered this calculation. Steel, for example, was about 8% iron ore cost and 81% production cost as recently as 1995. Today ore is more than 40% of the cost of a ton of steel and production is only 26%. Steel companies care much more about the cost of ore than the cost of labor.
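Those cost shares change what management worries about. As a quick sketch of the arithmetic (my own illustration, using the shares quoted above), the same 10% rise in ore prices moves the total cost of a ton of steel five times as much today as it did in 1995:

```python
# Approximate cost structure of a ton of steel, from the shares quoted above
# (the remainder is other inputs such as energy and transport).
cost_shares = {
    "1995":  {"iron_ore": 0.08, "production": 0.81},
    "today": {"iron_ore": 0.40, "production": 0.26},
}

def total_cost_impact(year, input_name, price_change):
    """Fractional change in total cost from a price change in one input."""
    return cost_shares[year][input_name] * price_change

for year in ("1995", "today"):
    impact = total_cost_impact(year, "iron_ore", 0.10)
    print(f"{year}: a 10% ore price rise raises steel cost by {impact:.1%}")
```

The sensitivity of total cost to any one input is just that input's cost share, which is why ore now dominates steelmakers' location and sourcing decisions.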
Likewise, transportation costs have skyrocketed with energy prices and infrastructure demands (the US grows highway use by about 3% per year and highways by about 1% per year; anyone living here knows the result). Producers from P&G to Ikea and Emerson are now forced to locate plants near customers to minimize transportation costs. As a strategy for plant location decisions, labor arbitrage looks very 1980s.

5. If consumers would only buy local, we could restore our manufacturing base.
Politicians and union leaders say this all the time and it is sheer idiocy. Most would not be caught dead in my German car, which was designed and made in Tennessee, but beam proudly at the sight of a Buick van imported from China.
High productivity manufacturing benefits consumers, as companies pass on savings to Americans in the form of lower product costs. As illustrated by the chart to the right, most consumer durables cost today about what they did in the 1980s -- and quality is much higher. Economists have estimated that Walmart, Target, and Costco reduce retail prices by 1-3% each year because they pass to consumers savings extracted from manufacturers (this, by the way, is a big reason that manufacturing continues to shrink as a share of our economy. We pay less for our stuff and more for services like education and health care.)
Americans say we believe in "Made in USA" campaigns, but as consumers, we are famously delusional. When surveyed, we profess to favor locally produced merchandise. But our wallets don't lie: we buy high quality, low cost stuff regardless of where it comes from.

So how do we grow US manufacturing? Same as always: by creating innovative materials, processes, and products. McKinsey sees "a robust pipeline of technological innovations that suggest that this trend will continue to fuel productivity and growth in the coming decades". Of course most innovations are hard to foresee. One reliable source of innovation turns out to be anything that reduces weight, such as nanomaterials, some biotech, lightweight steels, aluminum, and carbon fiber. It turns out that although we buy more stuff each year, the total weight of our purchases actually declines, because nearly everything we buy, including cars and airplanes, weighs less than it used to. Manufacturers have come to appreciate the power and the necessity of innovation. During the Clinton administration debates over CAFE standards, car company engineers soberly advised us that the theoretical limit of internal combustion engines was a 10-15% improvement over the then-current average of 17 miles per gallon. Today these companies have already doubled that efficiency and speak openly about doubling it again, even as they invest in non-combustion solutions that are more efficient still. In short, manufacturing matters for different reasons than it used to. It used to be a plentiful source of unskilled jobs; today its value is as a driver of innovation, productivity improvement, and consumer value. It's an exciting part of the economy, even if it cannot solve every problem we face related to job creation and economic growth.

Apple post-Steve is an incremental innovator, not a disruptive one. Most of the changes are the result of a new operating system, not new hardware.
But one feature is blowing me away -- totally changing how I use my phone. The new feature is keyboard dictation, which appears on all iOS 6 keyboards, whether you have the new iPhone or not.

By dictation, I emphatically do not mean Siri. Siri is a dog that performs a few well-chosen show tricks and inspired at least one hysterical advertising spoof. Siri is very useful for directions, reminders, OpenTable reservations, and a good laugh. Siri entertains -- but dictation delights. Dictation has been around for a decade and on iOS since the 4S and third generation iPad, but it was always more trouble than it was worth. Suddenly, dictation not only works, it works shockingly well. For text messages, emails, tweets, and even first drafts of longer documents, it is massively faster to dictate than to type (unfortunately I still need to type blog posts the old way. Maybe that explains the 60 day hiatus...). I have a hard time understanding why Apple is not using its ad dollars to promote dictation rather than Siri -- unless the processing costs are huge and they are losing money on the feature.

What changed? In a word: Nuance, plus a massive investment in cloud infrastructure. Nuance Communications is the public company behind Dragon Dictate, the market leader in desktop speech recognition for at least the past 15 years (the company was founded in 1992 out of SRI as Visioneer, known mainly for early OCR software). Neither Apple nor Nuance talks about it, but it looks to many people like Apple has licensed its dictation software, including Siri's front end interpreter, from Nuance. One sign: before Apple bought Siri, the app carried a "speech recognition by Dragon" label (earlier, Siri had used Vlingo, which apparently did not work as well). Not only that, but Nuance has built several speech recognition apps for the iPhone and iPad that work exactly like the speech recognition built into the iPad and iPhone 5.
This is interesting in part because Apple never licenses critical technology for long. It insists on controlling its core technology from soup to nuts, so many people assume that Apple has considered buying Nuance. The problem is that Nuance holds licenses with many Apple competitors, licenses that would disappear if Apple bought the company. Apple would need to massively overpay for the asset -- something it never does. More likely, Apple will hire talented speech recognition people and build its own proprietary competing product, just as it did with maps when it declared independence from Google. In this case, figure that dictation will regress for a year or two, just as maps have done, because real time, accurate speech recognition makes maps look simple. Plus, Nuance protects its patents aggressively, and these patents are, according to some writers, not easy to avoid. Google, though, is avoiding them nicely; Android speech recognition is also outstanding. How do they do it? The Google way: throw talent at it. Google hired more PhD linguists than any other company, and then it hired Mike Cohen. Cohen is an original co-founder of Nuance, and if anyone can build voice recognition without tripping on the Nuance patents, he can. Apple appears likely to pursue a similar course.

Mobile dictation works by capturing your words, compressing them into a wave file, sending it to a cloud server, processing it using Nuance software, converting it to text, and sending it back to your device, where it appears on your screen. Like all good advanced technology, it passes Arthur C. Clarke's third law: it is indistinguishable from magic. The tricky bit is the software processing, which has to apply a rich set of rules based on context. The software decides on the meaning of each word based not only on the sound pattern, but on the words it heard before and after the word it is deciding upon. This is highly recursive logic and nontrivial to execute in real time.
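To make the context problem concrete, here is a deliberately toy sketch of choosing between homophones by scoring each candidate against its neighboring words. This is not Apple's or Nuance's actual algorithm -- real recognizers use statistical language models over far more context -- and the candidate pairs and scores below are invented for illustration:

```python
# Toy homophone disambiguation: pick the spelling whose pairings with
# the neighboring words score highest. These bigram scores are invented;
# a real system would learn them from enormous text corpora.
BIGRAM_SCORE = {
    ("the", "capital"): 5, ("capital", "to"): 4,    # "the capital to see..."
    ("the", "capitol"): 3, ("capitol", "steps"): 6, # "the Capitol steps"
}

def choose(prev_word, candidates, next_word):
    """Return the candidate with the best combined left+right bigram score."""
    def score(word):
        return (BIGRAM_SCORE.get((prev_word, word), 0) +
                BIGRAM_SCORE.get((word, next_word), 0))
    return max(candidates, key=score)

print(choose("the", ["capital", "capitol"], "to"))     # prints "capital"
print(choose("the", ["capital", "capitol"], "steps"))  # prints "capitol"
```

The same sound resolves to different spellings purely because of the words around it, which is exactly the recursion described above: each decision depends on neighbors that are themselves being decided.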
Try saying "I went to the capital to see the Capitol", "I picked a flower and bought some flour", or "I wore new clothes as I closed the door" and you begin to understand the problem that vexes not only software, but English learners everywhere. Apple dictation handles these ambiguities perfectly -- meaning that it either gets the answer right, or it realizes that there are multiple possible answers, takes a guess, and flags the alternative so that you can correct it with a quick touch.

It takes a little bit of practice to use dictation well. It helps to enunciate like a fifth grade English teacher and to learn how to embed punctuation. The iOS 6 User Guide has a list of available commands. Four are all you need: "comma", "period", "question mark", and "new paragraph" (or "next paragraph"). You can also insert emoticons: "smiley" :-), "frowny" :-(, and "winky" ;-). For anything else, speaking the punctuation usually works: "exclamation point", "all caps", "no caps", "dash", "semicolon", "dollar sign", "copyright sign", "quote", etc.

Overall, the experience of accurate mobile dictation is a magic moment -- like the first time you use a word processor or a spreadsheet (for those who recall typewriters and calculators), or the first browser or email (yeah, we didn't used to have those, either). Give it a try. Apple has done something amazing and for once, actually under-hyped it.

When George McGovern ran for president, I was the age the JamKid is now. He was then, and remained, a remarkable and vastly underappreciated American. He was a decorated war hero who had seen gruesome combat and calmly led a massive crusade against the Vietnam war. He was a professor with a PhD in history who would never have dreamed of calling himself "Dr. McGovern" (unlike, say, Germany, where most politicians are eager to run as Herr Doktor). He was a democratic Democrat, whose commission reset the party rules and stripped the insiders of much of their power.
More than anyone, McGovern closed the smoke-filled rooms (and frankly, made the Party more difficult to govern and more dependent on large donors). He directed the Food for Peace program and believed profoundly in helping the poor and desperate, even in the face of evidence that foreign aid did not promote economic self-sufficiency. He was an early proponent of dietary guidelines and as early as 1973 warned of the growing amount of sugar in the US diet.

I met McGovern a few times and had dinner with him once. He was a modest, self-effacing guy who knew a surprising amount about labor history. I learned that his dissertation was on the 1913 Colorado coal strikes. He also knew a lot about farming, not simply because he was from South Dakota, but because he had a lifelong aversion to hunger after seeing Italians starving during his wartime service. I learned that he was probably the last person ever to speak to Bobby Kennedy -- whose assassination shook him, and me, even more deeply than the loss of JFK.

McGovern was widely reviled. During his run for the presidency in 1972, the New York Post referred to him as "George S. (for surrender) McGovern" in virtually everything it wrote. He was not a great campaigner, although he brought hundreds of people into politics and many of them stayed -- including Bill Clinton. His hastily-considered choice of Missouri Senator Tom Eagleton as his running mate ranks with McCain's choice of Sarah Palin among textbook examples of disastrously poor vetting. Despite an Obama-like grassroots campaign led by campaign manager and future Senator Gary Hart, McGovern lost 49 states to Richard Nixon, the worst landslide in modern US history. Although he later joked that "for many years, I wanted to run for the Presidency in the worst possible way and last year, I did", it had to hurt to lose an election to a man he knew to be deeply dishonest and corrupt.
In later years, the former minister, professor, Congressman, global food program director, Senator, and presidential candidate ran a 150-bed inn in Stratford, Connecticut. After the business went bankrupt, he reflected often and publicly on the role of government regulations and lawsuits in constraining small business. At one point, he surprised conservatives when he wrote in the Wall Street Journal that "I ... wish that during the years I was in public office I had had this firsthand experience about the difficulties business people face every day. That knowledge would have made me a better U.S. senator and a more understanding presidential contender."

Part of why politics in the US works is that people as courageous and talented as George McGovern are drawn to public service. I worry that this is becoming less true. In part due to reforms McGovern championed, parties are weaker, and the path to public office is now less dependent on political parties and more dependent on large financial backers than ever before. We are at risk of drawing more heat than light to the national stage. It will be ironic and unfortunate if the result of George McGovern's wonderful career is that we see fewer like him in the future.

In 1997, I had an idea: if I could aggregate millions of used, rare, and out of print books from around the world on a single website, I could enable people to find and buy books that were otherwise impossible to locate. Like hundreds of others with similar ideas for selling things online, I started an ecommerce company. As Atlantic writer Derek Thompson points out, that was the year that the US enjoyed an odd service sector convergence: 14 million Americans worked in retail, 14 million in health and education, and 14 million in professional & business services. Fifteen years later, the landscape has changed. "Books You Thought You'd Never Find" is a silly idea. Book retailers are dying.
The company I founded has made an impressive effort to transition from retail to services. The employment picture reflects these changes. Health care jobs have grown by almost 50%, professional/business services grew almost 30%, but, as the chart below illustrates, retail grew less than 3%, adding only 26,000 jobs a year. There is mounting evidence that retail employment is about to decline sharply. Fifteen years from now, these may be the good old days for brick and mortar stores.

Retail revolutions are nothing new. Boutiques challenged general stores throughout the nineteenth century. Department stores arose, starting with Wanamaker's in 1896, and challenged boutiques. Starting in the 1920s, car-friendly strip malls challenged main streets. In 1962, Walmart, Target, Kmart, and Kohl's each opened their first store and initiated the era of big box retail. In 1995, Jeff Bezos incorporated Cadabra -- but changed the name to Amazon at the last minute, in part because it started with an "A" and most internet search results were alphabetical.

Today, e-commerce is not just killing some stores -- it is killing almost all stores. Very few brick and mortar retailers are succeeding. Consider the obvious losers in recent years -- none much lamented.
- Department stores are dying. Sears, like Kmart before it, is struggling to survive. Credit default swaps that insure investors against a Sears default on its debt obligations are trading at record highs because markets believe that it won't make it. J.C. Penney is undergoing a major makeover under the leadership of Ron Johnson, who created the Apple Stores. Johnson has changed the look, the targeted demographic, the colors, the brands, the formats, the shopping experience, and the name of the stores (now JCP). The results have impressed nobody, least of all customers.
- Specialty retailers are not exempt from the onslaught. Online retail is relentlessly taking share in many specialty categories, leaving the total dollars available to physical retailers stagnant or declining. This is putting intense pressure on their top lines. According to comScore, online retail rose to a record $35.3 billion during the last holiday season -- a 15 percent increase from the previous year.
- Media retailers are dead or dying. Music stores are a fading memory; Borders is gone; Blockbuster, Barnes & Noble, and your "independent" bookstore are all next. (Barnes & Noble admitted as much when it spun off the Nook reader as a separate entity rather than tie its fate to dying retail stores.) Media retailers are dying because of digitization, of course -- but Amazon offers better prices and more selection even on printed books, CDs, and DVDs.
- Big box "category killers" like Best Buy or Staples are being hammered by online sites that translate dramatically lower debt and operating costs into lower prices. Best Buy posted five straight quarters of profit decline before reporting a $2.6 billion loss on March 29.
- Big box general retailers like Wal-Mart, Target, and Costco are seeing declining sales. Until its third fiscal quarter last year, Wal-Mart had posted eight consecutive quarters of declining sales at stores open more than 12 months. Analysts forecast declining same-store sales and profit for Target this year.
Dear Newly Appointed Berkeley Chancellor:

Congratulations! Even though as of this writing you have not yet been named, you take over the leadership of UC Berkeley at a critical time. At the end of your tenure, the world's premier public university will either have found a sustainable path forward or will have entered a period of long-term decline. Do us a favor -- do not screw this up.

You arrive at a moment when higher education is in wonderful and overdue ferment. Online education is challenging your traditional business model and its unsustainable tuition increases. Badges and other alternative credentials threaten your historic right to certify talent. Most of all, Berkeley, like the other public universities that serve as engines of knowledge creation and social mobility, is under unprecedented financial pressure.

Berkeley, in particular, has a lot at stake. It is, as I noted here, an amazing public institution, despite its bottomless capacity for self-parody (as you know, my wife is a dean at Cal). 48 out of 52 Berkeley doctoral programs rank in the top 10 of their fields nationally -- the highest share of any university in the world. By any measure -- NSF Graduate Research Fellowships (#1), National Academy of Sciences members on the faculty (#2 behind Harvard), members of the National Academy of Engineering (#2 behind MIT), membership in the American Philosophical Society, the American Academy of Arts and Sciences, or winners of the National Medal of Science -- Berkeley excels. It is by a considerable distance earth's finest public university.

And it serves a public mission. Berkeley's single proudest claim, ahead even of its 24 national rugby championships, is that it enrolls more students on Pell Grants than all of the Ivy League schools put together. A Pell Grant is a scholarship based on financial need. By serving academically qualified students on Pell Grants, Berkeley ensures that smart, hard-working kids from low income families have access to a top-flight education.
You may regret the flow of private funds into a public university, but you cannot and should not try to prevent it. In fact, you will devote a great deal of time to encouraging private donations so that Berkeley can remain accessible to middle income students who are not eligible for Pell Grants. This requires building organizational muscles that atrophied when Berkeley, like most public universities, avoided the intellectually distasteful but indispensable work of raising private funds. Berkeley is still building the endowment required to sustain these efforts.

The endowment matters because over time, money buys quality -- just ask Stanford. It is no accident that although many state universities undertake serious research and offer outstanding educations, only three of the "Public Ivies" -- Texas, Michigan, and California -- make the list of America's best-endowed universities. You realize, of course, that the endowment data shown here are misleading in one important respect: the University of California is less a university than a federation of ten highly autonomous campuses, ranging from prestigious broad spectrum research institutions like Berkeley, UCLA, and San Francisco, to campuses with pockets of excellence like San Diego, Irvine, and Davis, to schools like Riverside and Merced that are not easily distinguished from the State University.

These data also illustrate why the Regents hired your boss. Of the great public universities, only the University of Texas took endowment-building seriously, making former UT chancellor Mark Yudof irresistible as the current president of UC. But Berkeley, along with San Francisco and UCLA, has begun to focus on endowment building. It is no surprise that taken together, these three campuses now hold three-quarters of all UC endowment funds.
Faced with the choice of compromising academic excellence, raising tuition to levels that reduce access to higher education for many students, or undertaking a covert privatization to maintain their institutions' finances, all three of these schools have raised tuition and quietly sought private funds. Your job is to continue this course.

Soft privatization is not without its management challenges, however, especially at Berkeley. First, Cal is a state-owned enterprise. The barely functional California government retains full and largely unwelcome control over your budget and governance, even though it contributes less each year to your operating revenue. Second, your boss happily taxes richer campuses like yours to support poorer ones, so to raise a dollar of endowment, you will often have to attract more than a dollar of donations. Third, strong faculty governance provisions, while occasionally improving decision quality, mostly serve to protect the comfortably and correctly tenured and to prevent needed program rationalization.

Your biggest risk is not privatization -- it is paralysis. To get Berkeley fully upright and sailing, you need to mend both a broken income statement (you lose money every year and must stop the bleeding, even if the state makes good on a new round of cuts) and a broken balance sheet (your endowment may be larger than other UC campuses', but it is still pathetic. Look at the data.) Nothing else you do will matter unless you set audacious goals to fix your core economics. I respectfully suggest two:
- Raise the endowment to $10 billion. This is preposterous, but probably achievable over 7-10 years. It requires that you aggressively take the case for Berkeley to the public, to alumni, and to Silicon Valley. It is an incredibly compelling case, but you need to personify it and champion it consistently, loudly, and effectively. The impressive "Campaign for Berkeley" is a great start, committed to adding $3 billion to the endowment by 2013 -- and that will be no time to stop. Committing the campus to a $10 billion endowment would ensure access to undergraduate education for all students, the ability to attract and retain outstanding faculty, and a credible case for increased operating autonomy. (And really -- if Texas can do it, how hard can it be?)
- Move to a three semester academic year. You waste a beautiful, expensive campus by using it fully for only 30 weeks each year. Adding a third semester to the academic year earns you $150 million annually, relieves a great deal of pressure to raise tuition, and dwarfs all other growth or cost saving opportunities, even if you eliminate Summer Session and make no changes to faculty teaching loads. The move would be rapidly copied by other UC campuses and other top flight universities. This is low risk, no-brainer money, because you fully control both the revenue and the costs of the initiative. It would enable you to admit more students, significantly increase the size of your faculty (which won't happen any other way), and offer each professor the option to earn additional income by teaching the third semester or to use it for research, as many do now. You are not dumbing down any degree: students would do just as much work, but those who wish to complete an eight semester undergraduate degree in 2.7 years instead of four could do so. Relative to budget cuts or tuition increases, improving the utilization of your largest and most expensive asset is less painful and less politically controversial than any other economic opportunity you face.
Publishing is not evolving. Publishing is going away. Because the word "publishing" means a cadre of professionals who are taking on the incredible difficulty and complexity and expense of making something public. That's not a job anymore. That's a button. There's a button that says "publish," and when you press it, it's done.

Amazon has demonstrated a much greater ability than Apple to observe, orient, decide, and act to dominate the eBook market. This is the second sign of peak Apple in as many weeks and another indication that Jeff Bezos has taken over from Steve Jobs as the reigning strategist of the technology world. That said, eBooks are not the most important market where these two companies will go head to head. That would be payments, because nobody else has 100 million credit cards on file. Bezos should think very hard about this one. Apple owns a big piece of mobile and has the moves to be on his tail with guns blazing in about 40 seconds.

Lenin famously bragged that "Capitalists will sell us the rope with which we will hang them." It would surely gall him to learn that the art of destroying capitalists with their own products has been mastered not by a militant, vanguard-led proletariat but by entrepreneurial capitalists. It appears that even universities, finally, are getting the hang of it and learning to sow the seeds of their own destruction. As an earlier post detailed, universities rarely go out of business. This is thanks to the magic of a three part lock that secures their position and protects them from institutional challenge. For centuries, universities have enjoyed the exclusive right to allocate valuable social capital.
- Select talent. There is no evidence at all that Stanford, Harvard, or Berkeley do a better job of training undergraduates than Ohio State, Texas A&M, or the University of Florida. But they select far stronger students. If colleges were assigned students randomly, the value of "elite" degrees would plummet overnight. Harvard delivers 90% of its value the day it admits a student, although the market recognizes the value only when the student graduates. In a previous post, I described an experiment I once proposed to compare students admitted to Harvard Business School who attended with those admitted who did not attend. Others have since confirmed what we all know: Berkeley selects strong students, it does not create them. You aren't smart because you went to Berkeley; you went to Berkeley because you were a certain kind of smart.
- Credential talent. College degrees confer professional access and mobility. Since mobility is "path dependent" (your current options are constrained by past decisions, even if past circumstances are no longer relevant), it matters enormously what choices a credential opens up for you. Take it from a factory worker who went to Harvard Business School.
- Signal social standing. Signaling is a cousin of credentialing. A credential is a specific signal to the labor market that a person completed a course of study and mastered a body of knowledge. But it is relevant mainly early in a career. The broader social and economic signal conferred by a university degree extends well beyond the time when the details of the course work are forgotten. An honors degree from the University of Maryland confers standing, especially in Baltimore, that extends well beyond the knowledge gained from a degree in European History. There are very few signals of social standing as powerful as a college degree, even though very little evidence suggests that this should be the case. Powerful alumni affiliations reinforce this effect.
- Granular. Employers care what you can do; they care relatively little about what you study, except as an indicator of what you can probably do. Badges are likely to reflect specific skills ("architecting social media databases" or "PHP"). Some may complement licensure ("palliative care nursing") others may document skills in areas where little certification is available today ("Thai cooking" or "cloud-based SQL database administration").
- Open. To work, badges need an approval process and an ontology that reflects a hierarchy of skills. A licensed vocational nurse may be able to earn a badge in discontinuing intravenous drips, but I'd prefer that the Thai cook obtain his or her LVN certification before tackling this skill. Once these structures and privacy controls are established, the technology for making badges machine readable, searchable, embeddable, and portable is relatively trivial.
- Able to evolve. The structure of badges itself needs to be open. Today "Thai Cooking" may be a sensible badge. Tomorrow it may be "Kitchen safety and peppers" (I worked with a cook who accidentally sent 50 diners choking and gasping out the door, hospitalizing two of them for lack of this knowledge). Badges that are ten years old will frequently fade in value as others rise. Badges create a market in skill certification -- precisely what should replace university degrees.
- Cumulative. A single badge may or may not signal a great deal, but a sash full of badges accumulated over many years of effort makes you an Eagle Scout. Employers are very likely to value particular combinations of badges for specific jobs. Today, resumes or transcripts do a notoriously poor job of communicating these capabilities.
- Essential to reputation markets. Badges form core elements of emerging reputation marketplaces, where professionals collect, curate, and disseminate information that reflects their professional skills and achievements, much as Fair Isaac today distributes information about your credit history. For some positions (VP Marketing for a startup, for example), leadership history may matter more than a documented set of specific skills, but badges will still contribute to the overall picture.
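To see how trivial the machine-readable part is, here is a minimal sketch of a portable badge record. The field names and the fingerprinting scheme are my own illustration, loosely in the spirit of open badge formats, not any actual standard:

```python
import hashlib
import json

def make_badge(issuer, skill, earner, date):
    """Build a self-describing badge record with a tamper-evident fingerprint."""
    record = {"issuer": issuer, "skill": skill, "earner": earner, "date": date}
    # A production system would carry a real cryptographic signature from the
    # issuer; a content hash of the canonical JSON stands in for one here.
    payload = json.dumps(record, sort_keys=True).encode()
    record["fingerprint"] = hashlib.sha256(payload).hexdigest()
    return record

# Hypothetical issuer and earner, for illustration only.
badge = make_badge("Some Culinary School", "Thai cooking", "j.smith", "2012-11-01")
print(sorted(badge))  # issuer, skill, earner, date, plus the fingerprint
```

Because the record is plain JSON plus a verifiable fingerprint, it can be searched, embedded in a profile, or handed to a reputation marketplace without any central gatekeeper -- which is the point of the list above.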
- It has since been downloaded more than 20 million times. It is the #1 app in 79 countries. It has 12 million daily users and generates $100,000 of revenue daily for a small team in New York.
- Users are highly engaged. They drew 3 drawings per second on Feb 12. Two weeks later, they drew over 100 times that: 333 drawings per second. Ten days later, it was up to 3,000 drawings per second. Users bring in other users, who bring in even more users. This is what viral growth now looks like at global, Internet scale -- and stories like this are about to become fairly common.
- The resulting growth rate is unhinged. It took AOL nine years to acquire 1 million users. It took Facebook nine months to earn its first million users. Draw Something did it in nine days.
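For scale, it is worth doing the arithmetic implied by the drawing-rate figures above (my back-of-envelope calculation, not the company's):

```python
def daily_growth(start_rate, end_rate, days):
    """Implied compound daily growth rate between two measurements."""
    return (end_rate / start_rate) ** (1 / days) - 1

# 3 drawings/sec rising to 333/sec over roughly two weeks,
# then to 3,000/sec ten days after that.
print(f"{daily_growth(3, 333, 14):.0%} per day")     # prints "40% per day"
print(f"{daily_growth(333, 3000, 10):.0%} per day")  # prints "25% per day"
```

Sustained compounding of 25-40% per day is what turns nine years (AOL) and nine months (Facebook) into nine days.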