Sunday, December 16, 2012

Some suggestions for improving the gun background check system

The U.S. is still reeling over the mass murders in Newtown, Connecticut. Along with the expressions of horror and disgust over the murder of 20 children, all of whom were ages six or seven, and all of whom were shot multiple times, are calls to do something about the epidemic of gun violence. There have been 61 mass shootings since the Columbine High School massacre, but they're the tip of the iceberg. The U.N. Office on Drugs and Crime reports that there were 9,146 gun-related homicides in the U.S. in 2011, and according to the Centers for Disease Control, there were 18,735 gun-related suicides in the U.S. in 2009.

Gun regulation has become a "third rail" political issue in the U.S., but the Newtown massacre may get President Obama and the U.S. Congress to finally do something about the rate of gun violence. According to The New York Times, the U.S. Justice Department drew up recommendations for improving the mandatory automatic background check system after the Tucson shooting of U.S. Representative Gabrielle Giffords and the murder of Federal Judge John Roll and others. However, the plans were abandoned in the wake of the "Fast and Furious" gun "walking" scandal.

The recommendations, which are only sketched out in the Times article, would likely help keep people who represent risks to themselves or others from getting guns, but they could do much more:
  • Currently, states are encouraged, but not required, to provide information on criminal records and involuntary admissions to mental hospitals to the Federal background check system. Some states are years behind in their record submissions, and others don't submit information to the background check system at all. The Justice Department's proposal offers money to the states in order to get them to participate. I'd go one step further and require that the states participate and keep their records current. Those that don't would be subject to having their Federal funding for state and local police departments, prison systems and homeland security suspended.
  • The Federal system lets states determine how to define who they report as mentally "defective," but I propose requiring states to report both voluntary and involuntary admissions to mental hospitals. Any admission to a mental hospital would result in a lifetime ban on purchasing firearms or ammunition.
  • There are private databases that provide extensive information about individuals' medical histories, which insurance companies use to help decide whether or not to provide medical coverage or issue life insurance. My suggestion is to add these databases to the state and Federal submissions, and to deny firearm and ammo purchases to anyone who has been treated for a variety of mental illnesses, including a ban on purchases for five years from the most recent filling of a prescription for an antidepressant, mood stabilizer or antipsychotic medication. The insurance database could simply respond with a "go" or "no-go" in order to protect the privacy of applicants, as sketched below.
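To make the "go"/"no-go" idea concrete, here's a minimal sketch of the check the insurance database could run. The five-year window and the drug classes come from the proposal above; the function name and data layout are invented for illustration, and no existing system works this way:

```python
from datetime import date, timedelta

# Hypothetical sketch: the database answers with a pass/fail flag only.
# The underlying medical records never leave the database, which is how
# the applicant's privacy is protected.
BAN_WINDOW = timedelta(days=5 * 365)  # five years from the most recent fill
BANNED_CLASSES = {"antidepressant", "mood stabilizer", "antipsychotic"}

def check_applicant(prescriptions, today):
    """prescriptions: iterable of (drug_class, fill_date) pairs."""
    for drug_class, fill_date in prescriptions:
        if drug_class in BANNED_CLASSES and (today - fill_date) < BAN_WINDOW:
            return "no-go"
    return "go"

# A mood stabilizer filled two years ago still blocks the purchase:
print(check_applicant([("mood stabilizer", date(2010, 12, 1))],
                      today=date(2012, 12, 16)))  # -> no-go
```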
I recognize that this wouldn't have prevented the Newtown massacre, because the shooter got his weapons and ammunition from his mother (but, on the other hand, what was an elementary school teacher in a safe town doing with two semiautomatic handguns and an assault rifle? Correction, December 16, 2012: Early reports that said that the shooter's mother was a teacher at Sandy Hook Elementary School are incorrect. She home schooled the shooter for a few years, but was not a teacher by training.) However, it likely would have prevented the Tucson massacre and many others. In addition, it would cut down on suicides by gun, and help to keep guns out of the hands of people with impulse control problems. And, it would keep medical professionals from having to make judgment calls about whether or not to report patients to authorities.

This approach wouldn't affect the kind of guns, ammunition or magazines available to law-abiding citizens, but it would help to keep guns out of the hands of the people most likely to use them to hurt themselves or others--known criminals and people with mental conditions linked to violence. If the argument "guns don't kill people, people kill people" is true, let's get guns out of the hands of the people who are most likely to kill themselves or others.


Wednesday, December 12, 2012

The silly argument over 48 frames

You'd have to be living under the proverbial rock not to know that Peter Jackson's "The Hobbit" will reach theaters this week. Jackson shot the movie at 48 frames per second, twice the usual 24 fps rate used for theatrical movies, using RED video cameras. Jackson says that the 48 fps rate provides "hugely enhanced clarity and smoothness", but many reviewers are saying that the effect "looks much like video." As well it should, because it is video.

For decades, the standard television frame rate in Europe has been 50 fps, to coincide with the continent's 50 Hz AC power frequency. In the U.S., the standard frame rate is 60 fps (actually 59.94 fps), again corresponding with the country's 60 Hz AC power frequency. (Yes, in analog television, those were field rates, and the frame rates were one-half the field rates, but with HD, we commonly use 50 and 59.94 progressive frame rates.) 48 fps is so close to 50 fps that, from a perceptual standpoint, there's no difference. The kinds of visual artifacts that reviewers are complaining about are much the same as any viewer from the U.S. commonly sees when they watch television in Europe--the image seems to "stutter", especially in scenes with lots of motion or with quick pans. After watching for a while, however, the eye gets used to the slower frame rate. That's exactly what's happening with "The Hobbit"--most people, even motion picture professionals, say that it takes about an hour for their eyes to get used to 48 fps. Of course, the problem with adjusting to 48 fps is exacerbated by 3D, which causes headaches and even nausea in some viewers.
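A little arithmetic shows just how small the gap is. This quick sketch prints the frame period at each of the rates discussed above:

```python
# Frame periods, in milliseconds, at the rates discussed above.
for fps in (24, 48, 50, 59.94):
    print(f"{fps:>6} fps -> {1000 / fps:.2f} ms per frame")

# 48 fps (20.83 ms) and 50 fps (20.00 ms) differ by about 4%, far below
# anything the eye can distinguish, while a 24 fps frame lasts more than
# twice as long (41.67 ms).
```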

48 fps is no miracle and no big advancement in cinema technology. For all intents and purposes, it's a video frame rate, shot with a video camera. No one should be surprised that "The Hobbit" looks like video. Jackson himself admits that the high frame rate issue was explored decades ago when Douglas Trumbull and his partners developed a 60 fps, 65mm film format called Showscan, which never took off because of its high costs. Back in the late 1970s and early 1980s, there was an enormous difference in the resolution of film and video, so there was no way that 65mm film shot at 60 fps could be mistaken for video. Today, however, RED and other cameras have 4K resolution--close enough to film that the differences are subjective rather than obvious.

If Jackson had shot "The Hobbit" at 50 fps, many people would have complained that he shot it on video. But, he did shoot it on video, at 48 fps instead of 50 fps. What's the difference?

Monday, December 10, 2012

The Internet video battle is the wrong fight in the wrong venue

Last week, over-the-top Internet video company Aereo faced the major U.S. television networks in a Federal court of appeals in New York. Last July, a Federal court denied the broadcasters an emergency injunction to stop Aereo from offering its service, which enables consumers in New York City to watch, record and replay live broadcast television over the Internet. Aereo assigns a tiny, thumbnail-sized antenna to each active user, specifically to circumvent objections that resulted in court injunctions against ivi and FilmOn, two similar services that preceded Aereo. The appeals court hasn't made its ruling as of this writing, but based on court arguments, it looks like the appellate court will be less sympathetic to Aereo's arguments than was U.S. District Judge Alison Nathan.

In my opinion, both sides are fighting over the wrong issue, in the wrong venue. Aereo, and both ivi and FilmOn before it, took the approaches that they did because broadcasters and cable operators either refused to negotiate with them for rights to their content, or demanded fees that they couldn't possibly pay. There are conflicts in current laws that bring into question whether broadcasters must license their content to cable operators under what's termed a compulsory license. However, as the laws are generally interpreted, broadcasters can either make their content available to cable operators for free (in which case the cable operators must assign the broadcasters a channel), or the broadcasters can ask for compensation for their content, in the form of payment and/or an agreement to carry other content from the broadcasters' parent companies (for example, CBS could require a cable operator to offer Showtime, which it owns, in order to get the right to carry CBS's local television station(s)). Broadcasters can withhold their content from any cable operator that doesn't agree to their terms.

The real issue is whether broadcasters, if not cable networks, should be required to license their content under fair, reasonable and non-discriminatory (FRAND) terms to all distributors. I think that it's time for such a requirement. The rules that define who can be considered a Multichannel Video Programming Distributor (MVPD) were written before the Internet became a viable medium for distributing live video. There's no technical reason why Internet video companies can't compete with cable, satellite and IPTV operators, but very few broadcasters, and even fewer cable networks, are willing to sell them their programming.

Here's an example of a FRAND compulsory licensing scheme that could work: Over-the-top Internet services could license content from broadcasters on a tiered pricing scheme based on each service's number of active subscribers--for example, companies with 1-249,999 subscribers would pay a given per-subscriber fee for each broadcast station, and additional tiers with higher fees would be established at 250,000-499,999, 500,000-749,999 and 750,000-999,999 subscribers. Once a video service reaches one million subscribers, it would be subject to the same rules as cable, satellite and IPTV companies. For their part, cable, satellite and IPTV operators would also be eligible for the same FRAND compulsory licenses, at the same rates, until they too reach the one million subscriber mark. According to the most recent statistics from the National Cable & Telecommunications Association, that would make all but the top 12 MVPD companies in the U.S. eligible for compulsory licenses. Finally, broadcasters could make their programming available to Internet services for free, under the same "must-carry" rules that apply to cable, satellite and IPTV services.
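Here's what the tier lookup might look like in code. The subscriber boundaries come from the scheme above; the per-subscriber fees are placeholder values only, since I'm not proposing specific dollar figures:

```python
# Sketch of the tiered per-subscriber fee lookup. Tier boundaries are
# from the proposal above; the fee amounts are placeholders.
TIERS = [
    (249_999, 0.10),  # 1-249,999 subscribers
    (499_999, 0.15),  # 250,000-499,999
    (749_999, 0.20),  # 500,000-749,999
    (999_999, 0.25),  # 750,000-999,999
]

def monthly_fee_per_station(subscribers):
    """Fee for one broadcast station, or None once the service passes
    one million subscribers and the regular MVPD rules apply."""
    for ceiling, fee_per_subscriber in TIERS:
        if subscribers <= ceiling:
            return subscribers * fee_per_subscriber
    return None  # 1,000,000+: same rules as cable, satellite and IPTV

print(monthly_fee_per_station(300_000))  # 45000.0 at the placeholder rates
```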

This approach would enable innovative Internet video startups to gain a foothold and compete against larger cable, satellite and IPTV companies, and it would allow smaller legacy MVPDs to compete on a level playing field. I'd also propose that cable networks that are owned by MVPDs (such as NBCUniversal, which is owned by Comcast) be required to follow the same FRAND compulsory licensing rules. Other cable networks could choose to make their programming available to smaller MVPDs, including Internet companies, under FRAND licenses.

The courts can't implement a FRAND compulsory licensing scheme; it has to be done by the U.S. Congress, in conjunction with the U.S. Copyright Office. No court ruling in the Aereo case, even if it goes all the way to the U.S. Supreme Court, will fully resolve the case--if Aereo wins, broadcasters will push for legislation, and if the broadcasters win, Aereo and its allies will do the same. It's time to recognize that the public Internet works for live video distribution, that startups should be able to compete with existing cable, satellite and IPTV companies, and that content providers should get fair compensation, no matter how their content is distributed.

Saturday, December 08, 2012

The Zero Option

In business, as in life, we often keep doing things because we've been doing them for a long time. It's habit and past practice converted into a self-fulfilling rationale for why things shouldn't change. "We've always done that." "We can't stop doing that." The result is that we keep doing things that we should have stopped doing years ago. This mindset is probably responsible for the failure of more businesses than anything else, but we can't get rid of the mindset, so we keep repeating the mistake.

I suggest a new approach called "The Zero Option." Take something that you're doing--even if it's still working, even if it's still profitable--and stop doing it. Do something else. Here are a few examples from media companies:
  • How often have you seen an ad on television for a movie that's bad--obviously bad, just from the commercial--but it's still booked into theaters for national distribution? I'm not talking about bad movies that will still make a lot of money, like the "Transformers" series, or a blockbuster bomb like "John Carter" where there's so much money at stake that the studio can't afford to walk away from it. I'm talking about movies with budgets of $50 million or less that tested poorly and might have gotten mediocre or bad reviews from early showings at film festivals. These are often the movies that have their titles plastered at the top of the screen throughout their television commercials, as if seeing the title for a full 30 seconds will make a bad movie more desirable.

    Instead of spending a few tens of millions of additional dollars to promote a movie that's an odds-on turkey, why not kill it? Write it off. Sell it (even at a loss) to another distributor. Get back into the business of dumping stinkers into the direct-to-video market. Don't throw good money after bad.
  • Local television news is a dying business. It's still profitable for most stations, but the audience is dying off, as younger viewers switch to "The Daily Show", watch something on their DVRs at 11 p.m. or get their news from the Internet. The Federal Communications Commission used to require U.S. television stations to show news programs in order to fulfill their obligation to operate "in the public interest." Today, fulfilling the public interest requirement has become little more than a joke; many stations do it with a few hours of children's cartoons on Saturday morning, between the early-morning infomercials and the early-afternoon sporting events.

    Rather than run news programs that have turned into a nightly cavalcade of murders, fires and crying mothers, where most of the audience is only interested in the weather report or sports, why not try something else? At the 11 p.m. hour, instead of producing a news report, how about replacing it with a local nightly talk show, perhaps hosted by a local radio talk show host with a strong following? Bring in a studio audience and talk about some of the local events of the day. Let viewers call in. You can still have weather and sports segments (or the entire show could be focused on sports.) Instead of being an also-ran for the "between 60 and dead" audience for local news, you could build a younger, more valuable audience for a locally-produced show.
  • Radio stations in the U.S. have been adding commercials to their broadcasts for years. It sometimes sounds like there are more commercials than content. That's led to the growth of satellite radio and streaming services like Pandora and Spotify, which have fewer ads or none at all. Broadcasters are wringing their hands about the growth of streaming media in particular; Clear Channel, which owns more radio stations than any other company in the U.S., has launched its own streaming service called iHeartRadio. But, why are consumers shifting away from broadcast radio? The answer is too many commercials. Yet, there's a radio in just about every car sold in the U.S., and it doesn't cost anything to turn it on or to keep using it.

    Instead of trying to push as many ads into each hour as you can, why not cut back on the number of commercials played each hour, and promote that as a differentiating factor? In the short term, your ad revenues will drop, but if your audience expands, you can charge more.
The Zero Option means making a deliberate decision to change something that you're doing, even if it appears to be working. It doesn't mean making a change because you're forced to by new technologies, competitors or governments--those aren't proactive, voluntary changes. It's about actively driving the rate of change in your industry instead of letting others drive it for you. 

Wednesday, November 07, 2012

Obama's victory spells bad news for Apple, Macmillan and Pearson

Historically, Republican administrations have been far less aggressive in prosecuting antitrust cases than Democratic ones. Had Mitt Romney won last night, it's likely that the eBook price-fixing case against Apple, Macmillan and Pearson would have been settled on terms far more favorable to the companies, or would have been dropped altogether. That may have been one reason why the three companies refused to settle with the U.S. Justice Department--they thought that they could get far better terms by waiting a few months for a Romney administration. However, with President Obama's reelection, the Justice Department will continue to pursue its case, and will almost certainly make settlement on the same or similar terms as Hachette, HarperCollins and Simon & Schuster a condition for approval of the Penguin-Random House merger. All of this makes a settlement by Macmillan much more likely. At that point, Apple won't matter, because all of the Big 6 will be prohibited from accepting Apple's terms.

Saturday, November 03, 2012

Penguin Random House: The Aftermath

Earlier this week, Pearson and Bertelsmann confirmed that they intend to merge Penguin and all of Random House except for its German-language business into a new joint venture, to be named Penguin Random House. (No Random Penguin or Penguin House for us.) Shortly before the deal was announced, word leaked out that News Corp. was considering making an offer to acquire Penguin, but the terms of the Pearson-Bertelsmann deal mean that Pearson can't consider any other offer.

I believe that, three to five years from now, the publishing industry will look much like the recording industry does today, with the Big 6 becoming the Big 3. In fact, it was Bertelsmann's experience in its joint venture with Sony Music that's said by some to be the reason that the company insisted on having a majority interest in its joint venture with Pearson. Its joint venture with Sony was 50:50, and differences in objectives and strategies between the two companies eventually led Bertelsmann to sell its recorded music business to Sony.

If the publishing industry looks like the recording business in a few years, here's a preview of the likely winners and losers:
  • Publisher employees: The biggest reason for publisher consolidation is cost reduction. Penguin and Random House, and other consolidating publishers after them, will get rid of redundant distribution facilities and most of the people who work in them. In addition, they'll consolidate cross-imprint functions, such as sales, marketing, copy editing, production and design. That will put a lot of talented professionals on the street, and with fewer big publishers, there will be fewer places for them to look for work.
  • Authors: Despite what Penguin and Random House have said, it's inevitable that they, and other consolidating publishers, will reorganize their imprints. Some imprints will be discontinued, and their authors will be moved to other imprints or dropped. The same thing will happen to the editors at the imprints--many will be laid off.

    Author acquisition will be dramatically affected. The Big 3 recording companies have all but discontinued their formal A&R (Artists & Repertoire) operations that sent people into the boondocks in order to find new artists. Their equivalents in publishing are acquisitions editors, and many of them will find themselves without jobs. The big publishers will increasingly focus on successful self-publishers as their "farm teams", and will pay big money to poach bestselling authors from each other. For their part, bestselling authors will have less loyalty to publishers, because many of their editors will be gone.
  • Retailers: Some industry pundits have speculated that consolidation of the top publishers would give them more clout with retailers such as Amazon and Barnes & Noble. If the recording business is any indicator, they're wrong. Just as with books, the music retailing business consolidated, and highly influential retailers such as Tower Records, Musicland, Wherehouse and Virgin Music are gone (Virgin has closed its U.S. stores but still operates in other countries.) Music retailing in the U.S. is dominated by Apple, and the consolidation of the Big 6 recording companies into the Big 3 has given the surviving record companies little or no additional leverage with Apple or Walmart.

    It's unlikely that mergers between the Big 6 publishers will give them any more negotiating power with Amazon, Apple, Barnes & Noble or Kobo. The publishers will continue to depend on the retailers for the vast majority of their revenue, and the U.S. Justice Department will be watching over their shoulders in order to prevent more shenanigans like organized price-fixing.
  • Independent Publishers: Independents will actually be helped by publisher consolidation, for several reasons. First, many talented publishing professionals who ordinarily wouldn't have considered working for smaller publishers, or working as freelancers, will become available to independents. Second, some of those professionals will set up their own independent publishing companies. Third, the authors that are shed from the rosters of the consolidating publishers will become available to the independents. Fourth, authors who might have been discovered and developed by the top publishers will instead go to independents. Fifth, with fewer titles coming from the big publishers, retailers will have more shelf space (real or virtual) to devote to independents.
  • Self-Publishers: The big publishers will increasingly recruit successful self-publishers to fill their rosters and compensate for the loss of acquisitions editors. The success of the 50 Shades trilogy has eliminated any remaining stigma from self-publishing authors. Big publishers now know that success as a self-publisher is a very strong indicator of marketability--and it eliminates the cost of spending years to develop a promising author.
  • Agents: Consolidation of the Big 6 will spell problems for literary agents. They'll have fewer authors on the rosters of the top publishers, and thus, fewer opportunities to earn commissions from big advances and royalty payments. They'll have to devote more of their time to independent publishers, which generally pay lower advances and generate lower royalties for their clients. And, they'll have to compete with other agents to represent successful self-publishers, meaning that they'll have to accept lower commissions.
  • Consultants: Publishing consultants who have spent their entire careers in the publishing industry are going to find it hard to adjust to publisher consolidation. Consultants with contracts with two publishers that consolidate into one will have one of their two contracts cancelled, and the surviving contract will be closely scrutinized. (I saw this happen first-hand as the IPTV industry went through massive consolidation starting in 2008.) Publishing consultants will have to shift their focus to independent publishers, which have much smaller budgets than the Big 6.
In short, independent publishers are about the only group that will be a clear winner from publisher consolidation, followed by successful self-publishers. Everyone else will end up either neutral or a loser as a result of consolidation. 

Friday, November 02, 2012

If you're upgrading to Windows 8, read this

I bit the bullet last weekend and retired my six-year-old notebook PC running Windows XP. To replace it, I switched to an under-used Samsung notebook that I'd installed the consumer preview version of Windows 8 on earlier in the year. Unfortunately, the only option I had was to do a clean install of the released version of Windows 8 Pro. I reinstalled the software I need and transferred my data files from my XP system, but I found that writing emails and documents on the new computer was an exercise in frustration. My cursor bounced around the screen, randomly jumping back into the text, and highlighting blocks of text that were deleted as I typed.

I did some research online and learned that the problem wasn't uncommon, and that it dated back to Windows 7. When I did the clean install of Windows 8, my vendor-specific drivers were replaced with generic Windows drivers, and my trackpad driver was replaced with Microsoft's old PS/2 mouse driver. I downloaded and installed the latest drivers for my PC from Samsung's website, and all of the problems I had were fixed. So, after you upgrade to Windows 8, especially if you did a clean install, go to your PC vendor's website and install the current drivers. Signed Windows 7 drivers seem to work fine under Windows 8. Don't assume that Microsoft will install the latest, or even the correct, drivers for you.
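If you want to verify which drivers Windows actually loaded, the built-in driverquery tool will show you. Here's a quick sketch that wraps it in Python and looks for i8042prt, Windows' legacy PS/2 port driver, which is what my system had fallen back to (the column names are driverquery's CSV headers as I understand them):

```python
import csv, io, subprocess

# Run Windows' built-in driverquery tool and parse its CSV output.
out = subprocess.run(["driverquery", "/fo", "csv"],
                     capture_output=True, text=True, check=True).stdout

for row in csv.DictReader(io.StringIO(out)):
    module = row.get("Module Name", "")
    # i8042prt is the old PS/2 keyboard/mouse port driver.
    if "i8042" in module.lower() or "mou" in module.lower():
        print(module, "-", row.get("Display Name", ""))
```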

Friday, October 26, 2012

Part 5: Everyone is becoming a “hyphenate”


The silos between creative professionals are breaking down, just as the silos between types of media are breaking down. In established media, everyone has their own, well-defined role: Writers, editors, designers, artists, musicians, composers, singers, producers, directors, actors, etc. Each role is further defined by media, so, for example, there are writers for books, plays, movies and television. Historically, there have been “hyphenates”—people who perform multiple roles, such as the singer/songwriter or the writer/director, but they’ve been fairly rare.

Today, hyphenates are quickly becoming the rule rather than the exception, at least for Internet-based media. It’s not uncommon to find one person performing multiple roles. It saves time and money, and gives them more creative control. One person can be a composer, musician, producer and audio engineer. Another can be a screenwriter, director, producer and actor. Yet another can be an author, editor and book designer.

At the same time that creators are doing more, publishers should be doing less. The typical model for book publishers, especially those doing non-fiction, is to find writers, assign them subjects, provide editorial direction, do copy editing and fact checking, design the books' covers and layouts, do the typesetting, put together marketing plans, and sell the books to retailers and distributors. Most publishers don't start farming out work until it's time to actually print and bind books, or convert book files into retailers' eBook formats.

The job of publishers in the future is going to be facilitating, not performing, the work of creators. Publishers will become a member of the creative team instead of the driving force—part angel investor, part project manager and part marketer. The publisher’s underlying goal will continue to be to make money, because that’s how profits can be plowed back into underwriting more creation. However, they’ll do that by supporting their creators, not making creative decisions for them.

And that brings us to the end of this series. Here's a summary:
  1. The role of publishers is being transformed by the Internet, mobile devices and wireless broadband.
  2. Publishers are in the business of entertainment, information or education, not creating and selling print books and eBooks.
  3. Being successful as a 21st Century publisher requires going “all in” on all types of media.
  4. As a practical matter, there are no more financial or technical barriers to entry.
  5. Everyone is performing tasks that used to be done by multiple creators, and publishers are becoming facilitators and supporters of creative teams.

Part 4: The end of barriers to entry and financial limitations


Publishers can afford to be in all kinds of media because the capital investment required has dropped dramatically. Almost everything that a publisher needs to do in order to create and distribute content can be done with a personal computer. Adobe will rent you Creative Suite 6, with all the software needed to create almost any kind of media, including books, audio, video and apps, for $50/month per person. You no longer need to invest millions of dollars in networks and distribution equipment in order to do live worldwide broadcasting—all you need is an Internet connection and an account with YouTube, Livestream or Ustream. With the right people, a $1,000 DSLR can create video comparable to a $65,000 camera, and a $2,000 camcorder has the features and quality necessary for broadcast television. Every significant device used for creating media—PCs, tablets, smartphones, cameras, camcorders, audio recorders and much more—includes (or is based on) a microprocessor that’s subject to Moore’s Law. Costs go down and capabilities go up like clockwork.

It’s widely believed that the reason that media became “mass media” was due to its dependence on advertising—advertisers wanted to reach the widest possible audiences. That’s partially true, but the original reason was that the capital investment needed for newspapers, magazines, radio, television and motion pictures was so high that media companies had to pursue big audiences. Today, however, there’s no reason to pursue mass audiences—capital requirements are low, technology is readily available at the consumer level, and infrastructure that media companies once had to own themselves is now available in the cloud, on contract or on a project basis.

There are no more technical or economic barriers to entry. The biggest remaining barrier is the attitudes of the people making decisions at publishers, and those of the people advising them. If you believe that your business is making and selling bound stacks of printed paper, and you’ve believed that for your entire career, it’s almost impossible to accept that you’re really in the entertainment, or information, or education business. It’s very easy to confuse the tools with the products that the tools create, but it’s the products, not the tools, that matter.

Patton Oswalt delivered a brilliant keynote at the 2012 Montreal Just for Laughs Festival in the form of two letters, one to his fellow comedians, and one to the “gatekeepers in broadcast and cable executive offices, focus groups, record labels, development departments, agencies and management companies.” Here’s a quote from his letter to the “gatekeepers”:

“In my hand right now I’m holding more filmmaking technology than Orson Welles had when he filmed Citizen Kane. I’m holding almost the same amount of cinematography, post-editing, sound editing, and broadcast capabilities as you have at your TV network. In a couple of years it’s going to be equal. I see what’s coming. This isn’t a threat, this is an offer. We like to create. We’re the ones who love to make stuff all the time. You’re the ones who like to discover it, patronize it, support it, nurture it and broadcast it. Just get out of our way when we do it.”

Oswalt is being overly generous when he describes what the “gatekeepers” like. The only thing they truly “like” is to make money. Everything else they do is necessary in order to obtain the talent and content that they need in order to make money. To them, the content that they create is a “unit of production,” just as cars are to Ford or boxes of cereal are to Kellogg’s. “Patronizing, supporting and nurturing” are nothing more than product development—the process necessary to get any creative idea from an outline on paper to a finished work that can be sold in order to make money.

Wednesday, October 24, 2012

Part 3: No more silos


In Part 2 of this series, I proposed a new definition for publishers. Nothing within the definition of the publisher's role requires, or even presupposes, printed books or eBooks. It can include websites, web apps, native apps, databases, videos and podcasts—as well as print and eBooks. However, today's publishers are missing a lot of the experience and skill sets that are necessary to create this kind of content—and to get it, some publishers are engaging in marriages of convenience. For example, Random House recently launched an operation called Random House TV—but rather than partnering with a producer with extensive dramatic television experience, it partnered with Fremantle Media, a company owned by its parent, Bertelsmann, that’s best known for reality and game shows.

Being successful as a 21st Century publisher requires going “all in” on all types of media—nothing can be “out of your wheelhouse.” The silos used to be easy to define: Your newspaper was delivered to your house each day by a paperboy on a bicycle. The magazines to which you subscribed arrived in your mailbox. You listened to the radio using one box and watched television using another. You bought books at the local bookstore or borrowed them from the local library. Today, everything arrives the same way (over a high-speed Internet connection or wireless broadband) to the same box (your tablet, smartphone or PC) wherever you happen to be located.

Just because you can’t limit yourself to any one silo anymore, it doesn’t mean that you have to have all of the necessary expertise in-house—in fact, there’s never been a better time to use outside talent. However, as with the Random House example above, it's not enough to work with people who have generic experience with a medium. Instead, it’s critical to partner with the right people, with the right varieties of experience and talent.

Tuesday, October 23, 2012

Is Apple becoming a victim of its "reality distortion field?"

Earlier today, Apple announced a new 13" MacBook Pro, refreshed its Mac mini and iMac product lines, and introduced not one, but two new iPads: The 4th Generation 10" iPad, and the long-rumored iPad mini. The biggest news came from the iPads, and the financial markets are already reacting negatively.

The iPad mini has a 7.9" display and an A5 processor, but other than that, it's essentially an iPad 2 in a smaller package. Apple took pains to compare the iPad mini to Google's Nexus 7; Apple said that its tablet's display has 35% more area, and that it offers a "tablet experience" while the Nexus 7 offers a "scaled-up phone experience." Apple left a few things out of its comparison, however: The Nexus 7's display has higher resolution than the iPad mini, 1280 x 800 vs. 1024 x 768, which should be noticeable on the iPad mini's larger display. The Nexus 7's quad-core Tegra 3 processor should be faster than the iPad mini's Apple A5, and most importantly, the 16GB Nexus 7 sells for $199 (U.S.), while the 16GB iPad mini is priced at $329.

The Nexus 7 isn't Apple's only problem: Amazon's 16GB Kindle Fire HD also sells for $199, while Barnes & Noble's 16GB Nook HD is $229. In fact, Amazon's 32GB model is only $249 vs. $429 for the 32GB iPad mini, and the 32GB Nexus 7 is expected to be announced by Google next week for $249. Price isn't everything, of course, but it's very important, especially for 7" tablets. Apple's pricing raises an interesting question: Do people buy 7" tablets because they're seven inches, or because they're cheap? If they buy because they're seven inches, then Apple's in great shape, but if they buy because they're cheap, the iPad mini could spell trouble for the company in two ways.

The first issue is that price-conscious customers need to justify spending an extra $130, a 65% premium, for a tablet with the Apple name on it. I'm far from convinced that the iPad mini is worth the premium that Apple is asking, and the financial community, which expected the 16GB model to be priced at $299, isn't convinced, either. Apple could justify a $100 premium and keep the iPad mini under the psychological $300 threshold with a $299 price. An additional $30 doesn't make a huge difference, but pricing the entry-level model over $300 does. The second issue is that a lot of consumers will look at the iPad mini and realize that it's all they need, for $170 less than a comparable 4th Gen iPad. That will result in the iPad mini cannibalizing sales of the more expensive and more profitable 10" iPad.

And, what about that new 4th Generation iPad? It replaces the 3rd Generation model, which had only been on the market for six months. It has a faster A6X processor, which Apple claims has twice the CPU and graphics speed of the A5 processor in the 3rd Generation model, and it also has a Lightning connector. Other than those two changes, the 3rd and 4th Generation models are essentially identical, which is leading some pundits to call the new model the "iPad 3S." The 4th Generation model, although not a complete surprise, will throw some uncertainty into consumers' future iPad purchase decisions. There's fairly reliably been a year between new iPad and iPhone models, but now we've gotten a new iPad just six months after the release of the previous model. Does this mean that Apple is speeding up its product replacement cycle, or is this a one-time event to get the Lightning connector into wider use?

I suspect that Apple may be believing some of its own hype--it can demand a premium price for the iPad mini just because it's an iPad. However, the 7" segment is by far the most competitive segment in the tablet market, and Apple is a late entrant. Apple will sell lots of iPad minis, but I doubt that they'll sell as many as they or analysts expect.

Part 2: What business are you in?


In 1960, Harvard Business School professor Theodore Levitt wrote a landmark article for the Harvard Business Review titled "Marketing Myopia." Levitt focused on industries that were struggling at the time--among those that he used as examples were railroads and motion pictures. Levitt wrote that in both these cases, management forgot what businesses they were really in. Railroads thought they were in the railroad business when they were actually in the transportation business. As a result, they allowed competitors (trucking firms, package delivery services and air cargo companies) to take away huge portions of their revenue and leave them with the niche of slowly delivering huge quantities of materials.

Movie studios thought they were in the movie business, not the entertainment business. As a result, their management first dismissed television, then denied its impact, and then refused to make their movies available for broadcasting. It was only after most of the movie studios came close to or entered bankruptcy that they realized that they had to do business with television networks and stations if they hoped to survive.

When you're in a business for several decades, it becomes natural to think "inside the box." The barriers to entry (cost, technology, experience, customer habits, etc.) are simply too great for new entrants to overcome. Rather than redefining your business, you focus on doing what you already do less expensively. Customers have purchased your goods or services for as long as you've been in business, so whatever you're doing is working, and you should keep doing the same things. That mindset makes legacy industries vulnerable to disruptive innovators: Trucks replaced railroads, and television replaced going to the movies.

The lessons from fifty years ago are still being learned today, and nowhere more than in the book industry. Book publishers aren’t in the business that most of them think they’re in. If you talk to publishers, or for that matter, booksellers, most of them will tell you that they’re in the business of selling stacks of nicely bound paper printed with well written and edited text, and their digital simulacra, eBooks. In reality, they’re in one of three businesses: Entertainment, information or education.

Once the focus changes from manufacturing and selling books to performing a job for your customers, the definition of what a publisher does changes radically:

  1. Deliver entertainment, information and education
  2. Quickly and cheaply
  3. To PCs and mobile devices as well as to legacy media
  4. Via the Internet and wireless broadband connections, as well as legacy channels of distribution


Monday, October 22, 2012

Part 1: Birth, Death and Transformation


We’re in the midst of a massive shift in media consumption patterns. People consume more news than they ever did, but they don’t read newspapers anymore. Magazines, even on tablets, are slowly dying. And, as for books, The New Yorker published an article titled “Twilight of the Books”…on December 24, 2007, before eBooks were even a significant part of the business. Statistics in the article show that the market for books has been declining for at least 30 years. U.S. movie theater ticket sales peaked in the 1950s; the only things that have kept the industry going have been home video sales and higher ticket prices. But, home video sales are also dropping—they’re being replaced by rentals from Redbox, and online streaming from Netflix, Amazon and others.

Let’s be clear: Movie attendance has been declining for half a century, but no one seriously expects the movie business to disappear. The same is true for books; readership will continue to decline, but it’s hard to visualize a world without books, even if most of the remaining books are digital instead of paper. Nevertheless, the balance has shifted. Consumers want their media faster and cheaper. Readers want their news from the Internet, as it happens (if not sooner, leaked out via Twitter.) One can argue that attention spans have gotten shorter—look at the popularity of viral videos on YouTube—but videogames, both casual and complex, can engross players for hours or even days.

The transformation of media in the 21st Century is being driven by three forces: The Internet, mobile devices and wireless broadband. The Internet provides a conduit for every kind of content. There’s no need to ever leave your house to purchase any kind of media, and it makes possible entirely new types and combinations of media that didn’t exist prior to the rise of the World Wide Web. Mobile devices and wireless broadband make that content available anywhere, anytime, and open the digital world to hundreds of millions of people who can’t afford personal computers or high-speed Internet connections.


Sunday, October 21, 2012

Publishing--From Evolution to Revolution

Over the next five days, I'm going to be writing about how publishing is changing--and how it's likely to change much more in the next few years. Here's a rundown of the sections:
  1. Monday: Birth, Death and Transformation
  2. Tuesday: What Business Are You In?
  3. Wednesday: No More Silos
  4. Thursday: The End of Barriers to Entry and Financial Limitations
  5. Friday: Everyone is Becoming a “Hyphenate”

Tuesday, October 16, 2012

If you buy a Microsoft Surface, it probably won't be because of the price

Earlier today, Microsoft announced the prices for its Surface for Windows RT tablets. The entry-level Surface for Windows RT tablet comes with 32GB of storage and sells for $499 (U.S.); the same model with a black Touch Cover keyboard sells for $599. The 64GB model bundled with a black Touch Cover sells for $699. If your tastes run to a more colorful Touch Cover, those are available separately for $119; if you prefer a more conventional keyboard design, the Type Cover is also available in black only, for $129.

Several months ago, there were some rumors that Microsoft would try to underprice Apple with an entry-level Surface tablet priced as low as $199; those rumors were disproved today. Microsoft has taken pains to point out that it's pricing its 32GB model where Apple prices the 16GB third-generation iPad, and the 64GB bundle is priced the same as Apple's 32GB model without a keyboard. Microsoft's prices are very competitive, but Windows RT will only have a tiny fraction of the apps available for iOS or Android when it and the Surface tablets are released next week.

Microsoft seems to be at a loss to describe exactly what the Surface is--according to Windows business unit president Steve Sinofsky, it's neither a tablet nor a notebook computer. Microsoft's new television ads don't help--they show people dancing around with Surface tablets as they connect and disconnect keyboards, but they don't actually show anyone doing anything useful with the devices. That's the trap that tablets like the Motorola Xoom and BlackBerry Playbook fell into--Motorola and RIM showed their tablets playing videos and games, but not doing anything useful.

There's not going to be a lot that consumers will be able to do with Surface tablets when they first ship, at least in comparison to iPads and Android tablets. It will take time for developers to build up a competitive catalog of apps, and developers won't bother until they see Windows RT gaining market momentum. By themselves, Microsoft's prices will do little to stimulate sales.

In addition, I believe that we're going into the Christmas of 7" tablets: Apple said that it plans to make an announcement, most likely of a 7" iPad (among other products), on October 23rd. The focus this holiday season will be on tablets selling for $199 to $299, not $500 or up. Consumers will be comparing the small iPad to the big iPad, or the small iPad to Amazon's Kindle Fire HD, Barnes & Noble's Nook HD and Google's Nexus 7. They're unlikely to be comparing anything to the Surface for Windows RT.

Microsoft may believe that Android, on tablets at least, is highly vulnerable to being displaced. Under this scenario, Microsoft's goal would be to make Windows RT the credible alternative to iOS, and then wait for Apple to make a serious mistake, just as the Xbox 360 capitalized on Sony's mistakes with the PlayStation 3 to become the video game console market leader. (It's the "I don't have to outrun the bear, I only have to outrun you" idea.) If that's Microsoft's approach, all they have to do is beat Android, not take away a significant number of iPad sales. If Microsoft fails, however, people will be comparing the Surface not to the Xbox, but rather, to the Zune.

Sunday, September 30, 2012

What determines whether a Kickstarter project will succeed or fail?

A paper published in July (PDF link) by Wharton professor Ethan Mollick tried to identify the elements that determine whether or not a Kickstarter project will be successfully funded. Professor Mollick and his assistant Jeanne Pi compiled information on 24,503 projects that were fully funded, 26,483 that failed, 4,073 that were still raising funds at the time that the research took place, and just over 100 cancelled projects. After removing projects with very small (under $100) and very large (over $1 million) goals, and foreign-based projects, the researchers were left with 46,902 projects representing $198 million in pledges.

Here's a summary of the team's findings (note that I use the term "average" instead of "mean"):
  • 47.9% of the projects studied were fully funded.
  • Projects tend to either fail by a large amount or succeed by a small amount:
    • 87% of the projects that failed raised less than 30% of their goal, another 10% stalled between 30% and 50%, and only 3% got past the halfway mark.
    • 25% of projects that did get funded exceeded their goal by 3% or less, and 50% by about 10%. Only about 11% reached double their goal, and just 4% went beyond double.
  • The average level of funding for all projects was 10.3% of the goal.
  • The average amount raised by an unsuccessful project was $900, and the average raised by successful projects was $7,825.
  • No very small projects (goals of $100 or less) or very large projects (goals of $1 million or more) were funded.
  • The maximum project duration has been shortened by Kickstarter from 90 to 60 days, but 30-day projects were a bit more likely to be fully funded than 60-day projects (35% vs. 29%).
  • Perceived project quality, defined by the researchers as having a video, was very important in determining whether or not a project reached its goal. Projects with a video had a 37% chance of success, while those without videos had a 15% chance of success.
  • Featured projects were far more likely to be successful than those that weren't featured (89% success for featured projects vs. 30% for unfeatured projects).
  • There’s a direct correlation between the number of Facebook friends that the project founder has and the chance of success: A founder with 10 Facebook friends had a 9% chance of success; 100 friends gave a 20% chance of success, and 1,000 friends gave a 40% chance of success. (A rough interpolation of these numbers appears in the sketch after this list.)
  • Projects based in cities with a high percentage of workers in creative professions had a greater chance of success than those in cities with a low percentage of creative professionals.
  • Only 5% of fully funded projects failed to deliver their intended goods or services, but there were usually substantial delays in delivery. Of the projects that the researchers measured that delivered products, the average delay was 1.28 months.
  • Only 24.9% of the projects delivered on time, and 33% had yet to deliver as of the end of the study.
  • The more complex the project, the greater the delay in delivery. The more that the project exceeded its goal, the greater the delay in delivery.
  • Project category has a direct effect on project success: Based on four different models, video projects have the greatest probability of success, followed by dance and then theater. Design, film & video, music, comics and food follow in a fairly tight cluster. Publishing projects have the lowest probability of success.
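One pattern worth calling out: the Facebook-friend figures rise almost linearly with the logarithm of the friend count. The sketch below simply interpolates between the three published data points; it's an illustration of the reported trend, not the model the researchers actually fit:

```python
import math

# Published points: 10 friends -> 9%, 100 -> 20%, 1,000 -> 40% success.
POINTS = [(1.0, 9.0), (2.0, 20.0), (3.0, 40.0)]  # (log10(friends), % success)

def estimated_success(friends):
    """Linear interpolation in log10(friends); None outside the studied range."""
    x = math.log10(friends)
    for (x0, y0), (x1, y1) in zip(POINTS, POINTS[1:]):
        if x0 <= x <= x1:
            return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
    return None

print(f"{estimated_success(300):.0f}% estimated chance with 300 friends")  # ~30%
```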
Based on the researchers' findings, there are some practical suggestions for people considering Kickstarter campaigns:
  • Don't go for a very small goal, hoping that it will make it easier to get funded. The average amount that funded projects raised was almost $8,000.
  • Go for a 30- to 45-day project duration rather than 60 days.
  • The more Facebook friends (and, by extension, other social media contacts) that you can promote your project to, the better.
  • Projects that were featured by Kickstarter had by far the best chance of success--89% of featured projects were funded, vs. 30% that weren't featured. The Wharton team didn't look at which attributes make it more likely that a project will be featured.
  • If your project doesn't get featured, the more publicity that you can get from sources outside Kickstarter, the better.
  • Having a video to promote your project will more than double its chances of getting funded. However, the study didn't look at whether the quality of the video has an impact on funding success.



Thursday, September 20, 2012

Amazon & Apple: Is their proxy war getting hotter?

Earlier today, according to Reuters, Walmart notified its store managers that the company would no longer carry Amazon's Kindle eReaders and tablets once its existing inventory and committed purchases run out. Walmart confirmed its decision with Reuters, but didn't specify the reason(s). In May, Target announced that it would no longer carry Kindles, and like Walmart, it never made an official public statement about the reason. However, CNN noted that Target had just been authorized by Apple to begin selling iPads, and that it planned to add Apple "mini-stores" within 25 of its locations.

There are two reasons being cited by observers as to why Walmart might have decided to drop Kindles:
  1. Amazon may not have offered Walmart a sufficient discount, or
  2. Walmart may see Amazon as an increasingly large competitor for general merchandise sales, and doesn't want to support a competitor any longer.
Both of these reasons make sense, and either one of them may be true, but let's sideline that discussion for a bit.

At Publishers Lunch Deluxe, Michael Cader reported on Apple's efforts to get evidence from Amazon for its defense in the government's eBook price-fixing case. According to Cader, Apple has been trying to compel the Justice Department to turn over the transcripts of interviews with 14 Amazon managers and executives. Those interviews weren't taken under oath. The Justice Department argued that the interviews are protected work product, and aren't subject to release. However, Justice has given Apple the names of everyone at Amazon who was interviewed, and said that Apple could take depositions directly from those people. In addition, the Justice Department has released all of its email communications with Amazon and all of the documents and data it received from Amazon during its investigation.

Apparently, Apple took the Justice Department up on its suggestion, and filed subpoenas to force the 14 Amazon employees to give depositions. Then, last Friday, September 14th, Amazon filed a motion in Seattle Federal court to quash the subpoenas, on the grounds that Amazon isn't a party to the litigation. This week, Apple filed a motion with Judge Denise Cote, who's in charge of all of the U.S. cases related to eBook price-fixing, to move Amazon's motion from Seattle to her court. Judge Cote is now considering Apple's motion.

I don't know that much about the law regarding who can and can't be compelled to provide depositions and discovery documents. What I do know is that if Apple does eventually get the right to enforce its subpoenas, Amazon is going to want to put strict limits in place to prevent any confidential information that's not directly related to the price-fixing case from being revealed to Apple.

That brings me back to the title of this post, and to my first point. Clearly, Apple and Amazon are competing in more areas, and in the U.S., Amazon is currently the only serious competitor to Apple in tablets, based on sales. Apple is widely rumored to be planning to announce a smaller iPad next month. Target dropped Amazon shortly after signing a deal to carry Apple's iPads, and now, a month before the smaller iPad's expected release, Walmart has also dropped Amazon's Kindles. Does that mean that Apple might have made Walmart's getting the small iPad conditional on dropping Amazon? It's certainly possible, but rather than getting into legally murky waters, Apple could have required Walmart to give its products a certain amount and type of display space--a very common condition in retail distribution deals. Walmart would have to get that space from somewhere, and "independently decided" (wink, wink, nudge, nudge) to take it from Amazon. Another perfectly legal option would be if Apple offered Walmart co-op funds if it did certain things (for example, PC manufacturers get reimbursed for part of their advertising costs by Intel if they include that four-note musical theme at the end of their commercials.) These payments amount to a discount--and if Target is already getting them, Walmart would be at a competitive disadvantage if it didn't get them as well.

All of this adds up to "shadows on the wall" suggesting a proxy war between Apple and Amazon:
  • Amazon is using the Justice Department as a proxy against Apple to get agency terms and Most Favored Nation clauses terminated, and
  • Apple is using Target and Walmart as proxies to hinder Amazon's ability to sell Kindles in stores.
If this "proxy war" model is correct, I'd expect Best Buy to be the next retailer to drop Kindles. Apple has dedicated sales space in most Best Buy stores, and a lot of leverage over the retailer. In addition, Amazon is a strong competitor to Best Buy, so there's plenty of reasons for Best Buy to stop selling Kindles.

You may say that this is all paranoia, and you may be right, but I've spent enough time in high tech to know that everything that's happened so far is right out of the Silicon Valley playbook. 



Monday, September 17, 2012

After more leaks than a sieve, the Panasonic GH3 is revealed

Perhaps it was the multiple leaks of specifications to photo websites, or Panasonic itself posting a promotional video on YouTube and then taking it down, or yesterday, Samy's Photo posting specifications and pictures, but today's announcement of Panasonic's new GH3 seems like an anti-climax. It shouldn't, since Digital Photography Review writes that only the Canon 5D Mark III has a higher 2K video bitrate than the GH3, and the Canon DSLR is priced more than $2,000 (U.S.) higher. The GH2 was a firmware hacker's dream, with the video bitrate taken all the way to 176Mbps in AVCHD with All I-frames, while the maximum video ISO was increased from 3,200 to 12,800. The problem was that some GH2s became unstable when run at this insane bitrate (most users chose a more reasonable 44Mbps, which is still much faster than the maximum 28Mbps of AVCHD 2.0 at 1080p60).

Panasonic has taken the hackers' improvements to heart, and has implemented a maximum bitrate of 50Mbps in 1080p60, or 72 to 80Mbps in All-I Frame at 1080/24p or 30p, both using H.264 compression. (All the frame and bit rates of AVCHD 2.0 are also supported.) Its maximum ISO, in both still and video mode, is now 12,800. This gives the GH3 virtually the same performance as the GH2 with hacked firmware, without requiring hacking or voiding the camera's warranty. In order to provide better performance while maintaining the camera's reliability and stability, the GH3 has a new three-core Venus 7 CPU.
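
To put those bitrates in perspective, here's a quick back-of-the-envelope sketch, in Python, of how many minutes of footage fit in one gigabyte at each rate. The bitrates are the ones cited above; the conversion assumes the decimal gigabytes that memory card makers advertise, so treat the output as rough guidance rather than official figures.

# Minutes of video per decimal gigabyte (10^9 bytes) at the
# bitrates discussed above. The figures come from the post;
# the arithmetic is standard, not Panasonic's.
BITRATES_MBPS = {
    "AVCHD 2.0 maximum (1080p60)": 28,
    "Typical hacked GH2 setting": 44,
    "GH3 1080p60": 50,
    "GH3 All-I 1080/24p-30p (high end)": 80,
    "Hacked GH2 maximum": 176,
}

def minutes_per_gb(mbps):
    """Minutes of footage that fit in 10^9 bytes at a given Mbps."""
    bits_per_gb = 8e9                  # 10^9 bytes * 8 bits
    return bits_per_gb / (mbps * 1e6) / 60

for label, mbps in BITRATES_MBPS.items():
    print(f"{label:35s} {mbps:4d} Mbps -> {minutes_per_gb(mbps):5.2f} min/GB")

At the GH2 hack's 176Mbps, one gigabyte holds well under a minute of footage--another reason most users settled on 44Mbps.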

The GH3 also supports timecode in H.264 and AVCHD modes, and it has a headphone jack for audio monitoring, in addition to a microphone jack and manual control over audio levels. The HDMI out can be configured with overlays on or off, so it can be used for monitoring and with an external recorder. (It's not clear whether the GH2's HDMI quirks, which made it unusable in many cases with external recorders, have been fixed in the GH3.)

The GH3 is no slouch as a still camera, either:
  • 16 Megapixel sensor
  • 1.7 million dot OLED viewfinder
  • 614K dot 3" OLED touchscreen display
  • Autofocus speed of 0.07 seconds
  • 6 fps maximum continuous frame rate
  • Memory card slot for SD, SDHC and SDXC cards
  • A fully sealed magnesium alloy frame
  • Built-in Wi-Fi
The U.S. price of the GH3, $1,300 for body only, is comparable to the price of the GH2 when it was first launched, but the GH3 is much more camera. The GH2 became the budget "go-to" DSLR-style camera for many cinematographers, even though its Micro Four-Thirds sensor is smaller than APS-C or full-frame. With its faster native bitrate, and a faster CPU that hackers may well be able to tune for even more outrageous performance, the GH3 is likely to supplant the GH2 as the bargain camera of choice for cinematographers.


Sunday, September 16, 2012

Surviving first contact...with your customers

I'm watching Sunday afternoon U.S. football on television, and there's no football without commercials. In the case of the game I'm watching, the commercials are primarily for beer and the television network's new shows. The purpose of the commercials is to get you to buy the products or watch the shows. Endless effort and enormous expense go into the commercials--who they're targeted at, what their messages are, and what they look and sound like.

The problem is that no commercial, no matter how good, will help your products and services survive first contact--the point at which the customer actually buys the product, uses the service or watches the show. Movie studios have learned that the hard way. Heavy advertising and promotion increase the odds of a good opening day--at which point advertising ceases to be effective, and word of mouth takes over. If viewers love the movie, they'll text their friends and tweet about it, and ticket sales will increase over the weekend. On the other hand, if they hate it, their texts and tweets will drive ticket sales down over the weekend, and sales will collapse the following week.

The same thing happens every day in every product category. Advertising can stimulate the first purchase or the first viewing, but it can't persuade people who tried your product and didn't like it to buy more, nor can it stop them from telling their friends, acquaintances and followers. That's why your focus should be on the buyer's first experience with your product or service, not on the effort to get them to buy it in the first place.

That may sound dangerously like the "build a better mousetrap" argument, but it's not. I'm not saying that you can skip promotion; I'm saying that your customer's first experience with your product or service matters far more. The reason is that word of mouth is becoming the driving force behind future sales of just about everything, and unhappy current customers equal fewer future customers. If you've got a restaurant, your food and service had better be good. If you're selling cars, the cars that your customers buy had better be reliable, and when they need service, the service had better be good. If you're trying to get people to watch your new television show, that first episode had better be great.

Advertising and promotion won't help you to survive first contact with new customers, and it won't save you from bad word of mouth. All it will do is increase the chance that you'll make that first sale. The rest is up to you.

Friday, September 14, 2012

What happens if Apple's announcements are no longer news?

This was the week of Apple's big iPhone 5 reveal, and it was like many of Apple's press announcements: A packed Yerba Buena Center; Apple executives describing product features using superlatives usually reserved for...well, for Apple product launches; and the usual product videos, including designer Jony Ive talking about how incredible his latest design is. There was also the endless parade of television news trucks lining the streets around Moscone Center, and the ever-increasing number of liveblogs covering the events AS! THEY! HAPPENED! What there wasn't was much actual news, and that could be a problem for future Apple product launches.

As the Columbia Journalism Review pointed out, virtually every detail of the iPhone 5 had been leaked before the event. The iPhone checked all the boxes on the "must have" feature list--bigger screen, faster processor, better camera and LTE--but there wasn't anything groundbreaking about its design or functionality. If you didn't know that the iPhone 5's bigger screen can accommodate an additional row of icons, it would be hard to tell the iPhone 5 apart from the 4 or 4S at a glance. (Update, September 23, 2012: The iPhone 5 is actually fairly easy to tell apart from the 4 and 4S, even when it's not turned on. Apple has done away with all the chrome trim on the phone, and the back is metal, not glass.)

In addition, the presentation was long. There was everything you'd expect in an iPhone rollout, followed by everything you'd expect in an iPod rollout. I suspect that Apple tied the two announcements together in order to get more attention for the new iPods, but if the company is actually planning to launch a smaller iPad next month, it probably wouldn't have hurt anything to announce the iPods at that event.

The CJR picked up on some of the liveblogs' sense of disappointment: it noted that Engadget's coverage reached parody levels, with 78 exclamation points in 122 minutes. The New York Times' coverage was deemed sober, although assigning four reporters to the story was overkill. The Wall Street Journal also avoided getting over-excited.

Some observers say that Apple is most likely going down the same path with the iPhone that it followed with the iMac, MacBook Pro and MacBook Air product lines: It's optimized the physical design of the iPhone, and future changes will be more incremental than revolutionary. That makes sense and may very well be true, but you rarely see the huge press turnout and coverage for Apple's Mac product announcements that you see for the iPhone and iPad.

It's true that customers don't seem to find the iPhone 5 disappointing--it sold out of its first week's allotment in 30 minutes, and that was with pre-ordering starting at 3:01 a.m. Eastern time in the U.S. However, what matters in this case is whether the press sees a lot of news value in Apple's future announcements. If all the major news leaks before the announcements, the story is going to become what Apple was still able to keep secret, not what it announces. Given that Apple has so many production partners, keeping new products under wraps will only get harder.

Apple's had an enormous advantage over its competitors because it could count on at least $100 million in free publicity for each launch from the world's biggest media outlets--publicity that, because it was filtered through news organizations, carried an extra level of authority. If Apple's announcements lose their newsworthiness, they'll also lose their impact. Even if Apple's management figures out how to restore the CIA-like security that the company's product launches had in the past, its "reality distortion field" may be gone for good.

Wednesday, September 12, 2012

Sony's A99 full-frame DSLR is official

According to Digital Photography Review, Sony has officially announced its first full-frame DSLR in four years--the A99. Technically, the A99 isn't a DSLR, because it uses the transparent mirror technology of Sony's other Alpha cameras. Sony claims that the mirror design makes the A99 the lightest full-frame DSLR on the market (1.79 lb. including batteries.) Instead of an optical viewfinder, it has a 2.4 Megapixel OLED viewfinder. It's also got a 1.23 Megapixel LCD display with hinges that allow it to be tilted, swiveled and reversed (it also makes great julienne fries.) As with all Alpha cameras, it's got a Sony A lens mount. The A99 has a 24MP sensor with dual phase-detection autofocus systems. It can output 14-bit RAW images with an ISO range of 100-25,600. The A99 can shoot up to 6 frames per second in burst mode, and has a built-in GPS. Storage options are Memory Stick PRO Duo and PRO-HG Duo, and SD, SDHC and SDXC cards.

On the video side, the A99 fully implements AVCHD 2.0, with frame rates up to 1080p60 at 28Mbps and 1080i60 at 24Mbps. It also outputs uncompressed video over its HDMI interface to an external recorder or monitor. The A99 has microphone inputs and a headphone output, and an optional stereo XLR adapter connects to the camera's intelligent hot shoe. A "silent control dial" next to the lens allows a variety of settings to be changed without bumping the camera during recording.
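
For a sense of why uncompressed HDMI output matters, here's a rough comparison of its data rate against the camera's internal AVCHD recording. Sony's announcement doesn't specify the exact pixel format of the HDMI output, so this sketch assumes 8-bit 4:2:2 sampling (16 bits per pixel), a common format for HDMI video.

# Rough data-rate comparison: uncompressed 1080p60 over HDMI vs.
# the A99's internal 28Mbps AVCHD recording. The 16 bits/pixel
# figure assumes 8-bit 4:2:2 sampling -- an assumption, not a
# published A99 specification.
WIDTH, HEIGHT, FPS = 1920, 1080, 60
BITS_PER_PIXEL = 16                    # 8-bit luma + shared 8-bit chroma

uncompressed_bps = WIDTH * HEIGHT * FPS * BITS_PER_PIXEL
avchd_bps = 28e6

print(f"Uncompressed 1080p60: {uncompressed_bps / 1e9:.2f} Gbps")
print(f"Internal AVCHD:       {avchd_bps / 1e6:.0f} Mbps")
print(f"Ratio: roughly {uncompressed_bps / avchd_bps:.0f}x")

That works out to roughly a 70-to-1 difference, which is why an external recorder can preserve detail that the internal codec has to throw away.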

The A99 will be available in October at approximately $2,800 for body only; the XLR adapter, which will also ship in October, will be priced at $800. I can certainly understand Sony's decision not to burden the design of the A99 with XLR inputs for customers who only plan to use it for still photography, but $800 for the adapter seems steep to me--that's almost a third of the price of the camera itself.

If you already own an A900 and are looking for a replacement, or you've got a collection of A-mount lenses and want to upgrade to full-frame, the A99 will be your obvious choice. For other buyers, however, side-by-side testing against comparable models from Canon and Nikon over the next few weeks and months will reveal the A99's strengths and weaknesses.

Tuesday, September 11, 2012

Wishful thinking: "Silicon Valley Will Write The Next Big Check For Original Video Content"

At TechCrunch's Disrupt Conference today, Dana Brunetti, Kevin Spacey's partner in Trigger Street, a motion picture and television production company, said that "...Silicon Valley will likely become a major funding source for original content soon. For a company like Google, after all, offering a few million dollars to produce the next episode of a show like Mad Men and to put it on YouTube is pocket change." Perhaps, but that in no way means that it would be money well spent.

For decades, Hollywood producers and movie studios have solicited investment from people outside the entertainment business. The "term of art" for this kind of investment is "stupid money." Producers and studios go from country to country, convincing government leaders that tax breaks and credits for investment in films will result in thousands of jobs, not to mention great publicity for their countries. That's why you see credits for production funds you've never heard of, and production sites far from Los Angeles, in the end titles of movies. Germany, South Korea, Canada and the U.K. are just some of the countries that have been tapped for production money and tax credits over the last two decades. Almost every U.S. state has offered some form of movie production tax credit or incentive at one time or another. These programs dry up as lawmakers learn that the jobs created and revenues generated don't compensate for the lost tax revenues. Producers then look for stupid money elsewhere, and the cycle repeats.

Individuals who invest in movies very rarely get a positive return on their investments. Entertainment industry accounting makes integral calculus look like simple arithmetic, and once money becomes available, a seemingly limitless number of hands reach out for it. Last week, for example, director Christopher Nolan had to file suit against his current and former talent agencies so that a court could decide which ones he must pay commissions to, as well as how much and on which projects. No one in their right mind would design the movie and television production system, or its rules for employment, the way they work today.

The approach that YouTube has taken with its channels makes sense. YouTube originally funded each of 100 channels with up to $1 million (some channels were rumored to have received as much as $2 million.) That's enough to "move the needle," but not enough for anyone to get rich on. The funding was an advance on advertising revenues, not an unrestricted grant. As YouTube gets actual performance numbers on each channel, it's offering additional advances to some, cutting others off and identifying new candidates for funding.
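
To make the advance-versus-grant distinction concrete, here's a hypothetical sketch of the recoupment math. The RPM and revenue-share figures below are my own illustrative assumptions; YouTube's actual terms aren't public.

# Illustrative recoupment math for an advance against ad revenue:
# the channel sees no further payout until its share of ad revenue
# covers the advance. All figures are assumptions for illustration.
def views_to_recoup(advance_usd, rpm_usd, channel_share):
    """Views needed before the channel's revenue share repays the advance.

    rpm_usd: gross ad revenue per 1,000 views.
    channel_share: fraction of that revenue credited to the channel.
    """
    revenue_per_view = (rpm_usd / 1000) * channel_share
    return advance_usd / revenue_per_view

# A $1 million advance at an assumed $5 gross RPM and 55% channel share:
print(f"{views_to_recoup(1_000_000, 5.0, 0.55):,.0f} views to recoup")

Under those assumed numbers, a channel would need well over 300 million views before it saw a dollar beyond its advance--a useful reminder of why this money "moves the needle" without making anyone rich.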

I have very real doubts about Netflix's original content strategy, which has funded House of Cards, a television series produced for Netflix by Trigger Street. The network television production model calls for hundreds of scripts, which are culled down into dozens of pilots, which are further cut to become the new shows for the next television season. Even at the most successful network, the success rate is pretty low. Netflix is cutting out most of the process and is going directly to production on the basis of scripts and the people involved. Choosing on the basis of name talent is far from a sure bet--for example, look at HBO's Luck, which had Dustin Hoffman in the lead and the directing/writing team of Michael Mann and David Milch. It was a disaster, and not just because three horses died during production.

If you want to invest in a movie so that you can rub shoulders with stars or see your name in the credits, and you have some "mad money" lying around that you can afford to lose, then by all means enjoy yourself. On the other hand, if you're investing in content in order to generate revenue, you've got to be a lot more systematic and much more hard-nosed.