Supreme Court ruling makes “obvious” patents harder to defend

In a decision issued today, the US Supreme Court reinvigorated the "obviousness test" used to determine whether a patent should be issued. Ruling in the case of KSR v. Teleflex, the Court found that the US Court of Appeals for the Federal Circuit, which handles patent appeals, had not been using a stringent-enough standard to determine whether a patent was obvious.

At issue in KSR v. Teleflex is a gas pedal manufactured by KSR. The pedal has an electronic sensor that automatically adjusts its height to the height of the driver. Teleflex claimed that KSR's products infringed on a patent it held. KSR said that Teleflex's patent combining a sensor and a gas pedal was one that failed the obviousness test, and as such, should not have been granted.

Patent law appeared to be on KSR's side: 1952 legislation mandated that an invention could not be patented if a "person having ordinary skill in the art" would consider it obvious. KSR argued that the US Patent and Trademark Office should have denied Teleflex's patent, as it only combines components performing functions they were previously known to do. However, the Federal Circuit had adopted a higher standard, ruling that those challenging a patent had to show that there was a "teaching, suggestion, or motivation" tying the earlier inventions together.

KSR had plenty of support from the likes of Intel, Microsoft, Cisco, and GM, while Teleflex's supporters included GE, 3M, DuPont, and a number of other companies concerned that some of their patent holdings would be harmed should the Court side with KSR.

SCOTUS found KSR's arguments convincing, ruling that the Federal Circuit had failed to properly apply the obviousness test. "The results of ordinary innovation are not the subject of exclusive rights under the patent laws," Justice Anthony Kennedy wrote for the Court. "Were it otherwise, patents might stifle rather than promote the progress of useful arts."

The Supreme Court also said that the Federal Circuit's conception of a patent's obviousness was too narrow. "The Circuit first erred in holding that courts and patent examiners should look only to the problem the patentee was trying to solve," according to Justice Kennedy's opinion. "Second, the appeals court erred in assuming that a person of ordinary skill in the art attempting to solve a problem will be led only to those prior art elements designed to solve the same problem."

The end result is that Teleflex's patent has been invalidated and, more importantly, the Federal Circuit will now have to pay closer attention to a patent's obviousness. That may be good news for Vonage in its appeal of a court's decision that its VoIP service infringes on three Verizon patents. Our analysis of the patents indicates that they, too, may fail the obviousness test.

More importantly, the Supreme Court ruling is good news for a patent system in dire need of fixing. New legislation introduced in Congress a couple of weeks ago is another attempt at a fix. The bill would streamline the patent appeal process while switching the US patent system from a first-to-invent to a first-to-file system. It would also cap the damages that could be awarded for patent infringement.

Talk to the hand: chimps, bonobos and the development of language

Regardless of one's feelings regarding zoos, it doesn't take much time spent in the primate house to come away with a feeling of kinship with our closest living relatives. Although they are not human, we recognize in chimpanzees and bonobos some of the same traits we display.

It's not an observation that escapes biologists, either. Researchers often study the behaviors and traits we share with other higher primates for clues to the evolutionary origins of human intelligence. A new study published this week in PNAS by scientists at the Yerkes National Primate Research Center looks at the use of hand gestures by chimpanzees and bonobos as a form of communication. The idea behind the study is to gain a better understanding of the roots of human language development.

Although both species of primate use vocalizations and facial expressions to communicate, they also use hand gestures. Unlike the vocalizations and facial expressions, however, hand gestures don't mean the same things to both chimpanzees and bonobos. They stem from, and are interpreted by, different parts of the brain.

The study compared the facial/vocal and manual displays of two groups of bonobos and two groups of chimpanzees. The researchers identified 31 distinct manual gestures and 18 facial/vocal displays relating to a range of behavioral activities such as grooming, feeding, and playing. It turns out that the facial/vocal displays could be recognized regardless of whether the viewer belonged to the same group or even the same species.

But when it came to hand gestures, most interpretations were specific to individual groups; a chimpanzee from one group would not be expected to know that a certain hand signal used by group A meant "please groom me." Hand signals were also found to be context dependent: "A good example of a shared gesture is the open-hand begging gesture, used by both apes and humans. This gesture can be used for food, if there is food around, but it also can be used to beg for help, for support, for money and so on. Its meaning is context-dependent," said Frans de Waal, one of the authors of the paper.

I'm most interested in the commonality of certain hand gestures between these ape species and ourselves; the begging example given above, for one. It seems that some aspects of our behavior have been hard-wired since before the human race could be said to exist.

For developers, Windows Live now means business

Microsoft wants to be a part of the next great web startup. This week at MIX07, the company modified the terms of its Windows Live application programming interface (API) license so that small businesses could freely use the services.

The overview of the new license is as follows:

Microsoft is enabling access to a broad set of Windows Live Platform services with a single, easy-to-understand pricing model based on the number of unique users (UUs) accessing your site or Web application. These terms are intended to remove costs associated with many Web applications and provide predictable costs for larger Web applications. There are some exceptions to the UU-based model: (1) Search: free up to 750,000 search queries/month, (2) Virtual Earth: free up to 3 million map tiles/month; and (3) Silverlight Streaming: free up to 4GB storage and unlimited outbound streaming, and no limit on the number of users that can view those streams.

According to the terms of use, if a site has over 1 million unique users, it will be charged US$0.25 per unique user per year, or it must share a portion of its advertising revenue with Microsoft. Search and Virtual Earth are exceptions in this scenario, as commercial agreements become necessary once the usage limits of those two services are reached.
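To make the fee structure concrete, here is a minimal sketch of the per-user math in Python. The threshold and rate come from the terms quoted above; the function name is mine, the ad revenue-sharing alternative is ignored, and I'm assuming the fee applies to every unique user once the cap is crossed (the terms could also be read as charging only for the users above it).

```python
FREE_UU_THRESHOLD = 1_000_000  # unique users allowed before fees apply (per the terms above)
FEE_PER_UU_PER_YEAR = 0.25     # US$ per unique user per year (per the terms above)

def estimated_annual_fee(unique_users: int) -> float:
    """Rough yearly Windows Live API fee for a site (illustrative only).

    Assumes the $0.25 rate applies to all unique users once the 1M
    threshold is exceeded, and ignores the revenue-sharing option.
    """
    if unique_users <= FREE_UU_THRESHOLD:
        return 0.0
    return unique_users * FEE_PER_UU_PER_YEAR

# Example: a site with 2.5 million unique users would owe roughly
# $625,000 per year under this reading, unless it shared ad revenue.
print(estimated_annual_fee(2_500_000))  # 625000.0
```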

According to Microsoft, the license restructuring has been done to show that the company can and does support small businesses. Whitney Burk, a spokesperson for Microsoft's Online Services Group, said that Microsoft wants to be there when the next great startup company emerges. "We're saying to all those small guys out there, bet your business on Microsoft. If you become the next YouTube, great news for you and great news for us."

Because some of the underlying services provided by the APIs are still in beta, Microsoft is not currently enforcing the new pricing scheme. However, even with the fee, the APIs are still a bargain. The two that I've used the most, Search and Virtual Earth, have clear documentation, excellent examples, and are straightforward to use.

With the new terms of use in place, businesses will be able to create and profit from their Windows Live mashups, and I wouldn't be surprised if companies create applications far more powerful than anything available in Windows Live right now. In fact, I predict that Windows Live will consist almost solely of APIs within two years.

The mystery of entanglement deepens

Pre-reading warning: this will make your head hurt, and if it doesn't, you probably misunderstood it. 🙂

One of the key mysteries of quantum mechanics is called entanglement. Imagine a crystal that (somehow) emits two photons via a single process: the states of the photons will be correlated. If you then measure the state of one photon, the state of the other is instantaneously determined as well, independent of the distance separating them. Although entanglement cannot be used to transmit information, it is a critical part of quantum computers; for computing, we rely on entanglement to make the state of one qubit depend intrinsically on the states of other qubits. However, entanglement is very delicate, and this mysterious linkage between two particles is easily destroyed by interactions with their surroundings.
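For concreteness, a textbook example of such a two-photon entangled state (a standard Bell state, not necessarily the one produced in this particular experiment) is

\[
|\psi\rangle = \tfrac{1}{\sqrt{2}}\bigl(|H\rangle_1|V\rangle_2 + |V\rangle_1|H\rangle_2\bigr),
\]

where H and V denote horizontal and vertical polarization. Measuring photon 1 and finding H immediately implies that photon 2 will be found in V, and vice versa, no matter how far apart the photons are.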

Experimental research¹ to be published in Science shows that entanglement can destroy itself even in the absence of environmental noise. I won't describe the actual experiment here, but essentially the researchers created entangled pairs of photons and subjected them to a controlled amount of noise. Afterwards, the states of the photons were measured to see how entangled they remained. What the researchers discovered is that entanglement can simply disappear, even when the amount of noise suggests that it should remain.

In physical systems, the probability per unit time of an event occurring—such as entanglement vanishing—is often constant. That produces an exponential decay with a long tail, which is useful in experiments and engineered systems because a predictable amount of time remains in which things can be done. Apparently, entanglement does not always disappear this gracefully, but rather stomps off in a huff before the party is half over.
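To see the contrast schematically (these are illustrative functional forms, not the paper's actual results), a constant decay rate Γ leaves some entanglement at every finite time, while "entanglement sudden death" reaches exactly zero at a finite time:

\[
E_{\mathrm{gradual}}(t) = E_0\,e^{-\Gamma t} > 0 \;\; \text{for all finite } t,
\qquad
E_{\mathrm{sudden}}(t) = \max\bigl(0,\, E_0 - \kappa t\bigr) = 0 \;\; \text{for } t \ge E_0/\kappa .
\]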

Additionally, the researchers noted that different entangled states evolve very differently. They show data where the entanglement of some states decays gracefully, while that of others disappears almost immediately, despite the starting states being very similar.

These two findings have serious implications for quantum encryption and quantum computing, both of which rely on entanglement. For these applications to advance, stable and long-lasting entanglement is required. Being able to choose the system so that only the longer-lived entangled states are produced will probably turn out to be quite challenging.

¹ First author: M. P. Almeida

Climate: Life in the twilight zone

In the coming issue of Science, there is a piece of research that contributes to our understanding of the ocean's role in the climate.

The paper¹ deals with how the ocean can act as a carbon store and how big that store might be. The ocean interacts with carbon in three basic ways. First, there is carbon dioxide dissolved in the water; this carbon is in equilibrium with the surrounding atmosphere, so the water itself cannot be thought of as a storage room for carbon. Second, there is the carbon in ocean life. When organisms die, they enter a complex cycle that ultimately leads to part of their carbon being recycled into the atmosphere and some being stored on the ocean floor, which is the third interaction and the only true long-term store. In this way, life in the ocean is not itself considered a carbon store, but rather a stepping stone on the way to storing carbon. The question is, of course, how much carbon makes it out of the ocean life cycle to end up on the sea floor?

The amount of carbon recaptured by ocean life as dead organisms sink to the ocean floor has, up to now, been modeled as an exponential decay. The decay starts at the ocean surface and continues through the twilight zone, where ocean life is still abundant. Below the twilight zone, no significant capture is considered to take place, so the particulates reach the ocean floor and remain there. This is clearly an "on average" model, one that cannot take local conditions into account.

The research in the linked paper reports on the variability of carbon storage between geographic locations. To do this, the researchers designed novel neutral-buoyancy traps. These traps stay at a predetermined depth for a predetermined time and catch particles as they fall. The traps are only open while at depth, so they give a true measure of the density of particulates falling toward the ocean floor at that depth. The traps were set at multiple depths in cold water near the Arctic Circle and in warm water near Hawaii, and the measurements were repeated (a very expensive exercise for ship-based experiments).
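As a schematic of that "on average" model (an illustrative form with a single length scale L, not the paper's exact parameterization), the downward particle flux F at a depth z below a reference depth z₀ would fall off as

\[
F(z) = F(z_0)\, e^{-(z - z_0)/L},
\]

with essentially all of the recapture happening in the twilight zone, and whatever flux survives to its base assumed to reach the sea floor.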

They discovered that the simple model underestimates the ability of life to keep carbon in circulation. Using the measured temperature differences, the model estimates approximately 11 petagrams per year (1 petagram = 10¹⁵ grams), while the collected particles indicate only 2.3-5.5 petagrams per year—a shortfall of around one year of anthropogenic carbon. Although both sites showed substantial variability (20-50 percent), the measured variability is not enough to make up the shortfall.
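The arithmetic behind that comparison, using a mid-2000s anthropogenic emissions figure of roughly 8 petagrams of carbon per year (my gloss, not a number from the paper):

\[
11\ \mathrm{Pg/yr} - 5.5\ \mathrm{Pg/yr} = 5.5\ \mathrm{Pg/yr},
\qquad
11\ \mathrm{Pg/yr} - 2.3\ \mathrm{Pg/yr} = 8.7\ \mathrm{Pg/yr},
\]

so the missing flux is somewhere between 5.5 and 8.7 petagrams of carbon per year, on the order of a single year of human emissions.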

What is of more concern is the observed temperature dependence, which shows increasingly poor carbon storage as temperature increases—a positive feedback loop. Moreover, this is likely to couple with other expected effects, such as increased stratification and increasing acidity.

In related news, the Guardian has summarized Mark Lynas' book 6 Degrees. Lynas went through the scientific literature of the past decade or so to compile a compendium of what we can expect for each degree of temperature increase. It is intentionally scary—as it should be—and makes it clear that everyone will be affected by even a seemingly small 1 degree Celsius increase in global average temperature.

¹ First author: K. O. Buesseler

Rift-to-drift transition triggered catastrophic global warming

The Paleocene-Eocene thermal maximum (PETM) was a global disruption: mean temperatures rose 5-6°C as over 1,500 gigatons of carbon entered the atmosphere. That carbon acidified the oceans, causing a major extinction of sea life. Both temperatures and carbon levels remained high for hundreds of thousands of years. But new data that will appear in the next issue of Science suggests the global disruption had a local cause: the breakup of the plates that produced the North Atlantic.

The PETM occurred roughly 55 million years ago, and the timing suggested a possible link, as the split between Europe and Greenland occurred at about the same time. Such geological events are often accompanied by major volcanic activity, which tends to pump a lot of carbon into the atmosphere. But the evidence for volcanic activity at the time of the PETM is sparse, and the biggest volcanic event near that time occurred approximately 450,000 years after the PETM.

The new work focused on the ash from this later event (called Ash-17 and found in Denmark). The authors were able to show that ash from sites in Greenland dated from precisely the same period and shared chemical properties with Ash-17, suggesting the deposits were formed in the same event. Using this information, they then tracked the Greenland geologic column backwards in time towards the PETM.

They found that the time of the PETM marked a major transition in Greenland geology: the first appearance of rocks that bear the signature of having formed at a mid-ocean ridge. They also looked at data from the other side of the breakup, in the Faroe Islands, and found an identical timing for the appearance of ridge-formed rock. Thus, the data suggest that the PETM corresponds not with a major eruption, but with the onset of a new phase of tectonic activity. This "rift to drift" transition marked the point where the breakup of Greenland and Northern Europe was complete and regular spreading at the Mid-Atlantic Ridge began.

If a major eruption didn't occur, how did all that carbon get into the atmosphere? The authors say their data favor a previously proposed model in which the Mid-Atlantic Ridge formed under a sediment-rich ocean basin. The sudden influx of magma and heat disrupted the sediment and released the huge store of carbon left there by millennia of ocean life. Once it hit the atmosphere, global temperatures spiked.

We come not to bury Kutaragi, but to praise him

Sony has dominated video game consoles since the launch of the first PlayStation in the mid-90s, and the company has long been known for top-quality consumer electronics. In the past few years, Sony has seen their electronics market share diminish due to the lower prices of competitors like Samsung and Toshiba, and their fortunes didn't improve with the release of their expensive and much-hyped PlayStation 3 to lukewarm reviews and diminishing sales.

Sir Howard Stringer has been attempting to improve the fortunes of the company, and the stock is once again rising. In this period of change, we now learn that the head of SCEI and "father of the PlayStation," Ken Kutaragi, is stepping down. While the exact reason for the change is unknown, Sony's game division has suffered heavy losses in recent quarters, and there have been widespread reports of Kutaragi's inability to work with other Sony executives for positive change.

The easy jokes about "Crazy Ken's" notable quotes shouldn't take away from the long list of accomplishments Ken Kutaragi has racked up since joining Sony directly after graduating from the University of Electro-Communications in Tokyo. After seeing the promise of big profits in video games with the rise of the Famicom, he pushed for Sony's inclusion in the Super Famicom system via Sony's SPC700 sound chip.

His reputation as a maverick is well-earned. After Nintendo snubbed Sony while the two companies were working on a CD-ROM add-on for the Super Famicom, the betrayal drove Sony to launch their own system: the PlayStation. The disc-based system took off and cemented Sony's place in gaming history after the Sega Saturn failed to sell in high numbers and the Nintendo 64 was hampered by its cartridge-based technology. With the launch of the PlayStation 2, Sony stood high above their competitors with full backwards compatibility with the original PlayStation and what was (at the time) an inexpensive DVD player. Sony Computer Entertainment became one of the company's biggest profit centers, and Kutaragi enjoyed the ride with a solid vision and some memorably wacky quotes.

He continued to build his reputation up until the launch of the PlayStation 3, claiming that the system would allow you to visit a "4D" world and that people would want to work harder to afford one. He also claimed that the system shouldn't be looked at as a games console. The inclusion of the Blu-ray drive drove up the price and so far hasn't proved as strong a sales motivator as the PlayStation 2's DVD drive.

Nintendo also proved to be a stronger competitor than Sony expected; the dominance of the Nintendo DS is a major obstacle to Sony's own portable, the PSP. The market has changed since the rise of Ken Kutaragi, and Sony now has to catch up. After a shaky US launch and before the European release, he was famously quoted admitting that Sony was losing their foothold in the market. "If you asked me if Sony's strength in hardware was in decline, right now I guess I would have to say that might be true," he said in an uncharacteristically candid moment.

Ken Kutaragi will be replaced by Kazuo Hirai, but will continue to work as a senior technology adviser for Sony. A shakeup in the command structure behind the PS3 may have a positive effect on future sales and strategy, turning the struggling platform into a profitable business. Let's take this moment to thank Ken Kutaragi for his many innovative ideas and enthusiastic spirit in the world of gaming. Remember him every time you notice how great the Super Nintendo sounds, or how the PlayStation 2 led to the wider appeal of DVDs. We're looking forward to seeing his future projects.

Project Honey Pot springs $1 billion lawsuit on spammers

A "John Doe" lawsuit filed in the U.S. District Court in Alexandria, Virginia, this morning could be one of the largest anti-spam suits ever filed in the US so far. The suit was filed by Project Honey Pot, a free anti-spam service that collects information on e-mail address harvesters across thousands of sites on the Internet that have their software installed. The class-action complaint was filed on behalf of roughly 20,000 Internet users in more than 100 countries, according to the organization's web site. HangZhou Night Net

Because webmasters large and small have installed its software on their servers, Project Honey Pot has collected information on thousands of e-mail harvesters in the US—the people or bots that automatically scan web sites for e-mail addresses and store them in a database for sale to spammers. The organization hopes that by filing the "John Doe" suit, it can use that information in conjunction with subpoenas to find out who the actual spammers are.
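The underlying technique is easy to sketch. The following Python fragment illustrates the general honeypot idea (serve each visitor a unique, never-published trap address and log who saw it); it is purely illustrative, not Project Honey Pot's actual software, and the trap domain and database schema are made up.

```python
import secrets
import sqlite3
import time

db = sqlite3.connect("honeypot.db")
db.execute("""CREATE TABLE IF NOT EXISTS traps (
                  address    TEXT PRIMARY KEY,
                  visitor_ip TEXT,
                  user_agent TEXT,
                  served_at  REAL)""")

def serve_trap_address(visitor_ip: str, user_agent: str) -> str:
    """Mint a unique e-mail address for this page view and log who saw it."""
    address = f"{secrets.token_hex(8)}@trap.example.org"  # hypothetical trap domain
    db.execute("INSERT INTO traps VALUES (?, ?, ?, ?)",
               (address, visitor_ip, user_agent, time.time()))
    db.commit()
    return address  # embedded in the page a harvester will scrape

def identify_harvester(spammed_address: str):
    """When spam arrives at a trap address, look up the harvesting visit."""
    return db.execute(
        "SELECT visitor_ip, user_agent, served_at FROM traps WHERE address = ?",
        (spammed_address,)).fetchone()  # None if the address isn't one of ours
```

Because each trap address is shown to exactly one visitor, any spam it later receives ties the mailing back to the specific harvesting client that collected the address.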

The lead attorney in the case is Jon Praed of the Internet Law Group. Praed has achieved quite the reputation as a "spam hunter" in recent years, as he has successfully represented AOL and Verizon against spammers.

Under Virginia's anti-spam statute and the federal CAN-SPAM law, Project Honey Pot's case could result in more than $1 billion in statutory damages against spammers. Although CAN-SPAM has been around since early 2004, the difficulty of finding and identifying the spammers behind any given campaign has meant that spam has increased over the years instead of decreasing. However, Project Honey Pot's approach could actually yield some results, Lawrence Baldwin, founder of myNetWatchman, told the Washington Post. "If they're successful, I think it will yield some very usable information in terms of identifying who the real miscreants are. Let's just hope some of them are here in United States and therefore reachable," he said.

Project Honey Pot appears to be fully committed to the fight for its users, and although it acknowledges that spam won't go away even if the case succeeds, it hopes that the case will help scare spammers in the future. The organization even says that, should it win, it may give back to its community: "Since we will know what Project Honey Pot members provided the data that ends up winning the case, maybe we'll be able to send them a little bonus," the organization wrote.

Controversial copyright directive passes European Parliament

The Second Intellectual Property Enforcement Directive (IPRED2), which we last covered earlier this month, came another step closer to passage yesterday, as it was approved by the European Parliament. The final version of the legislation removed some of the most controversial provisions, but critics still questioned why it was necessary to criminalize an area of law that has long been handled by the civil court system. The new directive mandates that those violating copyrights, trademarks, or certain other rights "on a commercial scale," or "inciting such infringements," be subject to fines of up to €300,000 and up to four years in jail.

"Today, 'inciting' is only criminal in some member states, and in exceptional cases such as hate speech. Elevating IPRs to the same level is a scary development," noted Jonas Maebe, an analyst with the Foundation for a Free Information Infrastructure, in the wake of the vote. "The inciting clause is also reminiscent of the US 'Induce Act', which threatened to make MP3 players such as the iPod illegal."

The directive is ostensibly designed to crack down on commercial piracy and counterfeiting operations, but critics warned that, thanks to the vague terminology of the directive, it could apply much more widely. They note that no definitions are offered for the terms "incitement" or "commercial scale," opening the possibility that the courts could interpret them to include innovators building new media products. Those terms could be interpreted, for example, to hold ISPs liable for the infringing activities of their users.

Critics did succeed in removing some of the original proposal's most egregious problems. The final directive excluded patent infringement from criminal penalties. Given the murky and inconsistent state of European patent law, critics worried that entrepreneurs could find themselves facing jail time for accidentally infringing upon an obscure patent. An amendment was also adopted ensuring that fair use of copyrighted works would not be considered a criminal offense.

However, most other intellectual property rights in Europe would be enforced with criminal penalties. For example, the penalties could be applied to violators of geographic indication rights. That would seem to mean that a winemaker from outside the Champagne region of France could not only be sued but thrown in jail for selling his sparkling wine as "champagne."

Ren Bucholz of the Electronic Frontier Foundation notes that the relatively close vote—374 to 278—points to growing opposition to the directive across Europe. The directive will now go to the Council of the European Union, which is made up of representatives of the governments of each of the EU's member countries. Several member countries, led by the UK and the Netherlands, have expressed concerns about the directive. Bucholz notes that if the Council disagrees with the Parliament's decision, IPRED2 would go back to the European Parliament for further consideration.

Net neutrality advocates thank AT&T CEO for shooting off his mouth

The SavetheInternet coalition turned one year old this week and celebrated with… a press conference. While not the single most exciting approach to birthday parties the world has ever seen, a press conference provided an opportunity to reflect on all that has happened regarding network neutrality in only a year. It also provided a powerful reminder of why CEOs like AT&T's Ed Whitacre need to watch their mouths.

Senator Byron Dorgan (D-ND), one of the driving forces behind the Senate's Dorgan/Snowe Net neutrality bill, joined the call to offer his thoughts on why a bill is needed. He recalled reading a quote last year from Ed Whitacre in BusinessWeek in which Whitacre complained about companies that used "his pipes" and did so "for free." That moment was illuminating for Dorgan. Even though he was raised in a small town (cue standard politician story about hardscrabble upbringing here), Dorgan said, "I can understand a pretty significant threat to the open architecture of the Internet." He thanked the coalition for its work and said that he would seek hearings on the issue in the Senate Commerce Committee in the next few months.

Craig Newmark, who identified himself as the "customer service" person for Craigslist, took the microphone next. "The Internet has always been about playing fair," he said, adding that he hears from plenty of telecom employees who don't support what their bosses have said. Pretty much everyone is for net neutrality, Newmark said, except people running "fake grassroots campaigns and that sort of thing."

Tim Wu, a professor at Columbia who has been heavily involved in this issue, pointed out how much progress had been made in a year. We're now seeing a "sea change in telecom policy," he said, pointing out that Net neutrality has become an issue that people truly care about. It's become one of the first "third rails" in telecom policy, he said—any politician who comes near it gets "shocked by the electric reaction they receive from the public."

But perhaps most surprising was Michele Combs of the Christian Coalition, who claimed that Net neutrality had become a "true family issue." Who would have thought that standing up for traditional marriage and for unfettered access to Google would be two of the Christian Coalition's main issues in the upcoming presidential race? But that's exactly what's happening. Combs said that neutrality is "number two on our agenda" now, in large part because her group represents 100,000 churches, most of which now use the web for everything from posting sermons to hosting online calendars to running e-mail lists. The churches fear that, without Net neutrality, it could become harder to access and distribute certain kinds of content.

The conference illustrated one of the movement's biggest successes, which has been its ability to assemble a truly diverse coalition that includes both the Christian Coalition and MoveOn.org, Craigslist and US senators. With Rep. Ed Markey (D-MA) set to introduce a neutrality bill in the House shortly, the issue promises to get a thorough hearing during this session of Congress.

Adobe liberates Flex source code

Adobe revealed plans today to release the source code of the Flex SDK and compiler under the Mozilla Public License (MPL). Flex—a cross-platform compatible framework for developing interactive Flash applications with XML and ActionScript—allows developers to construct Flash programs using idioms that are less overtly media-oriented and better suited for conventional software development.

Availability of the source code, which will make it possible for independent developers to modify and improve Flex, could potentially cause a community of third-party contributors to emerge around the platform. Source code availability will also make it easier for third parties to incorporate Flex support into existing development tools or build new tools based on Flex components.

David Mendels, senior vice president of Adobe's Enterprise and Developer Business unit, believes that allowing the open source community to participate in Flex development will promote innovation. "The definition and evolution of Flex has been influenced by our incredibly talented developer community from day one," says Mendels. "The decision to open source Flex was a completely natural next step. I am incredibly excited to deeply collaborate with the developer community on Flex, and further fuel its momentum and innovation."

This is yet another positive sign of Adobe's willingness to work with the open-source community in the service of common goals and interests. Last year, Adobe began to collaborate with Mozilla developers to create an open-source ECMAScript 4 implementation for Firefox based on the newly-opened source code of Flash's high-performance ActionScript Virtual Machine, described by Mozilla CTO Brendan Eich as "the largest contribution to the Mozilla Foundation since its inception."

Although the availability of the Flex SDK source code is a big win for users and developers who are already using Flex, it isn't guaranteed to attract the interest of the broader open-source community. Flex applications run on top of Flash or Apollo, which are both still proprietary runtime components. Many in the open-source community are already committed to XUL, which facilitates standards-oriented development with XML and JavaScript and uses cross-platform compatible Mozilla technologies.

Some source code is already included in the Flex 2.0 SDK, but the process of completely opening the SDK, compilers, debugger, and other relevant components will continue gradually through the rest of the year.

Mozy beta for Mac OS X

Mac Mozy UI

Berkeley Data Systems is now offering online backup for Mac users with the Mac Mozy beta. The application and service provide 2GB of free storage, secured with 128-bit SSL and 448-bit Blowfish encryption, with an unlimited option for $5 a month. Participation in the beta program requires a name, an e-mail address, and a couple of survey questions you could lie about, but really shouldn't. As someone who just signed up, I'm already impressed. The beta of Mac Mozy is what .Mac Backup could have been, if it hadn't been developed by the Punishment Group at Apple.

Using Mac Mozy is pretty easy. After installing the client, the Setup Assistant guides you through the initial configuration, including choosing the encryption key. From there, it's simply a matter of choosing files, folders, and backup sets. Backup sets include items such as mail messages, contacts, bookmarks, keychains, and so on. After that, it's just a matter of scheduling, which is pretty lean at this point: either an automatic setting based on how long the computer has been inactive, or a daily backup at a set time. The application works in the background to perform incremental backups, and a menu bar icon shows the status of the current backup.

While Mac Mozy "just works," there are a couple of issues to be aware. First, the service only keeps a 30-day version archive, which may be an issue for the anal retentive. Second, you need to log into the website to restore files, which is kind of annoying. Access through the client would be nice. Finally, the Windows version has bandwidth throttling for uploads, something that will hopefully be coming to Mac Mozy.

Even with these caveats, Mac Mozy is nice, especially considering it's a beta. While it's probably not a good idea to do away with manual backups—especially with beta software—Mac Mozy has a lot of potential, and it's a free backup solution that puts .Mac and Backup to shame. Bring on the RC.

Jobs puts the kibosh on iTunes subscriptions

Before I say anything else about the possibility of an iTunes subscription service, keep in mind that for the purposes of this post, "subscription service" is referring to a subscription service for music only. A movie rental or subscription service for the Apple TV would be a massive selling point, so I suspect Apple is trying to cook one of those up. Music subscriptions are quite a different story, however.

As you may or may not know, Apple will be renewing its iTunes contracts with the major music labels over the next few months. The music companies, always anxious for extra revenue, are hoping that iTunes will introduce a subscription model similar to that of Rhapsody or Napster. This would allow them to "rent" music for a monthly fee and would boost revenue through recurring subscription charges.

Despite recent rumors, Steve Jobs isn't having it. He has said that "the subscription model has failed so far," and has suggested that customers generally prefer the iTunes sales model to any type of subscription service. I'm probably not qualified to speculate on whether iTunes users would prefer a subscription service, but given the strong sales on the iTunes Store over the past few years, I don't see why Apple would want to offer more than one type of pricing. Two pricing schemes would make the experience less consistent and more diluted (and consistency, as we all know, is something Apple is a big fan of), which is one more reason we won't see music subscriptions any time soon.

The big reason, though, is that a music subscription service runs against most of what Steve Jobs has been preaching lately. Apple already has a contract with EMI to distribute DRM-free music, so a change to any type of subscription service would be a regression in many ways. It also wouldn't make much sense for Jobs to push his anti-DRM agenda and then invite users into a music delivery scheme that is in many ways more restrictive.

I'm hopeful that we'll see more record companies agree to sell their music without DRM as a result of the upcoming negotiations. Sales on the iTunes Store have been good and EMI has already succumbed to Apple's pressure, so I suspect Steve will be tightening the thumbscrews to try to achieve his stated goal of having "half the songs on iTunes" available DRM-free. If what some music industry execs said recently about Amazon's rumored music store is to be believed, that day could come sooner rather than later.
