Supreme Court ruling makes “obvious” patents harder to defend

In a decision issued today, the US Supreme Court reinvigorated the "obviousness test" used to determine whether a patent should be issued. Ruling in the case of KSR v. Teleflex, the Court found that the US Court of Appeals for the Federal Circuit, which handles patent appeals, had not been applying a stringent-enough standard when determining whether a patent was obvious.

At issue in KSR v. Teleflex is a gas pedal manufactured by KSR. The pedal adjusts to the height of the driver and includes an electronic sensor. Teleflex claimed that KSR's products infringed on a patent it held. KSR countered that Teleflex's patent on combining a sensor with an adjustable gas pedal failed the obviousness test and, as such, should never have been granted.

Patent law appeared to be on KSR's side: 1952 legislation mandated that an invention could not be patented if a "person having ordinary skill in the art" would consider it obvious. KSR argued that the US Patent and Trademark Office should have denied Teleflex's patent, as it merely combined components performing functions they were already known to perform. However, the Federal Circuit had adopted a stricter standard for proving obviousness, ruling that those challenging a patent had to show a "teaching, suggestion, or motivation" tying the earlier inventions together.

KSR had plenty of support from the likes of Intel, Microsoft, Cisco, and GM, while Teleflex's supporters included GE, 3M, DuPont, and a number of other companies concerned that some of their patent holdings would be harmed should the Court side with KSR.

SCOTUS found KSR's arguments convincing, ruling that the Federal Circuit had failed to properly apply the obviousness test. "The results of ordinary innovation are not the subject of exclusive rights under the patent laws," Justice Anthony Kennedy wrote for the Court. "Were it otherwise, patents might stifle rather than promote the progress of useful arts."

The Supreme Court also said that the Federal Circuit's conception of a patent's obviousness was too narrow. "The Circuit first erred in holding that courts and patent examiners should look only to the problem the patentee was trying to solve," according to Justice Kennedy's opinion. "Second, the appeals court erred in assuming that a person of ordinary skill in the art attempting to solve a problem will be led only to those prior art elements designed to solve the same problem."

The end result is that Teleflex's patent has been invalidated and, more importantly, the Federal Circuit will now have to pay closer attention to a patent's obviousness. That may be good news for Vonage in its appeal of a court's decision that its VoIP service infringes on three Verizon patents. Our analysis of those patents indicates that they, too, may fail the obviousness test.

More broadly, the Supreme Court ruling is good news for a patent system in dire need of fixing. New legislation introduced in Congress a couple of weeks ago is another attempt at a fix. The bill would streamline the patent appeal process while switching the US patent system from a first-to-invent to a first-to-file system. It would also cap the amount of damages that could be awarded for infringing patents.

Talk to the hand: chimps, bonobos and the development of language

Regardless of one's feelings about zoos, it doesn't take much time spent in the primate house to come away with a feeling of kinship to our closest living relatives. Although they are not human, chimpanzees and bonobos display some of the same traits we recognize in ourselves.

It's not an observation that escapes biologists, either. Researchers often study the behaviors and traits we share with other higher primates for clues about the evolutionary origins of human intelligence. A new study published this week in PNAS by scientists at the Yerkes National Primate Research Center looks at the use of hand gestures by chimpanzees and bonobos as a form of communication, with the aim of better understanding the roots of human language development.

Although both species of primate use vocalizations and facial expressions to communicate, they also use hand gestures. Unlike the vocalizations and facial expressions, however, hand gestures don't mean the same things to both chimpanzees and bonobos. They stem from, and are interpreted by, different parts of the brain.

The study involved looking at the different facial/vocal and manual displays from two groups of bonobos and two groups of chimpanzees. The researchers identified 31 different manual gestures and 18 facial/vocal displays related to a range of behavioral activities such as grooming, feeding, and playing. It turns out that the facial/vocal displays could be recognized regardless of whether the viewer belonged to the same group or even the same species.

But when it came to hand gestures, most interpretations were specific to individual groups; a chimpanzee from one group would not be expected to know that a certain hand signal used by group A meant "please groom me." Hand signals were also found to be context dependent: "A good example of a shared gesture is the open-hand begging gesture, used by both apes and humans. This gesture can be used for food, if there is food around, but it also can be used to beg for help, for support, for money and so on. Its meaning is context-dependent," said Frans de Waal, one of the authors of the paper.

I'm most interested in the commonality of certain hand gestures between these ape species and ourselves; the begging example given above is one. It seems that some aspects of our behavior were hard-wired in before the human race could be said to exist.

For developers, Windows Live now means business

Microsoft wants to be a part of the next great web startup. This week at MIX07, the company modified the terms of its Windows Live application programming interface (API) license so that small businesses could freely use the services.

The overview of the new license is as follows:

Microsoft is enabling access to a broad set of Windows Live Platform services with a single, easy-to-understand pricing model based on the number of unique users (UUs) accessing your site or Web application. These terms are intended to remove costs associated with many Web applications and provide predictable costs for larger Web applications. There are some exceptions to the UU-based model: (1) Search: free up to 750,000 search queries/month, (2) Virtual Earth: free up to 3 million map tiles/month; and (3) Silverlight Streaming: free up to 4GB storage and unlimited outbound streaming, and no limit on the number of users that can view those streams.

According to the terms of use, if a site has over 1 million unique users, it will be charged US$0.25 per unique user per year, or it must share a portion of its advertising revenue with Microsoft. Search and Virtual Earth are not covered by this scenario; commercial agreements become necessary once the limits of those two services are reached.
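
As a rough illustration of how the unique-user pricing could work, here is a minimal sketch in Python. It assumes, based on the terms quoted above, a free tier up to 1 million unique users and a flat US$0.25 per unique user per year beyond that; whether the fee applies to every user or only to those above the threshold isn't spelled out, so the function (a hypothetical helper, not a Microsoft API) exposes that as a parameter, and the ad-revenue-sharing alternative is ignored.

```python
def estimate_annual_fee(unique_users, free_tier=1_000_000,
                        rate_per_uu=0.25, charge_all_users=True):
    """Rough estimate of the yearly Windows Live API fee.

    Assumptions (not confirmed by Microsoft's published terms):
    - sites at or below `free_tier` unique users pay nothing
    - above the tier, the fee is `rate_per_uu` dollars per unique user
      per year, applied to all users or only to the excess depending
      on `charge_all_users`
    - the ad-revenue-sharing option is ignored entirely
    """
    if unique_users <= free_tier:
        return 0.0
    billable = unique_users if charge_all_users else unique_users - free_tier
    return billable * rate_per_uu


if __name__ == "__main__":
    for users in (500_000, 1_500_000, 10_000_000):
        print(users, "unique users ->", estimate_annual_fee(users), "USD/year")
```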

According to Microsoft, the license restructuring has been done to show that the company can and does support small businesses. Whitney Burk, a spokesperson for Microsoft's Online Services Group, said that Microsoft wants to be there when the next great startup company emerges. "We're saying to all those small guys out there, bet your business on Microsoft. If you become the next YouTube, great news for you and great news for us."

Because some of the underlying services provided by the APIs are still in beta, Microsoft is not yet enforcing the new pricing scheme. However, even with the fee, the APIs are still a bargain. The two that I've used the most, Search and Virtual Earth, offer clear documentation and excellent examples, and are straightforward to use.

With the new terms of use in place, businesses will be able to create and profit from their Windows Live mashups, and I wouldn't be surprised if companies create applications far more powerful than anything available in Windows Live right now. In fact, I'm predicting that Windows Live will consist almost solely of APIs within two years.

Apple’s board to Fred Anderson: NO YUO!

Former Apple CFO Fred Anderson made some "interesting" accusations after settling with the SEC over the stock options debacle. In his public statement, Anderson specifically pointed to former general counsel Nancy Heinen and current CEO Steve Jobs as being responsible for the illegal backdating decisions. He said that Jobs assured him that everything was fine, even after Anderson warned him about the accounting problems.

Well, Apple's board will have absolutely none of that, it seems. Board members Bill Campbell, Millard Drexler, Al Gore, Arthur Levinson, Eric Schmidt, and Jerry York have got Steve's back (at least publicly, anyway). They issued a statement last night saying that they aren't going to get into a public debate with Fred Anderson over the issue. Here's the full text of what they had to say:

We are not going to enter into a public debate with Fred Anderson or his lawyer. Steve Jobs cooperated fully with Apple’s independent investigation and with the government’s investigation of stock option grants at Apple. The SEC investigated the matter thoroughly and its complaint speaks for itself, in terms of what it says, what it does not say, who it charges, and who it does not charge. We have complete confidence in the conclusions of Apple’s independent investigation, and in Steve’s integrity and his ability to lead Apple.

It's true that the SEC also issued a statement this week saying that Apple itself is not in hot water. Specifically, the SEC even cited Apple's cooperation and "prompt self-reporting." Is Jobs totally in the clear yet? Possibly not, but it certainly looks like the SEC is acting in his favor so far.

A bit of useful junk: mobile DNA and gene regulation

The mammalian genome appears to be largely a gene-free wasteland: only about 1.5 percent of it codes for proteins. Proteins aren't the only things that matter, though, as about five percent of the genome appears to be preserved under selective pressure. The rest of those conserved sequences are thought to regulate genes, either by serving as attachment sites for regulatory proteins or by producing regulatory RNAs. Much of the remaining DNA appears to be junk, consisting of introns, bits of virus, and mobile DNA-based parasites called transposable elements.

But that junk can sometimes be put to use. We've covered a number of cases where pieces of transposable elements have been pressed into service coding for proteins, including a survey of how they progress from junk to utility. A paper that will appear in PNAS takes a historical perspective on how transposons wind up doing something useful, searching the entire genome for pieces of transposon that appear to be regulating genes. The authors took the genomes of a series of mammals—humans, chimps, macaques, rats, mice, and dogs—and found the conserved regions that don't code for proteins. With that in hand, they scanned those sequences for similarity to previously identified transposon sequences.
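
The paper doesn't publish code, but the screen it describes boils down to a two-step filter: keep conserved regions that don't overlap protein-coding sequence, then flag the ones that look like known transposons. The sketch below is illustrative only; the coordinates are made up, and the sequence-similarity search is replaced here by a simple overlap check against a precomputed repeat annotation.

```python
# Illustrative sketch of the screen described above: conserved,
# non-coding regions that coincide with annotated transposon fragments.
# All coordinates and annotations are made-up placeholders.

def overlaps(region, features):
    """True if (chrom, start, end) overlaps any feature on the same chromosome."""
    chrom, start, end = region
    return any(c == chrom and start < f_end and f_start < end
               for c, f_start, f_end in features)

conserved_regions = [("chr1", 1_000, 1_200), ("chr2", 50_000, 50_150)]
coding_exons      = [("chr1", 900, 1_100)]        # overlap -> discarded
transposon_hits   = [("chr2", 50_050, 50_400)]    # e.g., from a repeat library

candidates = [r for r in conserved_regions
              if not overlaps(r, coding_exons) and overlaps(r, transposon_hits)]

print(candidates)   # [('chr2', 50000, 50150)]
```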

And find them they did: over 10,000 in the human genome. All told, they account for just over a megabase of DNA, or 0.04 percent of the genome. Most of these did not include the entire transposon (which is typically a few kilobases). Instead, fragments of transposons, typically about 100 bases long, were conserved. A few other clear trends were apparent in this population of useful bits of transposon. For one, they were enriched near some of the more interesting genes: those involved in development and those that regulate the expression of other genes.
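
Those numbers hang together; a quick check (the roughly 3.1-gigabase genome size is my approximation, the fragment count and length come from the article):

```python
fragments = 10_000          # "over 10,000" conserved transposon fragments
mean_length_bp = 100        # typical conserved fragment length
genome_bp = 3.1e9           # approximate human genome size (my assumption)

total_bp = fragments * mean_length_bp
print(f"{total_bp/1e6:.1f} Mb, {100 * total_bp / genome_bp:.3f}% of the genome")
# ~1.0 Mb, ~0.03% -- in the same ballpark as the 0.04 percent quoted above
```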

It didn't appear that a single, useful feature of the transposon was being consistently used. At different genes, different parts of the transposon were conserved and, even when the same part was present, its sequence could vary from place to place. The most notable feature, however, may be where they wound up. In general, they were far more common in regions with very few genes, and were usually far from the genes themselves, typically 100 kilobases to a megabase away.

It's not entirely clear what these data indicate. There is no coherent reason why the preserved sequences should be biased toward development genes, or why they should wind up so far from the coding portions of genes.

But the work reinforces previous suggestions that transposons, by moving around the genome, may be generally useful as a source of novel sequences that can get incorporated into either the regulatory or protein-coding portions of genes. That's not to say that each transposon is useful, of course—most of them are still junk (note that these useful ones account for 0.04 percent of the genome, while transposons in general account for nearly half of it). But this may be why these parasites are tolerated by just about every multicellular organism we've looked at.

Sony to join video sharing party on Friday

Sony has decided to go ahead and join the rest of the world in trying to launch its own video sharing service. The company announced its plans at a news conference in Tokyo today, saying that the service will launch on Friday, first in Japan and later in other countries, according to Reuters. "This is part of Sony's quiet software revolution," said Sony CEO Howard Stringer at the news conference.

Sony says that its service, which will be called eyeVio, will keep a close eye on uploaded content to make sure it doesn't violate copyrights. The company didn't expand on exactly how it plans to do this—whether it will leave copyright holders to rely on takedown notices, or whether it will attempt the much-talked-about filtering route. Takedowns are what YouTube has become infamous for, and the approach isn't going over well with most content providers. Viacom, for example, sued the company for "brazen" copyright infringement for "allowing" its clips to keep returning to the site. Microsoft plans to take the same route as YouTube with Soapbox, but has attempted to reach out to content providers about takedowns before getting into the same mess that YouTube has found itself in.

"We believe there's a need for a clean and safe place where companies can place their advertisements," Sony spokesperson Takeshi Honma told Reuters in reference to monitoring the uploaded content. He also said that eyeVio will be free to all users, but that the company hopes to generate revenue in the future through advertising and partnerships with content providers.

Will eyeVio be able to compete in the already crowded market of video sharing sites? The big names like YouTube, Soapbox, and MySpace Video are already having a tough enough time competing with each other. Smaller sites like Revver are still struggling to keep up, too, and Will Ferrell's FunnyOrDie is still working on maintaining popularity after its initial splash. And last month, News Corp. and NBC announced a huge partnership with various content providers to launch their own "YouTube killer" this summer. Sony may already be too late to the party unless the company can offer something unique that the other sites don't have.

FCC tries (and fails) to define unacceptable TV violence

The FCC has just released its long-awaited report on the possibility of regulating television violence that might be seen by children, and the agency is confident that it can be done constitutionally. Crafting a definition of unacceptable violence, though difficult, might be possible, and the FCC could enforce it through "time channeling" and mandatory labeling. The report has already led pundits to ask: why doesn't the FCC trust parents to make these decisions?

Actually, the various FCC commissioners insist that parents must remain the first line of defense. In a statement, Michael Copps said that "without their active involvement it is difficult to envision a successful cure for the violence virus." Fellow Democrat Jonathan Adelstein called parents the "first, last, and best line of defense against all forms of objectionable content," and Chairman Kevin Martin agreed in almost identical language. According to the commissioners, the FCC has no desire to do the job of parents, but some personal comments made by Adelstein highlight the difficulty that parents face.

"I fully understand that it is my choice to turn the television on or off," he said, but pointed out that "a trailer for a news show or a promotion for a horror movie" can pop up at any time. "I'm sure my children are not the only ones who have difficulty sleeping after they are inadvertently exposed to violence on television," he added.

One of the most frightening statistics in the FCC report had to do not with violence, but with the amount of television being watched in the US. The average household has a television set turned on for 8 hours and 11 minutes every single day, and children do a good deal of the watching. By the time most kids enter first grade, they've already seen three school years' worth of TV programming.
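
As a rough sanity check on that last claim, here is the arithmetic under my own assumptions (none of these figures come from the FCC report):

```python
# All assumptions are mine, for illustration only.
school_days_per_year = 180
school_hours_per_day = 6
tv_hours_per_day = 2          # assumed average for a young child

three_school_years = 3 * school_days_per_year * school_hours_per_day
years_of_viewing = three_school_years / (tv_hours_per_day * 365)
print(three_school_years, "hours, reached after about",
      round(years_of_viewing, 1), "years of viewing")
# ~3240 hours, reached in roughly 4-5 years -- well before first grade
```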

Given that kids are watching plenty of television, Congress has expressed concern about the images they are exposed to. 39 members of the House asked the FCC to undertake an examination of television violence back in 2004, and yesterday's report was the result. The ACLU and the National Association of Broadcasters object to any attempt to come up with a definition of unacceptable violence, but the FCC quoted from a wide variety of studies indicating at least some form of linkage between violent behavior in children and violent images on television. This finding is supported by the American Medical Association, the American Psychological Association, and others, which agree that "children exposed to violent programming at a young age have a higher tendency for violent and aggressive behavior later in life than children who are not so exposed."

The report recognizes a constitutional right to create violent content, but the FCC points out that broadcasters (especially over-the-air broadcasters) enjoy more limited First Amendment protections than regular citizens because of their "uniquely pervasive presence in the lives of all Americans" and because they are so easily accessible to children. The Supreme Court, which agrees with these claims, has allowed the agency to regulate "indecent" content on television, and the FCC believes that a similar, narrow attempt to regulate violence on the public airwaves would also be acceptable.

Possible solutions

The report proposes two measures to regulate violence: time channeling and mandatory ratings. Time channeling simply involves moving all unacceptably violent content into a time slot in which children are unlikely to be watching—between 10 p.m. and 6 a.m., for instance. Ratings are already undertaken by the industry on a voluntary basis, but the FCC cites numerous complaints that the broadcasters "underrate" their programs, and calls for a mandatory ratings program that would produce more consistent results.

In addition, few parents use the "V-chip" built into televisions since 2000 to filter content, and the FCC wants "further action to enable viewer-initiated blocking of violent television content." Like most of the report's suggestions, this one remains vague—it's not entirely clear what's being suggested, except that the government do something.

The vagueness of the report led Commissioner Adelstein (who supported the report in general) to wonder, "Are we saying Law and Order should be banned during hours when children are watching? It is anyone's guess after reading this Report. The Report is not a model of clarity."

In his response, Chairman Kevin Martin offered two more proposals for regulation. One is the reinstatement of a "family hour" at the beginning of prime time in which only child-friendly shows and commercials would be aired. The second idea is one near and dear to his heart: a la carte cable and satellite programming, which would give parents more control over content without requiring as much government intervention.

The report at least recognizes the importance of context to violence, and the issue of films like Schindler's List is explicitly considered. Still, even a well-crafted contextual definition leaves many people nervous, especially the broadcasters, who worry that they will never know in advance whether a particular show will be found to violate the rules. The FCC concedes that "any definition would have to be sufficiently clear to provide fair notice to regulated entities," but it's an open question whether such a definition can in fact be crafted.

Adelstein is not convinced that it can. He notes that the report does not actually offer any definition; it only concludes that such a definition would be possible. "Given that we are not able to offer a definition ourselves," he commented, "it does not appear to be as easy to define as some suggest."

Rapid pulse time could make Z machine ready for fusion

Fusion-based power generation, were it to become practical in the near future, would probably end the debate on how best to avoid pumping any more carbon into the atmosphere. But fusion research has moved slowly; the biggest effort currently in the works, the ITER project, has frequently gotten bogged down in politics.

ITER hopes to confine plasma as it is heated, eventually allowing it to reach a temperature and pressure where a sustained fusion reaction results. But there is a competing approach in which short bursts of fusion are generated repeatedly; this has been accomplished by focusing multiple extremely high-powered laser pulses on a small pellet containing fusible material. Sandia Lab's Z machine was designed to study the feasibility of this sort of pulsed system (although it does other cool stuff as well).

One of the biggest barriers standing between the Z machine and commercial fusion, however, is the time between firings. It simply can't build back up to the power levels needed to trigger fusion fast enough to make commercial application possible. That's why Sandia's press release earlier this week is so significant: it describes new technology that could allow the Z machine to fire as often as every ten seconds, thought to be sufficient for commercial use.

The new system is called a linear transformer driver and was developed in a collaboration between Sandia and Russian researchers. It combines large capacitors in parallel into toroidal arrays, each of which is capable of firing off a current of 0.5 megaamperes. These arrays could then be combined in series to produce the power necessary.
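
A back-of-the-envelope way to see how the series/parallel arrangement scales is sketched below. The 0.5 MA per-array current comes from the press release; the per-array voltage and the number of arrays are placeholder assumptions, included only to show the arithmetic (parallel capacitors add current within an array, series-stacked arrays add voltage).

```python
def ltd_stack_output(array_current_ma=0.5, array_voltage_kv=100.0, n_series=10):
    """Idealized output of a stack of linear transformer driver arrays.

    Assumptions: each array delivers `array_current_ma` mega-amperes
    (the 0.5 MA figure from the press release); the per-array voltage
    and array count are illustrative placeholders. Losses, timing
    jitter, and load coupling are all ignored.
    """
    current_a = array_current_ma * 1e6              # current set by one array
    voltage_v = array_voltage_kv * 1e3 * n_series   # voltages add in series
    peak_power_w = current_a * voltage_v
    return current_a, voltage_v, peak_power_w


if __name__ == "__main__":
    i, v, p = ltd_stack_output()
    print(f"{i/1e6:.1f} MA at {v/1e3:.0f} kV -> {p/1e12:.2f} TW peak")
```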

The technology has a number of advantages, primarily because it is relatively compact and efficient. Much of the wiring and the corresponding insulation could be removed from the Z machine if it were powered by linear transformer drivers. The big downside? No more of the "room full of blue lightning" images that make the Z machine such a treat to report on. Possibly because of the reduced complexity and resistance, these devices appear to be reliable: one of the units being tested has survived over 11,000 cycles.

The press release comes to a bit of an awkward end, in that it spends much of its time describing the funding situation at Sandia, which may stand between developing these linear transformer drivers and building enough of them to actually run the Z machine. Hopefully, the money can be found to get the work done, and costs should come down if more of these devices are ordered.

The mystery of entanglement deepens

Pre-reading warning: this will make your head hurt, and if it doesn't, you probably misunderstood it. 🙂

One of the key mysteries of quantum mechanics is called entanglement. Imagine some crystal that (somehow) emits two photons via a single process: the states of the photons will be correlated. If you then manipulate the state of one photon, the state of the other photon changes instantaneously as well, independent of the distance separating them. Although entanglement cannot be used to transmit information, it is a critical part of quantum computers. For computing, we rely on entanglement to make the state of one qubit depend intrinsically on the state of other qubits. However, entanglement is very delicate, and this mysterious linkage between two particles can easily be destroyed by interactions with their surroundings.

Experimental research1 to be published in Science shows that entanglement can destroy itself even in the absence of environmental noise. I won't describe the actual experiment here, but essentially the researchers created entangled pairs of photons and subjected them to a controlled amount of noise. Afterwards, the state of the photons was measured to see how well entangled they remained. What they discovered is that entanglement can simply vanish, even when the amount of noise suggests that it should persist.

In physical systems, the probability per unit time of an event occurring—such as entanglement vanishing—is often constant. This means that a long tail is always present, which is useful in many experiments and engineering systems because it gives you a predictable window in which things can be done. Entanglement, apparently, does not always disappear gracefully, but rather stomps off in a huff before the party is half over.
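
In equation form (my gloss on the contrast being drawn, not notation from the paper, with the concurrence C used as one standard measure of entanglement): a constant decay rate $\Gamma$ gives exponential survival, which never quite reaches zero, whereas in "sudden death" the entanglement hits exactly zero at a finite time and stays there:

$$ P(t) = e^{-\Gamma t} > 0 \;\; \text{for every finite } t, \qquad \text{versus} \qquad C(t) = 0 \;\; \text{for all } t \ge t_{\mathrm{death}}. $$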

Additionally, the researchers noted that different entangled states evolve very differently. They show data where the entanglement of some states decays gracefully, while for others it disappears very quickly, despite the starting states being very similar.

These two findings have serious implications for quantum encryption and quantum computing, both of which rely on entanglement. For these applications to advance, stable and long-lasting entanglement is required. Being able to choose the system so that only certain entangled states are produced will probably turn out to be quite challenging.

1 First author: M. P. Almeida

Climate: Life in the twilight zone

The coming issue of Science contains a piece of research that adds to our understanding of the ocean's role in the climate.

The paper1 deals with how the ocean can act as a carbon store and how big that store might be. The ocean interacts with carbon in three basic ways. First, there is carbon dioxide dissolved in the water; this carbon is in equilibrium with the surrounding atmosphere, so the water itself cannot be thought of as long-term storage. Second, there is ocean life, which is carbon based. When organisms die, they enter a complex cycle that ultimately recycles part of their carbon into the atmosphere and sends some of it to the third reservoir: storage in sediments on the ocean floor. In this way, life in the ocean is not itself considered a carbon store, but rather a stepping stone on the way to storing carbon. The question is, of course, how much carbon makes it out of the ocean life cycle to end up on the sea floor?

The amount of carbon recaptured by ocean life as dead organisms sink toward the ocean floor has, up to now, been modeled as an exponential decay. The decay starts at the ocean surface and continues through the twilight zone, where ocean life is still abundant. Below the twilight zone, no significant capture is considered to take place, so the particulates reach the ocean floor and remain there. This is clearly an "on average" model, which cannot take local conditions into account. The research in the linked paper reports on the variability of carbon storage between geographic locations. To do this, the researchers designed novel neutral-buoyancy traps. These traps stay at a predetermined depth for a predetermined time and catch particles as they fall. The traps are only open while at depth, so they give a true measure of the density of particulates falling toward the ocean floor at that depth. The traps were set in cold water near the Arctic Circle and in warm water near Hawaii, at multiple depths, and the measurements were repeated (a very expensive exercise for ship-based experiments).
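
To make the "on average" model concrete, here is a minimal sketch assuming a simple exponential attenuation of the sinking particle flux with depth; the surface flux, export depth, and attenuation length are placeholder values, not numbers from the paper.

```python
import math

def particle_flux(depth_m, surface_flux=100.0, export_depth_m=100.0,
                  attenuation_length_m=300.0):
    """Sinking carbon flux (arbitrary units) at a given depth.

    A minimal 'on average' model of the kind described above: flux
    leaving the sunlit layer at `export_depth_m` decays exponentially
    through the twilight zone; whatever survives reaches the sea floor.
    All parameter values are illustrative assumptions.
    """
    if depth_m <= export_depth_m:
        return surface_flux
    return surface_flux * math.exp(-(depth_m - export_depth_m) / attenuation_length_m)


if __name__ == "__main__":
    for z in (100, 300, 500, 1000):
        print(f"{z:5d} m: {particle_flux(z):6.1f}")
```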

They discovered that the simple model underestimates the ability of life to keep carbon in circulation. Using the measured temperature differences, the model puts the sinking carbon flux at approximately 11 petagrams per year (1 petagram = 1×10^15 grams), while the collected particles indicate only 2.3-5.5 petagrams per year—a shortfall of around one year's worth of anthropogenic carbon. Although both sites showed a substantial amount of variability (20-50 percent), the measured variability is not enough to make up the shortfall.
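
The arithmetic behind that comparison, using roughly 8 petagrams of carbon per year for mid-2000s anthropogenic emissions (my approximation, not a figure from the paper):

```python
modeled = 11.0                        # Pg C per year, per the simple model
measured_low, measured_high = 2.3, 5.5
anthropogenic = 8.0                   # rough annual emissions (my assumption)

print("shortfall:", round(modeled - measured_high, 1), "to",
      round(modeled - measured_low, 1), "Pg C/yr")
print("vs. annual anthropogenic emissions of roughly", anthropogenic, "Pg C/yr")
# shortfall of 5.5 to 8.7 Pg C/yr, i.e., about one year of emissions
```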

Of more concern is the observed temperature dependence, which shows increasingly poor carbon storage as the temperature increases—a positive feedback loop. Moreover, this is likely to couple with other expected effects, such as increasing stratification and increasing acidity.

In related news, the Guardian has summarized Mark Lynas' book Six Degrees. Lynas went through the scientific literature of the past decade or so to compile a compendium of what we can expect for each degree of temperature increase. It is intentionally scary—as it should be—and makes it clear that everyone will be affected by even a fairly small 1 degree Celsius increase in global average temperature.

1 First author: K. O. Buesseler

Rift-to-drift transition triggered catastrophic global warming

The Paleocene-Eocene thermal maximum (PETM) was a global disruption: mean temperatures rose 5-6°C as over 1,500 gigatons of carbon entered the atmosphere. That carbon acidified the oceans, causing a major extinction of sea life. Both temperatures and carbon levels remained high for hundreds of thousands of years. But new data that will appear in the next issue of Science suggest the global disruption had a local cause: the breakup of plates that produced the North Atlantic.

The PETM occurred roughly 55 million years ago, and the timing suggested a possible link, as the split between Europe and Greenland occurred at about that time. These geological events are often accompanied by major volcanic activity, which also tends to pump a lot of carbon into the atmosphere. But the evidence for volcanic activity at the time of the PETM is sparse, and the biggest volcanic event near that time occurred approximately 450,000 years after the PETM.

The new work focused on the ash from this later event (called Ash-17, and found in Denmark). The authors were able to show that ash from sites in Greenland dated from precisely the same period, and shared chemical properties with Ash-17, suggesting they were formed in the same event. Using this information, they then tracked the Greenland geologic column backwards in time towards the PETM.

They found that the time of the PETM marked a major transition in Greenland geology: the first appearance of rocks that bear the signature of having formed at a mid-ocean ridge. They also looked at data from the other side of the breakup, in the Faroe Islands, and found the same timing for the appearance of ridge-formed rock. Thus, the data suggest that the PETM doesn't correspond with a major eruption, but rather with the onset of a new phase in tectonic activity. This "rift to drift" transition marked the point where the breakup of Greenland and Northern Europe was complete, and regular spreading at the Mid-Atlantic Ridge began.

If a major eruption didn't occur, how did all that carbon get into the atmosphere? The authors say that their data favor a previously proposed model in which the Mid-Atlantic Ridge formed under a sediment-rich ocean basin. The sudden influx of magma and heat disrupted the sediment and released the huge amount of stored carbon left there by millennia of ocean life. Once it hit the atmosphere, global temperatures spiked.

We come not to bury Kutaragi, but to praise him

Sony has dominated video game consoles since the launch of the first PlayStation in the mid-90s, and the company has long been known for top-quality consumer electronics. In the past few years, Sony has seen its electronics market share diminish due to the lower prices of competitors like Samsung and Toshiba, and its fortunes didn't improve with the release of the expensive and much-hyped PlayStation 3 to lukewarm reviews and diminishing sales.

Sir Howard Stringer has been attempting to improve the fortunes of the company, and the stock is once again rising. In this period of change we now learn that the head of SCEI and the "father of the PlayStation," Ken Kutaragi, is stepping down. While the exact reason for the change is unknown, Sony's game division has suffered heavy losses in recent quarters, and there have been widespread reports of Kutaragi's inability to work with other Sony executives for positive change.

The easy jokes about "Crazy Ken's" notable quotes shouldn't take away from the long list of accomplishments that Ken Kutaragi has racked up since joining Sony directly after graduating from the University of Electro-Communications in Tokyo. After seeing the promise of big profits in video games with the rise of the Famicom, he pushed for Sony's involvement in the Super Famicom system via Sony's SPC700 sound chip.

His reputation as a maverick is well-earned. After Nintendo snubbed Sony while the two were working on a CD-ROM add-on for the Super Famicom, the betrayal prompted Sony to launch its own system: the PlayStation. The disc-based system took off and cemented Sony's place in gaming history after the Sega Saturn failed to sell in high numbers and the Nintendo 64 was hampered by its cartridge-based technology. With the launch of the PlayStation 2, Sony stood high above its competitors with full backwards compatibility with the original PlayStation and what was (at the time) an inexpensive DVD player. Sony Computer Entertainment became one of the company's biggest profit centers, and Kutaragi enjoyed the ride with a solid vision and some memorably wacky quotes.

He continued to build his reputation up until the launch of the PlayStation 3, claiming that the system would allow you to visit a "4D" world and that people would want to work harder to afford one. He also claimed that the system shouldn't be looked at as a games console. The inclusion of the Blu-ray drive drove up the price and so far hasn't proved as strong a sales motivator as the PlayStation 2's DVD drive.

Nintendo also proved to be a stronger competitor than Sony expected; the dominance of the Nintendo DS is a major obstacle to Sony's own portable, the PSP. The market has changed since the rise of Ken Kutaragi, and Sony now has to catch up. After a shaky US launch and before the European release, he famously admitted that Sony was losing its foothold in the market. "If you asked me if Sony's strength in hardware was in decline, right now I guess I would have to say that might be true," he said in an uncharacteristically candid moment.

Ken Kutaragi will be replaced by Kazuo Hirai, but will continue to work as a senior technology adviser for Sony. A shakeup in the command structure behind the PS3 may have a positive effect on future sales and strategy, turning the struggling platform into a profitable business. Let's take this moment to thank Ken Kutaragi for his many innovative ideas and enthusiastic spirit in the world of gaming. Remember him every time you notice how great the Super Nintendo sounds, or how the PlayStation 2 led to the wider appeal of DVDs. We're looking forward to seeing his future projects.

Project Honey Pot springs $1 billion lawsuit on spammers

A "John Doe" lawsuit filed in the U.S. District Court in Alexandria, Virginia, this morning could be one of the largest anti-spam suits ever filed in the US so far. The suit was filed by Project Honey Pot, a free anti-spam service that collects information on e-mail address harvesters across thousands of sites on the Internet that have their software installed. The class-action complaint was filed on behalf of roughly 20,000 Internet users in more than 100 countries, according to the organization's web site. HangZhou Night Net

Thanks to webmasters large and small installing its software on their servers, Project Honey Pot has collected information on thousands of e-mail harvesters in the US—people or bots that automatically scan web sites for e-mail addresses and then store them in a database for sale to spammers. The organization hopes that by filing the "John Doe" suit, it can use that information in conjunction with subpoenas to find out who the actual spammers are.
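
The basic trick behind the service is easy to sketch: each page visit gets its own, never-published e-mail address, logged alongside the visitor's details; if spam later arrives at that address, the logged visit identifies the harvester. The snippet below is a minimal illustration of that idea, not Project Honey Pot's actual software, and the domain and log structure are placeholders.

```python
import secrets
import time

# visit log: unique trap address -> details of the page fetch that saw it
harvest_log = {}

def issue_honeypot_address(visitor_ip, user_agent, domain="example.org"):
    """Generate a unique, never-published address for one page visit.

    The domain and log layout are illustrative; the real service
    coordinates addresses across thousands of member sites.
    """
    address = f"trap-{secrets.token_hex(8)}@{domain}"
    harvest_log[address] = {
        "ip": visitor_ip,
        "user_agent": user_agent,
        "seen_at": time.time(),
    }
    return address

def identify_harvester(spammed_address):
    """Given an address that later received spam, return who harvested it."""
    return harvest_log.get(spammed_address)


if __name__ == "__main__":
    addr = issue_honeypot_address("203.0.113.7", "BadBot/1.0")
    print("embedded in page:", addr)
    print("later traced to:", identify_harvester(addr))
```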

The lead attorney in the case is Jon Praed of the Internet Law Group. Praed has achieved quite the reputation as a "spam hunter" in recent years, as he has successfully represented AOL and Verizon against spammers.

Under Virginia's anti-spam statute and the federal CAN-SPAM law, Project Honey Pot's case could result in more than $1 billion in statutory damages against spammers. Although CAN-SPAM has been around since early 2004, the difficulty of finding and identifying the spammers in question has meant an increase in spam over the years instead of a decrease. However, Project Honey Pot's approach could actually yield some results, myNetWatchman founder Lawrence Baldwin told the Washington Post. "If they're successful, I think it will yield some very usable information in terms of identifying who the real miscreants are. Let's just hope some of them are here in the United States and therefore reachable," he said.

Project Honey Pot appears to be fully committed to the fight for its users, and although the organization acknowledges that spam won't go away even if the case succeeds, it hopes that the case will help scare spammers in the future. The organization even says that should it win, it may give back to its community: "Since we will know which Project Honey Pot members provided the data that ends up winning the case, maybe we'll be able to send them a little bonus," the organization wrote.
