I’m just leaving Japan after a day-long conference sponsored in part by the Japanese National Institute of Informatics. The morning session was sponsored by Creative Commons Japan and consisted of six presentations by people using Creative Commons licenses, or in a couple of cases, doing things that depend upon CC-like freedom.
Japan is one of my favorite places in the world, and I love any excuse to be here. But I had a strange sense of déjà vu as I listened to the stories of what people are doing here.
In the late 1990s, I travelled a bunch to South America to talk about cyberspace. In conference after conference, I listened to South Americans describe how they were waiting for the government to enact rules so they could begin to develop business in cyberspace. That reaction puzzled me, an American. As I explained to those who would listen, in America, business wasn’t waiting for the government to “clarify” rules. It was simply building business in cyberspace without any support from government.
Yet as I listened to the Japanese describe the stuff they were doing with content in cyberspace, I realized we (America) had become South America. One presentation in particular described an extraordinary database the NII had constructed to discover relevance in linked databases, and drive traffic across a database of texts. I was astonished by the demonstration, and thought to myself that we could never build something like this in the U.S., at least not until cases like the Google Book Search case were resolved.
And bingo — the moment of recognition. We are now, like the South Americans in the 1990s, waiting for the government to clarify the rules. Investment is too uncertain; the liability too unclear. We thus wait, and fall further behind nations such as Japan, where the IP (as in copyright) bar is not so keen to stifle IP (as in the goose that …).
(Oh, and re broadband: NTT is now well on its way to rolling fiber to the home. Cost per home: between $30 and $50 per month, for 100 megabits/s.)
Are you going to China, Lessig?
I recently viewed the discussion at the NYPL about Google Book Search, with a panel of an author, a lawyer from the publishers’ association, the Google guy, and you. I saw one point that would nix one of the publishers’ problems.
The publishers’ association does not want opt-out because everyone would start doing what Google does and then they would have to opt out of them all. The publishers should make a complete list of works they don’t want searched, and then any system (like Google, Amazon, Microsoft) can just grab the list and remove the results (something like the filter sketched below).
Similar to what the P2P people (Napster and Kazaa and others) have been asking of the RIAA and MPAA. But of course, no list of works has been produced.
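To make the idea concrete, here is a minimal sketch of how an indexer could honor a publisher-supplied do-not-search list. This is only an illustration: the list format, the use of ISBNs as identifiers, and the field names are all my assumptions, not anything Google, Amazon, or Microsoft has actually specified.

    # Hypothetical sketch: drop search results whose identifiers appear
    # on a publisher-supplied opt-out list. Format and field names are
    # assumptions for illustration only.

    def filter_results(results, excluded_isbns):
        """Keep only results whose ISBN is not on the opt-out list."""
        return [r for r in results if r["isbn"] not in excluded_isbns]

    # A real system would presumably fetch and refresh the list periodically;
    # here it is hard-coded for the sake of the example.
    excluded = {"9990000000001"}

    results = [
        {"isbn": "9780000000002", "title": "An indexed work"},
        {"isbn": "9990000000001", "title": "An opted-out work"},
    ]

    print(filter_results(results, excluded))  # only the indexed work survives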
“We are now, like the South Americans in the 1990s, waiting for the government to clarify the rules. Investment is too uncertain; the liability too unclear. We thus wait, and fall further behind nations such as Japan, where the IP (as in copyright) bar is not so keen to stifle IP (as in the goose that …).”
And what about the special interests that demand innovation-stifling government regulations like net neutrality? Or government mandates that require portability and interoperability? Hmmm?
It seems to us mice that it is business-model-killing initiatives like this that create uncertainty and scare away investment in the US of A.
It is not “unclear liability” that is the problem. Liability is a manageable problem. What is unclear is profitability; Swiss Re offers no solution for that.
Nice to see your blog.
From your blog, I feel you’re busy all day. One who knows how to adjust himself can succeed. So keep some time for yourself, your family, and your friends.
It’s infuriating how much cheap bandwidth other countries are getting compared to the US. I’d love to find out what your average Japanese person is doing with 100 Mbps downloads. Presumably it isn’t all going toward gaming.
I would like to know what actual speeds these customers are getting. The last-mile connection doesn’t mean much if the back end is so overbooked that no one can stream a Yahoo video.
poptones:
Here’s a good place to start:
NETS: Broadband in Japan
It’s from 2004, but it’s a very thorough examination of broadband in Japan. Information on transmission loss per meter starts on page 22.
Thanks, commons music, for the GREAT link. Very interesting.
I don’t think you understand: it has nothing at all to do with “losses per meter” and everything to do with simple economics. You can have fiber running directly to every machine in Tokyo, but if Tokyo connects to the world through a single T1 line, nobody in the entire city is going to do better than 1.5 Mbps service.
Selling fast bit rates is a common marketing tool, but in fact many providers so oversell their back-end infrastructure that customers will never see maximum utilization of that bandwidth. As I’ve pointed out before: when I lived in LA and got DSL service, I was one of the first in my neighborhood, and the service was fantastic for about three months. As more customers in the neighborhood were brought online, that DSLAM was forced to route more and more traffic until it was so bad I couldn’t even stream a 100 kbps video without interruption.
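Here is the back-of-the-envelope version of that point. The numbers are invented purely for illustration, not drawn from any particular provider:

    # Illustrative arithmetic only: how an oversold backhaul caps real throughput.
    advertised_mbps = 1.5      # the rate each DSL customer is sold
    subscribers = 600          # customers sharing one DSLAM / upstream link
    backhaul_mbps = 45.0       # capacity of the shared upstream link (a DS3)
    active_fraction = 0.8      # share of subscribers pulling data at once

    active_users = subscribers * active_fraction
    per_user_mbps = min(advertised_mbps, backhaul_mbps / active_users)

    print(f"Each active user actually gets about {per_user_mbps:.2f} Mbps "
          f"of the advertised {advertised_mbps} Mbps.")

With those made-up numbers the effective rate falls to roughly 0.09 Mbps per active user, which is why even a 100 kbps stream stutters on a nominally 1.5 Mbps line.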
They finally began offering DSL where I live now – honestly, I was flabbergasted when I first heard about it. Anyway, most customers go through the phone company for their DSL. Since they want me to install special software just to connect to the damn net, I went with another, smaller regional provider. While others are amazed at their blistering 400 kbps downloads on their 1.5 Mbps connections, I have no problem maxing out the full 1.5 Mbps connection – in fact, I’m so pleased with the service I may pay another fifteen bucks a month to go to 3 Mbps.
Understand? It’s all about the provider having sufficient aggregate bandwidth to supply its customers. This is ALWAYS a problem – it’s why colleges often talk of shutting down P2P ports off campus, and why some ISPs are talking about charging streaming providers more money to give them priority. If the local ISPs will invest in the caching technology to support their local customers, everybody wins: the aggregate bandwidth is more efficiently utilized, the load on the hosting server (i.e. Google Video) is reduced, and the end user gets a *better* experience.
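The same kind of rough arithmetic shows the shape of the caching win. Again, these numbers are invented for illustration, not measured from any ISP:

    # Illustrative only: a local cache cuts the upstream traffic an ISP must carry.
    demand_mbps = 400.0      # aggregate video demand from the ISP's customers
    cache_hit_ratio = 0.7    # fraction of requests served from the local cache

    upstream_mbps = demand_mbps * (1 - cache_hit_ratio)
    print(f"Upstream load drops from {demand_mbps:.0f} Mbps to "
          f"{upstream_mbps:.0f} Mbps; the origin server sees the same reduction.")

The customers still see the full demand served locally, but the shared upstream link and the hosting server only carry the cache misses.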