About Generation 5
About the author
Hi! My name is Paul Houle and I’m a software developer who lives near Ithaca, NY. I’ve been working on web applications and rich internet applications (RIAs) for more than ten years. Although I’ve always been at the forefront of new technology, I have a particular interest in the aspects of computing that are timeless and that apply across different computing platforms.
To give you some samples of my work: I was writing Java applets such as Freedom VR and a Hysteresis Simulation back when Java was in beta. (These examples might still work in your browser.) More recently I developed a 3-D rendering engine in JavaScript that powers an interactive collection of polyhedra models; of course, I’ve also bolted JavaScript functionality onto conventional web applications, most recently using Prototype and jQuery. I spent the summer of 2007 developing a rather complex GWT application that lets users edit a semantic web, and I’m currently developing a Silverlight application.
When I haven’t been developing RIAs, I’ve been working on e-business, e-publishing and community-building applications. I was the lead developer at arxiv.org for several years, and I have webmastered hundreds of sites — a few of my recent projects are spoonriveranthology.net and Animal Photos! Just before I got back into RIAs, I was obsessed with the problem of developing quality CRUD apps cheaply and quickly, including project management and the use of modern frameworks such as Symfony, Rails and Spring.
About This Blog
I registered the “gen5.info” domain name a few years ago, planning to make a blog, but I started blogging seriously around the time the Silverlight 2.0 beta came out. All HTTP calls in Silverlight 2 are asynchronous, which has required my team to literally turn our application inside out — fortunately, I had developed a library of patterns for asynchronous applications for my GWT application last summer, so I was able to hit the ground running. I knew a lot of developers were in the same situation, so I felt I had something useful to share.
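To give a flavor of what “turning an application inside out” means, here is a minimal JavaScript sketch of the callback style that asynchronous HTTP forces on application code. This is not code from my GWT or Silverlight projects; the function names (`fetchUser`, `fetchOrders`, `loadUserOrders`) are hypothetical, and `setTimeout` stands in for a real network request.

```javascript
// Each "request" accepts a callback instead of returning a value.
// A real app would issue an XMLHttpRequest here; setTimeout simulates
// network latency so the callback fires asynchronously.
function fetchUser(id, onDone) {
  setTimeout(function () {
    onDone({ id: id, name: "user" + id });
  }, 0);
}

function fetchOrders(user, onDone) {
  setTimeout(function () {
    onDone([user.id + "-order-1"]);
  }, 0);
}

// Synchronous code like `orders = fetchOrders(fetchUser(7))` has to be
// turned inside out: each step becomes the callback of the previous one.
function loadUserOrders(id, onDone) {
  fetchUser(id, function (user) {
    fetchOrders(user, function (orders) {
      onDone(orders);
    });
  });
}

loadUserOrders(7, function (orders) {
  console.log(orders); // logs [ '7-order-1' ]
});
```

Every step that used to be a simple sequential statement becomes a nested callback, which is exactly why having a library of patterns for this style makes such a difference.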
About the name
Things don’t always go as planned.
I was planning on writing a programming blog when I registered the domain name, but one about the theme of parallel and concurrent programming entering the mainstream. This was in 2005, when dual-core processors were just entering the market — I’m writing this in 2008 on a dual-core laptop that is in no way a top-of-the-line model.
Up until 1980 or so, people described four generations of computers, representing increasing densities of hardware integration:
- Vacuum Tubes (1940-1956)
- Transistors (1957-1963)
- Integrated Circuits (1963-1971)
- Microprocessors (1971-2005)
In the 1980s, a few visionaries saw the crisis that the microelectronics industry is in now: it would become impossible to increase the performance of a single-tasking processor by adding transistors and raising the clock rate, long before we’d lose the ability to pack ever more, ever smaller transistors onto our chips. Today we’re seeing the emergence of multi-core processors and systems-on-chip, as well as specialized parallel processors such as graphics processing units, network processors and the Cell processor used in the PlayStation 3. In the meantime, you can buy clusters of computers from vendors as big as IBM and as small as Red Barn Computers: people are using clusters to do everything from scientific computing to serving web sites. These are all early forms of fifth-generation computers: parallel computers.
Around 1981, I remember reading a number of articles in the popular science press about how the Japanese were planning to build a “Fifth Generation Computer” that was going to be intelligent, understand natural language and take over the industry. This was at a time when US manufacturing was getting whipped by Japanese manufacturers, particularly in the automobile sector, so people in the US and Europe were a bit intimidated — and started their own efforts to develop parallel computers.
Parallel computers in the US were largely focused on scientific and engineering computing, particularly as applied to national defense. The Japanese took a different approach, funding a ten-year effort to develop a parallel computer for running artificial intelligence (AI) applications. They first developed a microprocessor specialized for running logic languages like Prolog, then developed a parallelizable logic language (Prolog isn’t one), and then hitched hundreds of these processors together to build parallel machines.
Although the effort produced working hardware and software, many regarded it as a failure because it never produced commercially viable equipment. Mainstream microprocessors had increased in performance so rapidly as to make the specialized processors obsolete, while the world largely left logic programming and the associated approaches to artificial intelligence behind.
Computer Science, however, isn’t about producing viable products, but about producing ideas that lead to viable products years down the road. In fact, the organizers of the Fifth Generation Computer Project (FGCP) predicted that it would take another ten years for commercial fruits to emerge. Although the FGCP used an unusual microarchitecture, it ultimately adopted the cluster-of-SMPs communication model that has become the mainstream for massively parallel computers and demonstrated that search and information retrieval (IR) tasks could be parallelized.
IBM’s Deep Blue, a massively parallel chess machine, defeated World Champion Garry Kasparov in 1997. Google’s web search is powered by massively parallel IR algorithms similar to those foreseen by the FGCP.
It’s a bit sad that the term “Fifth Generation Computer” fell out of fashion, because it points out that the current transition towards parallelism isn’t just about making things bigger and faster, but about being able to use computers to do things that we couldn’t do before. It’s something that’s exciting to be a part of, and I just might find the time to write about it someday.