Silicon 'neurons' may add a new dimension to chips

Energy constraints lead to novel ways of efficient, at-a-distance communication

When it fires, a neuron consumes significantly more energy than an equivalent computer operation. And yet, a network of coupled neurons can continuously learn, sense, and perform complex tasks at energy levels that are currently unattainable for even state-of-the-art processors.

What does a neuron do to save energy that a contemporary computer processing unit doesn't?

Supercomputer modeling by researchers at Washington University in St. Louis' McKelvey School of Engineering may provide an answer. Using simulated silicon "neurons," they found that energy constraints on a system, coupled with neurons' intrinsic tendency to move to the lowest-energy configuration, lead to a dynamic, at-a-distance communication protocol that is both more robust and more energy-efficient than traditional computer processors.

The research, from the lab of Shantanu Chakrabartty, the Clifford W. Murphy Professor in the Preston M. Green Department of Electrical & Systems Engineering, was published last month in the journal Frontiers in Neuroscience.

It's a case of doing more with less.

Ahana Gangopadhyay, a doctoral student in Chakrabartty's lab and a lead author on the paper, has been investigating computer models to study the energy constraints on silicon neurons -- artificially created neurons, connected by wires, that show the same dynamics and behavior as the neurons in our brains.

Like biological neurons, their silicon counterparts also depend on specific electrical conditions to fire, or spike. These spikes are the basis of neuronal communication, zipping back and forth, carrying information from neuron to neuron.

The researchers first looked at the energy constraints on a single neuron. Then a pair. Then, they added more. "We found there's a way to couple them where you can use some of these energy constraints, themselves, to create a virtual communication channel," Chakrabartty said.

A group of neurons operates under a common energy constraint. So, when a single neuron spikes, it necessarily affects the available energy -- not just for the neurons it's directly connected to, but for all others operating under the same energy constraint.

Spiking neurons thus create perturbations in the system, allowing each neuron to "know" which others are spiking, which are responding, and so on. It's as if the neurons were all embedded in a rubber sheet; a single ripple, caused by a spike, would affect them all. And like all physical processes, systems of silicon neurons tend to self-optimize to their least-energetic states while also being affected by the other neurons in the network.

These constraints come together to form a kind of secondary communication network, where additional information can be communicated through the dynamic but synchronized topology of spikes. It's like the rubber sheet vibrating in a synchronized rhythm in response to multiple spikes.

This topology carries with it information that is communicated, not just to the neurons that are physically connected, but to all neurons under the same energy constraint, including ones that are not physically connected.
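To make the mechanism concrete, here is a minimal toy sketch in Python. It is our construction, not the paper's model: we assume simple leaky integrate-and-fire neurons and a single shared energy budget, with every parameter chosen purely for illustration. The point is only that a spike anywhere depletes the common budget, which changes the drive felt by every neuron, including ones with no direct connection to the spiker.

```python
import numpy as np

rng = np.random.default_rng(0)

N = 8                # number of toy silicon neurons (illustrative)
T = 200              # simulation steps
threshold = 1.0      # firing threshold
leak = 0.95          # membrane leak per step
budget = 1.0         # shared energy budget: the global constraint
spike_cost = 0.3     # energy one spike drains from the budget
recovery = 0.05      # budget replenished each step

v = rng.uniform(0.0, 0.5, N)   # membrane potentials
energy = budget
spikes = np.zeros((T, N), dtype=bool)

for t in range(T):
    # Each neuron integrates noisy input, scaled by the energy
    # currently available: the shared budget couples all of them.
    drive = rng.uniform(0.0, 0.1, N) * (energy / budget)
    v = leak * v + drive

    fired = v >= threshold
    spikes[t] = fired
    v[fired] = 0.0           # reset neurons that spiked

    # A spike anywhere drains the common budget -- the "ripple in
    # the rubber sheet" felt even by unconnected neurons.
    energy = max(0.0, energy - spike_cost * fired.sum())
    energy = min(budget, energy + recovery)

print("spikes per neuron:", spikes.sum(axis=0))
```

Even in this crude sketch, the firing of neurons with no wire between them becomes correlated purely through the shared budget, which is the flavor of the virtual channel described above.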

Under the pressure of these constraints, Chakrabartty said, "They learn to form a network on the fly."

This makes for much more efficient communication than in traditional computer processors, which lose most of their energy to linear communication, where node A must first send a signal through node B in order to communicate with node C.

Processors built from these silicon neurons would offer the best tradeoff between efficiency and processing speed, Chakrabartty said. They would let hardware designers create systems that take advantage of this secondary network, computing not just linearly but also performing additional computation on the secondary network of spikes.

The immediate next step, however, is to create a simulator that can emulate billions of neurons. After that, the researchers will begin building a physical chip.

U.S. Department of Defense Creating a Public Nuisance?

By Jane Genova. “We’re the government and we’re here to help.” When the government first says that, it probably is true. Take the U.S. Department of Defense (DOD).

Decades ago, DOD became involved in the research and development of high-performance computing (HPC) as a must-do for national security. Those HPC applications support aeronautics, cryptography, and nuclear weapons design and testing. Then, in the mid-1980s, DOD leveraged that expertise for U.S. economic competitiveness, stimulating productivity and innovation in the private sector, from manufacturing to energy [http://www.stormingmedia.us/15/15559/A155994.html].

That was then. Today, HPC is a stand-alone industry. It’s business as usual in the financial markets, the biological sciences, the geosciences, and engineering. Last year, despite the downturn, the high end grew by 25% to $3.4 billion, according to the WORLDWIDE HIGH-PERFORMANCE TECHNICAL SERVER report.

Yet, in an irony tragically typical of government, DOD now seems to be a source of problems in the HPC industry. Those problems are of two types. Together, they could be impairing the common good in a way that could be considered, legally and as a matter of public policy, a “public nuisance.” Since 2005, I have been covering how the traditional theory of public nuisance is being applied to product liability, the environment, and energy. Recently, in North Carolina v. TVA, the plaintiff used the concept of public nuisance and won.

One of the two problems is a lack of disclosure, which could be masking a conflict of interest. That is the situation with InsideHPC.com, a DOD publication. The online site is headed by full-time DOD employee John E. West. On the “About Us” page, West identifies himself as employed “in a computing strategy role in R&D in the public sector.” Why isn’t DOD explicitly identified? A source informs me that West “works with Lockheed Martin as the systems integrator at the U.S. Army site.” John Leidel, also involved with InsideHPC.com, “works for Convey Computer,” the same source tells me.

When I presented this information to Christopher M. O’Neal, Chief Executive Officer of SuperComputerOnline.com, he noted, “FTC Disclosure Policies are designed to allow online communicators to self-identify any affiliation that may influence the content of the blog and allow readers to make their own judgment regarding the influence on content.”  Variations of disclosure rules regarding online content are working their way through state and federal courts.  At stake in those rulings is the credibility of digital as a medium as well as its commercial future.

Some might shrug: “InsideHPC is only one of those dry government publications.” But InsideHPC.com is hardly that. It is, in fact, slickly commercial. It gushes on its website: “We aren’t just another ‘me too’ HPC news site, and we aren’t interested in letting ‘me too’ HPC companies reach our readers.” Yet this no-me-too is paid for with taxpayer money. West’s trips to HPC conferences are funded by taxpayers.

Meanwhile, there are now private-sector HPC publications, ranging from SuperComputerOnline.com to HPCWire.com, that play in that same sandbox. But to play, all their expenses, including trips to conferences, must come out of their own pockets. Those costs are subtracted from revenue.

That leads right into the second argument that DOD might be creating or contributing to a public nuisance by permitting InsideHPC.com to exist in its present form. The second issue is this: why is this unfair competition with the private sector being permitted?

Anyone who understands perception knows this: a DOD publication, issued by the government, carries a kind of official imprimatur. In media we call that the “halo effect.” It could position the site, among private-sector publications, as unique in its authority and credibility. Clearly, that is an unfair advantage over other HPC publications, and one that should not exist.

That raises the core issue: why would any government agency set itself up as a competitor to business? Doesn’t that go to the very question of the mission and function of government?

When government intrudes in this way, at best it creates redundancy. At worst, it takes on private enterprise, and with built-in cost and influence advantages. At the top of the list: not having to factor in many kinds of expenses that private enterprises must.

There’s more. InsideHPC now has a dedicated marketing and sales arm, including Mike Bernhardt, to sell ads. Right now, most advertising is a zero-sum game: what space InsideHPC sells, the private sector likely doesn’t. Yet most publications depend on advertising for profits.

The Business Coalition for Fair Competition provides a white paper on the dangers of both these best- and worst-case scenarios [http://governmentcompetition.org/howgovtcompetes.html]. Isn’t this the other side of the coin of government agencies being too cozy with business? Unfortunately, this side too often remains invisible, due to a lack of transparency.

The U.S. government can be most helpful when it discerns that its mission has been accomplished and surrenders that function. If it refuses to do so, it can be viewed, in legal and public-policy terms, as creating a public nuisance.

Jane Genova blogged the Rhode Island lead paint public nuisance trial and its aftermath, beginning on the syndicated site http://janegenova.com under “legal” and continuing on the syndicated site http://lawandmore.typepad.com. She then expanded her analysis of public nuisance to environmental and energy matters. She has been interviewed on legal and policy issues by THE NEW YORK TIMES, CRAIN’S BUSINESS, and the PLAIN DEALER. Her posts are regularly linked to by THE WALL STREET JOURNAL, LEGAL TECHNOLOGY, PUBLIC NUISANCE, and NEW YORK Magazine.

Turbulence responsible for black holes' balancing act

New simulations reveal that turbulence created by jets of material ejected from the disks of the Universe’s largest black holes is responsible for halting star formation. Evan Scannapieco, an assistant professor in the School of Earth and Space Exploration in the College of Liberal Arts and Sciences at Arizona State University (ASU) and Professor Marcus Brueggen of Jacobs University in Bremen, Germany, present the new model in a paper in the journal Monthly Notices of the Royal Astronomical Society.
 
We live in a hierarchical Universe where small structures join into larger ones. Earth is a planet in our Solar System, the Solar System resides in the Milky Way Galaxy, and galaxies combine into groups and clusters. Clusters are the largest structures in the Universe, but sadly our knowledge of them is not proportional to their size. Researchers have long known that the gas in the centres of some galaxy clusters is rapidly cooling and condensing, but were puzzled why this condensed gas did not form into stars. Until recently, no model existed that successfully explained how this was possible.
 
Professor Scannapieco has spent much of his career studying the evolution of galaxies and clusters. “There are two types of clusters: cool-core clusters and non-cool core clusters,” he explains. “Non-cool core clusters haven’t been around long enough to cool, whereas cool-core clusters are rapidly cooling, although by our standards they are still very hot.”
 
X-ray telescopes have revolutionized our understanding of the activity occurring within cool-core clusters. Although these clusters can contain hundreds or even thousands of galaxies, they are mostly made up of a diffuse, but very hot gas known as the intracluster medium. This intergalactic gas is only visible to X-ray telescopes, which are able to map out its temperature and structure. These observations show that the diffuse gas is rapidly cooling into the centres of cool-core clusters.
 
At the core of each of these clusters is a black hole, billions of times more massive than the Sun. Some of the cooling medium makes its way down to a dense disk surrounding this black hole, some of it goes into the black hole itself, and some of it is shot outward. X-ray images clearly show jet-like bursts of ejected material, which occur in regular cycles.
 
But why were these outbursts so regular, and why did the cooling gas never drop to the colder temperatures that would lead to star formation? Some unknown mechanism was maintaining an impressive balancing act.
 
“It looked like the jets coming from black holes were somehow responsible for stopping the cooling,” says Scannapieco, “but until now no one was able to determine how exactly.”
 
Scannapieco and Brueggen used the enormous supercomputers at ASU to develop their own three-dimensional simulation of the galaxy cluster surrounding one of the Universe’s biggest black holes. By adapting an approach developed by Guy Dimonte at Los Alamos National Laboratory and Robert Tipton at Lawrence Livermore National Laboratory, they added turbulence to the simulations, a component that had never been accounted for in the past.
 
And that was the key ingredient.
 
Turbulence works in partnership with the black hole to maintain the balance. Without the turbulence, the jets coming from around the black hole would grow stronger and stronger, and the gas would cool catastrophically into a swarm of new stars. When turbulence is accounted for, the black hole not only balances the cooling, but goes through regular cycles of activity.
 
“When you have turbulent flow, you have random motions on all scales,” explains Scannapieco. “Each jet of material ejected from the disk creates turbulence that mixes everything together.”
 
Scannapieco and Brueggen’s results reveal that turbulence acts to effectively mix the heated region with its surroundings so that the cool gas can’t make it down to the black hole, thus preventing star formation.
 
Every time some cool gas reaches the black hole, it is shot out in a jet. This generates turbulence that mixes the hot gas with the cold gas. This mixture becomes so hot that it doesn’t accrete onto the black hole. The jet stops and there is nothing to drive the turbulence so it fades away. At that point, the hot gas no longer mixes with the cold gas, so the centre of the cluster cools, and more gas makes its way down to the black hole.
 
Before long, another jet forms and the gas is once again mixed together.
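As a caricature of that cycle, here is a minimal toy feedback loop in Python. It is our construction, with made-up constants and units and no real physics; the actual work solves full three-dimensional hydrodynamics on supercomputers.

```python
# Toy feedback loop: cool gas feeds the black hole, accretion drives a
# jet, the jet drives turbulence, turbulence mixes and reheats the gas,
# accretion drops, turbulence decays, and cooling resumes.
# Every constant below is illustrative, not physical.

dt = 0.01
cool_gas = 1.0      # reservoir of cool gas near the black hole
turbulence = 0.0    # turbulent energy in the cluster core

for step in range(3001):
    accretion = 0.5 * cool_gas        # cool gas reaching the black hole
    jet_power = 10.0 * accretion      # jet strength follows accretion
    heating = turbulence * cool_gas   # turbulent mixing reheats the gas
    cooling = 0.2                     # steady radiative cooling rate

    cool_gas = max(0.0, cool_gas + dt * (cooling - accretion - heating))
    turbulence += dt * (jet_power - turbulence)   # driving minus decay

    if step % 500 == 0:
        print(f"t={step * dt:4.1f}  cool gas={cool_gas:.3f}  "
              f"turbulence={turbulence:.3f}")
```

In this toy version the jet/quench cycle overshoots and rings a few times before settling toward a balance; the real simulations, with full turbulent hydrodynamics, sustain the regular outbursts seen in the X-ray images.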
 
“We improved our simulations so that they could capture those tiny turbulent motions,” explains Scannapieco. “Even though we can’t see them, we can estimate what they would do. The time it takes for the turbulence to decay away is exactly the same amount of time observed between the outbursts.”

Blue Sky Studios Donates Animation SuperComputer to Wesleyan

Next fall, Wesleyan students and faculty will conduct research on the same state-of-the-art animation computers that produced Ice Age: The Meltdown, a $652 million worldwide box office hit.

The computer hardware was donated July 2 by Greenwich, Conn.-based Blue Sky Studios, the creator of a number of award-winning digital animation features, including the Ice Age series and Dr. Seuss’ Horton Hears a Who, which took in nearly $300 million worldwide.

In 2008, Blue Sky Studios refreshed their technology for their latest movie, Ice Age: Dawn of the Dinosaurs, and bought racks of new computers.

“The old computer racks still had a lot of life left in them, so we went looking for large colleges and universities in Connecticut that might be able to make use of this kind of computing infrastructure, and to which we might donate these computers,” explains Andrew Siegel, head of systems at Blue Sky Studios. “Wesleyan seemed like a good candidate.”

Blue Sky arranged for the racks to be delivered to the Exley Science Center loading dock. They are now housed on the fifth floor of Information Technology Services.

“We requested two, but they graciously gave us four,” says Ganesan “Ravi” Ravishanker, associate vice president for Information Technology Services.

Each rack holds 52 Angstrom Microsystems-brand “blades,” each with a memory footprint of 12 or 24 gigabytes. Combined, Blue Sky donated about 3.7 terabytes of total memory.
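A quick back-of-the-envelope check, assuming the blades are split roughly evenly between the two sizes (our assumption): four racks of 52 blades is 208 blades, and 104 × 12 GB plus 104 × 24 GB comes to 3,744 GB, or about 3.7 terabytes.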

“This is just phenomenal,” says Henk Meij, senior consultant and manager of Unix Systems Group. “Once it’s in full operation, it’s going to be much appreciated by the researchers. They’re definitely going to notice a difference in how fast research can be done.” 

One rack will be devoted to supporting high performance computing at Wesleyan. The current system allows up to 300 “jobs” to run at once. An additional 100 jobs will be able to run with the new rack, and at higher processing speeds.

“If a graduate student in astronomy wants to calculate planet rotations in a section of the galaxy, he or she will be able to do this much faster,” Meij says.

Another rack will be used primarily by ITS in a pilot project to build a virtualized data center. Services such as blogs, wikis, web servers and similar applications could be hosted in such an environment. When a hardware failure occurs, or one server experiences heavy loads, the virtualization layer would automatically migrate the services to healthy servers.

The additional two racks will be used to replace any hardware that fails in the production system. Wesleyan would need additional cooling systems to run all four racks at once.

The high-speed animation computers feature 104 central processing units (CPUs) per rack. Each rack has a current market value of approximately $35,000. The University of Connecticut’s drama and computer science engineering departments also are each receiving two racks.

Blue Sky, a wholly owned subsidiary of Fox Filmed Entertainment, relocated to Connecticut from New York in January, bringing with it more than 300 jobs. The company, which continues to expand, said it was attracted to Connecticut because of the state’s efforts to promote the film industry.

“This is a tremendous gift for our students and for our state,” Governor Jodi Rell said in a statement. “The film industry has clearly found a home in Connecticut and we are grateful for Blue Sky’s commitment to Connecticut and partnership in helping us develop the next generation of skilled, educated industry professionals. This generous donation comes at a time when resources for so many worthwhile programs are stretched thin.”

Complex Concepts That Really Add Up

By Leyla Ezdinli -- An annual outreach program run by USC’s Collaboratory for Advanced Computing and Simulations is helping shape the future of computational science by encouraging members of underrepresented groups to pursue graduate work and research in scientific computing.

For the past eight years, the Computational Science Workshop for Underrepresented Groups has offered participants an opportunity to learn about complex research concepts in a hands-on and interdisciplinary environment.

The majority of participants are students and faculty members from small historically black colleges and universities with limited resources for research computing and curriculum development. For students, the workshop can have a profound influence on their choice of majors and careers. For faculty, the workshop offers the resources necessary to develop new courses and advance their research.

“The goal of the workshop is, in one week, to break the participants’ fear of computing and their ideas of parallel computing,” said Priya Vashishta, professor of materials science at the USC Viterbi School of Engineering, professor of physics at USC College and director of the Collaboratory for Advanced Computing and Simulations.

“We do not ask that people know about computing before they arrive — all we ask is that they have a good head on their shoulders,” he said.

Teaching parallel computing in a way that is comprehensible to those without a solid foundation in computer science and advanced mathematics is no small feat.

Parallel computing is a sophisticated form of computation in which a complex problem is divided into smaller problems that are then distributed to a cluster of networked computers for simultaneous processing. Parallel computing allows researchers to solve problems involving extremely large data sets much faster than would serial computation, in which operations are performed in a linear manner.
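As a concrete illustration (ours, not part of the workshop’s curriculum), here is a minimal Python sketch of the idea: a large sum is split into chunks, worker processes compute the partial sums simultaneously, and the results are combined at the end.

```python
from multiprocessing import Pool

def partial_sum(bounds):
    """Sum the integers in [start, stop) -- one worker's share."""
    start, stop = bounds
    return sum(range(start, stop))

if __name__ == "__main__":
    n = 10_000_000
    workers = 4
    step = n // workers
    # Divide the full range into one chunk per worker.
    chunks = [(i * step, (i + 1) * step) for i in range(workers)]
    chunks[-1] = (chunks[-1][0], n)   # last chunk absorbs any remainder

    with Pool(workers) as pool:
        # Each chunk is summed in a separate process, in parallel;
        # combining the partial results is a cheap serial step.
        total = sum(pool.map(partial_sum, chunks))

    print(total == n * (n - 1) // 2)  # check against the closed form
```

The same divide-distribute-combine pattern, scaled up to a networked cluster and far larger data sets, is what the workshop participants go on to implement on the machines they build.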

The workshop takes a novel and ambitious approach to teaching parallel computing. On the first day, each student assembles his or her own computer from components and installs the Linux operating system. All the computers in the workshop are then networked together to form a cluster.

Over the course of the week, students learn to write and compile code, write parallel codes, run programs on the cluster and analyze the cluster’s performance metrics.

The workshop was designed and developed by Rajiv Kalia, Aiichiro Nakano and Vashishta, the founders of the collaboratory, all of whom hold joint appointments in USC’s departments of physics and astronomy, chemical engineering and materials science, and computer science.

“This is an extremely intense workshop,” Nakano said. “It was Priya’s vision to have students build a supercomputer cluster from personal computers. He trained all of us,” he added, referring to the collaboratory faculty and graduate students who organize and teach the workshop each year.