Model for High-Temperature Superconductivity

Moderators: mike@in-russia, varlash

User avatar
Posts: 28699
Joined: Tue May 17, 2005 18:44
From: the Urals
Contact info:


Post #1 by morozov » Sun Oct 15, 2006 0:19

Solving a Macroscopic Model for High-Temperature Superconductivity

ORNL researchers settle 20-year dispute in solid-state physics

Thomas Schulthess

The potential impact of a major breakthrough in superconductivity research is difficult to overstate. Currently, electrical utilities must transmit electricity at thousands of volts, despite the fact that actual electrical usage is typically in volts or hundreds of volts, simply because so much energy is otherwise lost during transmission. If we were able to build large-scale power grids with superconducting materials (that is, with materials that can carry a current with zero resistance), we could generate and transmit power over long distances at much smaller voltages. This would eliminate enormous amounts of wasted energy, render entire segments of today's electrical transmission systems unnecessary, allow us to systematically exploit alternative energy sources, and dramatically reduce the cost of energy worldwide. Of course, this ideal has not yet been achieved, primarily because no materials currently exist that become superconductive at room temperature.
Historically, scientists had to cool conventional conductors close to absolute zero to produce the phenomenon. Then, in the 1980s, researchers discovered a new class of materials that become superconductive at much higher temperatures, kicking off a new era of exploration in the field. Although these new materials still require very cold temperatures, they can be cooled with liquid nitrogen, which is much easier and less expensive to produce than the liquid helium required by the old materials. Today, many scientists and engineers are engaged in research to develop practical high-temperature superconductors for power transmission and many other applications. However, the phenomenon of high-temperature superconductivity is still poorly understood. While a microscopic theory explaining conventional superconductivity has existed for half a century, most scientists agree that it is not applicable to this new class of materials. The most promising model proposed to describe high-temperature superconductivity, the two-dimensional (2-D) Hubbard model, has remained unproven and controversial for many years.
Now, thanks to new techniques developed
by a research team at the National
Center for Computational Sciences at
Oak Ridge National Laboratory
(ORNL), the 2-D Hubbard model has
finally been solved. By explicitly proving
that the model describes high-temperature
superconductivity, we have helped
to settle a debate that has raged for two
decades, and have opened the door to a
much deeper understanding of this phenomenon.
Even more significantly, the
work may mark an important step
toward the development of a canonical
set of equations for understanding the
behavior of materials at all scales.
Respectfully, Valery Borisovich Morozov


Post #2 by morozov » Sun Oct 15, 2006 0:21

Understanding superconductivity

In normal materials, electrical resistance causes energy loss during conduction of a current. Superconductive materials conduct electricity with no resistance when cooled below a certain temperature. Superconductivity is the result of a transition to a new phase in a material, a macroscopic quantum phenomenon in which a macroscopic number of electrons (on the scale of 10^23) condense into a coherent quantum state. If this state is that of a current flowing through a material, the current will flow, theoretically indefinitely (provided the material is kept in the superconducting state), and the material will be able to transmit electric power with no energy loss.
Conventional superconductors were first discovered in the early 20th century, when Dutch physicist Heike Kamerlingh Onnes cooled mercury to four kelvin (-269 degrees Celsius) and observed that the material's electrical resistance dropped to zero. Research into conventional superconductors progressed for several decades until 1957, when three American physicists (John Bardeen, Leon Cooper and John Schrieffer) advanced the first strong mathematical explanation for conventional superconductivity, which came to be known as BCS theory. (This work won the physicists the Nobel Prize in 1972.)
In the 1980s, a new class of superconductive materials was discovered, which become superconductive at much higher temperatures. These new ceramic materials, copper oxides (cuprates) combined with various other elements, achieve superconductivity at temperatures as high as 138 kelvin, representing a major jump toward room-temperature superconductors. Physicists quickly recognized that superconductivity in these new materials, while ultimately producing the same effect as conventional superconductivity, was a very different phenomenon. As a result, BCS theory, which had served as the standard model for describing superconductivity for decades, was simply not adequate to describe the new materials.



Post #3 by morozov » Sun Oct 15, 2006 0:23

Mystery of high-temperature superconductors

Within a few years of the discovery of high-temperature superconductors, a number of physicists suggested several new mechanisms to describe this phenomenon based on the 2-D Hubbard model. This model is derived from chemical structures, and purports to describe superconductivity with a few microscopically derivable parameters:
- the probability that carriers (electrons or holes) hop from one site to another on a lattice of atoms;
- an energy penalty for two carriers to occupy the same site at once (one can easily imagine that two electrons repel each other);
- the concentration of carriers.
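In the standard notation of the literature (the symbols t for the hopping amplitude, U for the on-site repulsion, and μ for the chemical potential that fixes the carrier concentration are not named in the text above, so take them as conventional labels), these three parameters define the Hamiltonian

\[
H = -t \sum_{\langle i,j \rangle, \sigma} \left( c^{\dagger}_{i\sigma} c_{j\sigma} + \text{h.c.} \right)
    + U \sum_{i} n_{i\uparrow} n_{i\downarrow}
    - \mu \sum_{i,\sigma} n_{i\sigma},
\]

where \(c^{\dagger}_{i\sigma}\) creates a carrier with spin \(\sigma\) on lattice site \(i\), \(n_{i\sigma} = c^{\dagger}_{i\sigma} c_{i\sigma}\) counts carriers, and the first sum runs over nearest-neighbor pairs of the 2-D square lattice.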
However, there was disagreement
within the scientific community about
whether the model encompassed a
superconducting state in the typical
parameter and temperature range of the
cuprate superconductors and, as a
result, whether the model was appropriate
at all. (If the model did not include a
high-temperature superconducting
state, then it was not an appropriate
description of reality.)
At the time, the 2-D Hubbard model
was unsolvable due to the scale of the
computation required. Since superconductivity
is a macroscopic effect, any simulation
would need to encompass a lattice
on the scale of 1023 sites.At the same
time, since the model also must describe
the behavior of individual electrons hopping
from site to site and interacting with
each other, the simulation also had to
include calculations on the scale of a few
lattice spacings at a time. In short, the
model presented a difficult multi-scale
problem, making it extremely computationally
complex ? if not impossible ?
to solve numerically. Because of these difficulties,
the questions about the 2-D
Hubbard model remained unresolved for
more than a decade.


Post #4 by morozov » Sun Oct 15, 2006 0:26

Solving the 2-D Hubbard model

In the 1990s, one of the Center's team members, Mark Jarrell of the University of Cincinnati, finally began to break this deadlock with the development of the dynamical cluster approximation (DCA), an extension of dynamical mean field theory that systematically includes non-local correlations. Mean field theory is the standard tool used in statistical physics to describe multi-scale phenomena, such as the interaction between an entire system of particles and an individual particle in the system. Dynamical mean field theory is a quantum version of this theory, allowing the study of quantum fluctuations in one atom that is embedded within many other atoms. Jarrell's innovation was a technique for embedding a cluster of atoms, on which the many-body problem is solved rigorously with quantum Monte Carlo (QMC) simulations, into a mean field, allowing for the description of the cluster's interactions with the macroscopic soup of particles in the system.
Using DCA and QMC techniques, the
team was finally able to simulate a correlated
system encompassing the macroscopic
scale, as well as the scale of a
small cluster of atoms within the system.
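The self-consistency loop at the heart of (dynamical) mean field theory is easiest to see in its simplest classical ancestor, the mean-field Ising model, where the "cluster" is a single spin and the field it feels is generated by its neighbors. The sketch below is only an analogy for the embed-solve-repeat iteration described above; the function and parameter names are mine, not from the ORNL code:

```python
import math

def mean_field_magnetization(T, J=1.0, z=4, tol=1e-10, max_iter=10_000):
    """Iterate the self-consistency condition m = tanh(z*J*m / T).

    Each pass solves the 'small' problem (one spin in an effective field
    z*J*m) and feeds the result back into the field until the two agree,
    mirroring the solve-and-embed structure DCA applies to clusters.
    """
    m = 1.0  # start from a fully ordered guess
    for _ in range(max_iter):
        m_new = math.tanh(z * J * m / T)
        if abs(m_new - m) < tol:
            return m_new
        m = m_new
    return m

# Below the mean-field critical temperature T_c = z*J = 4, the loop settles
# on a nonzero magnetization; above it, the only fixed point is m = 0.
print(mean_field_magnetization(T=2.0))  # ordered phase
print(mean_field_magnetization(T=6.0))  # disordered phase, tends to 0
```

In DCA the scalar m is replaced by a cluster self-energy and the tanh by a full QMC solve of the cluster, but the fixed-point structure of the computation is the same.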
Jarrell worked with Thomas Maier of ORNL to perform simulations using these techniques with the QMC/DCA code to solve for small clusters (four sites) on an infinite lattice, using a super-scalar computing system at ORNL. The results were promising, reproducing the phase diagram of the cuprates and demonstrating that the model does achieve a superconductive state at higher temperatures. However, the work was not conclusive, because it also produced a magnetically ordered state at finite temperatures that seemed to be an artifact of the mean field approximation. (Magnetic ordering at finite temperature in the 2-D Hubbard model is forbidden by a mathematical theorem, the Mermin-Wagner theorem.) Since the magnetic ordering was a result of the very small clusters used in the simulations, it remained to be seen whether the superconductivity was also an artifact of the small clusters and the mean-field approximation. However, the super-scalar computer system used at the time was simply not adequate to run QMC simulations for larger clusters.


Post #5 by morozov » Sun Oct 15, 2006 0:27

Employing a vector supercomputer

Frequently, an application cannot take advantage of a massively parallel system because its physical domain is too small to be distributed over a large number of processors. In ORNL's case, the QMC/DCA code (specifically, an individual Markov chain in the Monte Carlo simulation) does not scale well on scalar processors, and the algorithm the team was using became too computationally inefficient when expanded beyond four-site clusters. The algorithm includes a step at which the system must perform an outer product. As the vectors became larger, the cache of the scalar system was not able to keep up, causing performance to decline drastically. At that point, adding parallel processors offered no advantage. The team needed a different type of supercomputer.
Given the limited scalability of the
application, the researchers needed a
system with very high performance and
high memory bandwidth on the processors
to supply as many floating-point
operations as possible on fewer processors.
They felt the best approach to this
problem was to employ a system with
the fastest vector processors available.
The team began using the Cray X1 system
and later moved to the Cray X1E
system at ORNL.
The X1E supercomputer is a dual-processor implementation of the X1 system. With twice the number of processors on a board and a higher clock speed, the X1E is able to deliver more than twice the performance of the X1 in the same package. It is binary compatible with the X1 system and uses multi-streaming vector processors (MSPs) to achieve very high performance per processor. The multi-streaming architecture gives the programmer yet another level of parallelism to facilitate higher sustained performance. ORNL's Cray X1E used in this study contains 1024 MSP units, each capable of 18 GFLOPS.
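Those two figures fix the machine's aggregate theoretical peak as a simple product (this arithmetic is mine, not a number quoted in the text):

```python
msp_units = 1024          # MSPs in ORNL's Cray X1E, as stated above
gflops_per_msp = 18       # peak per MSP, as stated above
peak_tflops = msp_units * gflops_per_msp / 1000
print(peak_tflops)        # 18.432, i.e. roughly 18.4 TFLOPS theoretical peak
```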
It was necessary to perform two major types of computation:
- level-3 basic linear algebra subprograms (BLAS), which are highly computationally intensive and typically achieve a very high fraction of peak performance (80 to 90 percent);
- level-2 BLAS, which are extremely memory-bandwidth-limited. With the ability to fetch and store vector arguments directly into the vector registers, the Cray X1 and Cray X1E systems offered much higher memory bandwidth than typical massively parallel systems.
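The distinction can be made concrete with a back-of-the-envelope arithmetic-intensity estimate; the function names and the memory-traffic model below are illustrative assumptions of mine, not taken from the QMC/DCA code:

```python
import numpy as np

def flops_per_byte_rank1(n):
    """Rank-1 update A += outer(x, y), the level-2 BLAS (DGER-like) step.
    2*n*n flops against about 2*n*n + 2*n doubles of traffic (A is read
    and written, x and y are read), so the intensity stays near 0.125
    flops/byte no matter how large n gets: bandwidth, not compute, limits it."""
    return (2.0 * n * n) / (8.0 * (2 * n * n + 2 * n))

def flops_per_byte_matmul(n):
    """Matrix product C = A @ B, a level-3 BLAS (DGEMM) operation.
    2*n**3 flops against about 4*n*n doubles of traffic under ideal
    blocking (A and B read, C read and written): intensity grows with n."""
    return (2.0 * n**3) / (8.0 * 4 * n * n)

# The outer-product step from the text, written with NumPy:
n = 512
x, y = np.random.rand(n), np.random.rand(n)
A = np.zeros((n, n))
A += np.outer(x, y)   # level-2: each element of A touched once per 2 flops

print(flops_per_byte_rank1(n))   # ~0.125, flat in n
print(flops_per_byte_matmul(n))  # 32.0 at n = 512, and growing with n
```

A cache helps only when the flops-per-byte ratio grows with problem size, which is why the four-site clusters ran well on the scalar machine while larger clusters, dominated by the flat-intensity rank-1 updates, needed the vector system's raw memory bandwidth.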
Thanks to the system's vector processors
and its high memory bandwidth,
calculations could be performed with
clusters of up to 30 atoms with only
minimal changes to the code. This scale
turned out to be enough to show that
the magnetic order that had plagued the
earlier work disappears monotonically as
the clusters grow larger, while the superconductivity
effect remains. In short, the
team proved for the first time that the
2-D Hubbard model does include a
high-temperature superconductive state
and is an accurate representation of the
phenomenon, settling a 20-year dispute in solid-state physics.


Post #6 by morozov » Sun Oct 15, 2006 0:37

Looking ahead

While solving the 2-D Hubbard
model represents an important step forward
in understanding the physics of
high-temperature superconductivity, we
still have much work to do in this area.
The next step is to demonstrate that a
generalized version of this model can
not only describe superconductivity in
principle, but also can accurately reflect
the behavior of specific materials and
explain why different materials become
superconductive at different temperatures.
One potential outcome of gaining
this level of understanding would be the
ability to design and produce even higher-
temperature superconductive compounds
that could be used in a variety
of applications ? from electric grids to
quantum supercomputers.
However, putting aside the practical
engineering possibilities, we believe this
work could have even more profound
ramifications by serving as a first step
toward a true canonical solution for
quantum problems in complex materials.
In the field of computational fluid
dynamics, for example, the Navier-
Stokes equations provide us with the
ability to solve a broad range of engineering
problems. For materials sciences,
chemistry and nanoscience, the
many-body Schrödinger equation plays
a similar role. However, in contrast to
Navier-Stokes, no canonical approach
currently exists to solve the many-body
Schrödinger equation. Finding one
would change solid-state physics forever,
and greatly expand the role of computation
in scientific discovery.

Thomas Schulthess is Group
Leader, Computational Material
Sciences and Nanomaterials Theory
Institute, ORNL. He can be reached at

