http://www.eng.fsu.edu/~mpf

EEL 4930 §6 / 5930 §5, Spring ‘06
Physical Limits of Computing

Slides for a course taught by Michael P. Frank in the Department of Electrical & Computer Engineering

Physical Limits of Computing
Course Outline

I. Course Introduction
  • Moore’s Law vs. Modern Physics
II. Foundations
  • Required Background Material in Computing & Physics
III. Fundamentals
  • The Deep Relationships between Physics and Computation
IV. Core Principles
  • The Two Revolutionary Paradigms of Physical Computation
V. Technologies
  • Present and Future Physical Mechanisms for the Practical Realization of Information Processing
VI. Conclusion

Currently I am working on writing up a set of course notes based on this outline, intended someday to evolve into a textbook.

M. Frank, "Physical Limits of Computing"

Part III. Fundamentals: The Deep Relationships Between Physics and Computation
  • Contents of Part III:
    • Chapter A. Interpreting Physics in Terms of Computation
      • 1. Entropy as (Nondecomputable) Physical Information
      • 2. Action as Physical Computational Effort
      • 3. Energy as Physical Computing Performance
      • 4. Temperature as Physical Computing Frequency
      • 5. Momentum as Directed Computing Density
      • 6. Relations Between Gravity and Information Density
    • Chapter B. Fundamental Physical Limits on Computing
      • 1. Information Propagation Velocity Limits
      • 2. Information Storage Density Limits
      • 3. Communication Bandwidth Limits
      • 4. Computational Energy Efficiency Limits
      • 5. Computational Performance Limits


Chapter III.A. Physics as Computation

Reinterpreting Fundamental Physical Quantities in Computational Terms

Physics as Computation
  • We will argue for making the following identities, among others:
    • Physical entropy is the part of the total information content of a physical system that cannot be reversibly decomputed (to a standard state).
      • This was already argued in the thermodynamics module.
    • Physical energy (of a given type) is essentially the rate at which (quantum) computational operations of a corresponding type are taking place (or at least, at which computational effort is being exerted) in a physical system. It is a measure of physical computing activity.
      • Rest mass is the rate of updating of internal state information in proper time.
      • Kinetic energy is closely related to the rate of updating of positional information.
      • Potential energy is the total rate at which operations that carry out the exchange of virtual particles implementing forces are taking place.
      • Heat is that part of the energy that is operating on information that is entropy.
    • (Generalized) Temperature is the rate of operations per unit of information.
      • I.e., ops/bit/time is effectively just the computational “clock frequency!”
      • Thermodynamic temperature is the temperature of information that is entropy.
    • The physical action of an energy measures the total number of quantum operations performed (or amount of quantum computational effort exerted) by that energy. An action is thus an amount of computation.
      • The action of the Lagrangian is the difference between the amount of “kinetic computation” vs. “potential computation.”
    • (Relativistic) physical momentum measures spatial-translation (“motional”) ops performed per unit distance traversed.
      • This definition makes this entire system consistent with special relativity.


Physics as Computing (1 of 2)


Physics as Computing (2 of 2)


On the Interpretation of Energy as the Rate of Quantum Computing

Dr. Michael P. Frank
Dept. of Electrical & Computer Eng.
FAMU-FSU College of Engineering

Summary of paper published in Quantum Information Processing, Springer, 4(4):283-334, Oct. 2005.

First presented in the talk “Physics as Computing,” at the Quantum Computation for Physical Modeling Workshop (QCPM ’04), Martha’s Vineyard, Wednesday, September 15, 2004

Abstract
  • Studying the physical limits of computing encourages us to think about physics in computational terms.
    • Viewing physics “as a computer” directly gives us limits on the computing capabilities of any machine that’s embedded within our physical world.
  • We want to understand what various physical quantities mean in a computational context.
    • Some answers so far:
      • Entropy = Unknown/incompressible information
      • Action = Amount of computational “work”
      • Energy = Rate of computing activity
      • Generalized temperature = “Clock frequency” (activity per bit)
      • Momentum = “Motional” computation per unit distance



Energy as Computing
  • Some history of the idea:
    • Earliest hints can be seen in the original Planck E=hν relation for light.
      • That is, an oscillation with a frequency of ν requires an energy at least hν.
    • Also suggestive is the energy-time uncertainty principle ∆E·∆t ≥ ℏ/2.
      • This relates the average energy uncertainty ∆E to minimum time intervals ∆t.
    • Margolus & Levitin, Physica D 120:188-195 (1998).
      • Prove that a state of average energy E above the ground state takes at least time ∆t = h/4E to evolve to an orthogonal one.
        • Or (N-1)h/2NE, for a cycle of N mutually orthogonal states.
    • Lloyd, Nature 406:1047-1054, 31 Aug. 2000.
      • Uses that to calculate the maximum performance of a 1 kg “ultimate laptop.”
    • Levitin, Toffoli, & Walton, quant-ph/0210076.
      • Investigate minimum time to perform a CNOT + phase rotation, given E.
    • Giovannetti, Lloyd, Maccone, Phys. Rev. A 67, 052109 (2003), quant-ph/0210197; also see quant-ph/0303085.
      • Tighter limits on time to reduce fidelity to a given level, taking into account both E and ∆E, amount of entanglement, and number of interaction terms.
  • These kinds of results prompt us to ask:
    • Is there some valid sense in which we can say that energy is computing?
      • And if so, what is it, exactly?
  • We’ll see this also relates to action as computation.


Energy as Rate of Phase Rotation
  • Consider any quantum system whatsoever.
    • And consider any eigenvector |E⟩ (with eigenvalue E) of its Hamiltonian H,
      • i.e., the Hermitian operator H that is the quantum representation of the system’s Hamiltonian energy function.
  • Remember, the state |E⟩ is therefore also an eigenvector of the system’s unitary time-evolution operator U = e^(iHt/ℏ).
    • Specifically, with the eigen-equation U|E⟩ = e^(iEt/ℏ)|E⟩.
  • Thus, in any consistent wavefunction Ψ for this system, |E⟩’s amplitude at any time t is given by Ψ(|E⟩, t) = m·e^(iEt/ℏ).
    • It is a complex number with some fixed norm |Ψ| = m, phase-rotating in the complex plane at the angular frequency ω = E/ℏ (in radians per unit time), i.e., at the ordinary frequency f = E/h (in cycles per unit time).
  • In fact, the entirety of any system’s quantum dynamics can be summed up by saying that each of its energy eigenstates |E⟩ is just sitting there, phase-rotating at frequency E/h.
    • Thus we can say, energy is nothing other than a rate of phase rotation.
    • And, Planck’s constant h is just 1 cycle of phase rotation, while ℏ is 1 radian.
  • But, what happens to states other than energy eigenstates?
    • We’ll see that average energy still gives the average rate of rotation of the state coefficients in any basis.
      • We can say, energy is the rate of rotation of state vectors in Hilbert space!
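The phase-rotation claim is easy to check numerically. The following is a minimal sketch (assuming units where ℏ = 1 and an arbitrary illustrative eigenvalue E): the amplitude of an energy eigenstate is m·e^(iEt), so its accumulated phase grows at exactly the rate E.

```python
import numpy as np

# Sketch: an energy eigenstate's amplitude just phase-rotates at angular
# frequency omega = E/hbar.  Units: hbar = 1; E is an illustrative value.
E = 2.0
t = np.linspace(0.0, 1.0, 5)
amplitude = np.exp(1j * E * t)          # Psi(|E>, t) = m * e^{iEt}, m = 1

phase = np.unwrap(np.angle(amplitude))  # accumulated phase angle
rate = np.gradient(phase, t)            # its rate of change
print(np.allclose(rate, E))             # rotation rate equals the energy E
```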


A Simple Example


  • Consider a constant Hamiltonian with energy eigenstates |g⟩ (ground) and |x⟩ (excited), with eigenvalues 0, E.
    • That is, H|g⟩ = 0, H|x⟩ = E|x⟩.
    • E.g., in a 2-d Hilbert space, H = E(1+σz)/2.
  • Consider the initial state |ψ0⟩ = (|g⟩+|x⟩)·2^(−1/2).
    • c|x⟩ phase-rotates at the rate ω|x⟩ = E/ℏ.
    • In time h/2E, it rotates by θ = π.
    • The area swept out by c|x⟩(t) is:
      • a|x⟩ = ½π·|c|x⟩|² = π/4.
      • This is just ½ of a circle with radius r|x⟩ = 2^(−1/2).
    • Meanwhile, c|g⟩ is stationary.
      • Sweeps out zero area.
    • Total area: a = π/4.

[Figure: the complex plane, showing c|x⟩ sweeping out a half-disc of radius r = 2^(−1/2) through angle θ = π (area a = π/4), while c|g⟩ stays fixed on the real axis.]
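The numbers in this example can be verified directly. Here is a numeric sketch (assuming ℏ = 1 and E = 1, so h = 2π and the duration h/2E is π): accumulate the swept area da = ½·Im[c*·dc] along each coefficient's trajectory.

```python
import numpy as np

# Sketch (hbar = 1, E = 1): initial state (|g> + |x>)/sqrt(2).
# c_x phase-rotates through angle pi in time h/2E = pi; c_g is stationary.
t = np.linspace(0.0, np.pi, 20001)
c_x = np.exp(1j * t) / np.sqrt(2)
c_g = np.full_like(t, 1 / np.sqrt(2), dtype=complex)

def swept_area(c):
    # da = (1/2) Im[c* dc], accumulated along the trajectory
    return 0.5 * np.sum(np.imag(np.conj(c[:-1]) * np.diff(c)))

a = swept_area(c_x) + swept_area(c_g)
print(np.isclose(a, np.pi / 4, atol=1e-6))   # total area is pi/4
```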


Let’s Look at Another Basis
  • Define a new basis |0⟩, |1⟩ with: |0⟩ = (|g⟩+|x⟩)·2^(−1/2), |1⟩ = (|g⟩−|x⟩)·2^(−1/2).
  • Use the same initial state |ψ0⟩ = |0⟩.
    • Note that the final state is |1⟩.
  • The coefficients c|0⟩(t) and c|1⟩(t) trace out the paths shown to the right…
  • Note that the total area in this new basis is still π/4!
    • Area of a circle of radius ½.
  • Hmm, is this true for any basis? Yes!…
    • The rest of the paper shows why, and implications.

[Figure: the paths traced by c|0⟩ and c|1⟩ in the complex plane, jointly sweeping out total area a = π/4.]
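Repeating the same area computation in the rotated |0⟩, |1⟩ basis gives the same total, as claimed. A numeric sketch (same assumptions as before: ℏ = 1, E = 1):

```python
import numpy as np

# Sketch (hbar = 1, E = 1): transform the coefficients into the basis
# |0> = (|g>+|x>)/sqrt(2), |1> = (|g>-|x>)/sqrt(2) and re-sum the areas.
t = np.linspace(0.0, np.pi, 20001)
c_g = np.full_like(t, 1 / np.sqrt(2), dtype=complex)
c_x = np.exp(1j * t) / np.sqrt(2)
c_0 = (c_g + c_x) / np.sqrt(2)
c_1 = (c_g - c_x) / np.sqrt(2)

def swept_area(c):
    return 0.5 * np.sum(np.imag(np.conj(c[:-1]) * np.diff(c)))

a = swept_area(c_0) + swept_area(c_1)
print(np.isclose(a, np.pi / 4, atol=1e-6))   # still pi/4: basis-independent
print(np.isclose(abs(c_1[-1]), 1.0))         # and the final state is |1>
```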


Action: Some Terminology
  • A physical action is, most generally, the integral of an energy over time.
    • Or, along some “temporalizable” path.
  • Typical units of action: h or ℏ.
    • These correspond to angles of 1 circle and 1 radian, respectively.
  • Normally, the word “action” is reserved to refer specifically to the action of the Lagrangian L.
    • This is the action in Hamilton’s “least action” principle.
  • However, note that we can broaden our usage a bit and equally well speak of the “action of” any quantity that has units of energy.
    • E.g., the action of the Hamiltonian H = L + p·v = L + p²/m.
  • Warning: I will use the word “action” in this more generalized sense!


Action as Computation
  • We will argue: Action is computation.
    • That is, an amount of action corresponds exactly to an amount of physical quantum-computational “effort.”
      • Defined in an appropriate sense.
  • The type of action corresponds to the type of computational effort exerted, e.g.,
    • Action of the Hamiltonian = “All” computational effort.
    • Action of the Lagrangian = “Internal” computational effort.
    • Action of energy pv = “Motional” computational effort
  • We will show exactly what we mean by all this, mathematically…


Action of a Time-Dependent Hamiltonian
  • Definition. Given any time-dependent Hermitian operator (Hamiltonian trajectory) H(t), defined over any continuous range of times including one labeled t=0, the cumulative action trajectory of H(t) over that range is the unique continuously time-dependent Hermitian operator A(t) such that A(0) = 0 and U(t) = e^(iA(t)) satisfies the time-dependent operator form of the Schrödinger equation (in units where ℏ = 1, and using the opposite of the usual sign convention), namely dU(t)/dt = iH(t)U(t).
    • If H(t) commutes with its time-derivative everywhere throughout the given range (but not in general if it doesn’t), this definition can be simplified to A(t) = ∫₀ᵗ H(τ) dτ.
    • If H(t) = H = const., the definition simplifies further, to just A(t) = Ht.
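The constant-H special case can serve as a numeric sanity check on this definition. The sketch below (assuming ℏ = 1, the slide's sign convention U = e^(+iA), and an arbitrary illustrative 2×2 Hermitian H) composes the evolution out of many short steps and then recovers A from the eigenphases of U:

```python
import numpy as np

def expiH(H, dt):
    # e^{iH dt} for Hermitian H, via eigendecomposition (hbar = 1)
    w, V = np.linalg.eigh(H)
    return (V * np.exp(1j * w * dt)) @ V.conj().T

H = np.array([[1.0, 0.3], [0.3, 2.0]])   # illustrative constant Hamiltonian
t, steps = 0.5, 1000
U = np.eye(2, dtype=complex)
for _ in range(steps):
    U = expiH(H, t / steps) @ U          # accumulate the evolution

# Recover A from U = e^{iA}: diagonalize U and take the phase angles.
w, V = np.linalg.eig(U)
A = (V * np.angle(w)) @ np.linalg.inv(V)
print(np.allclose(A, H * t, atol=1e-6))  # A(t) = H t, as the definition says
```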


Some Nice Identities for A
  • Consider applying an action operator A = A(t) itself to any “initial” state v0.
    • For any observable A, we’ll use the shorthand A[v0] :≡ ⟨v0|A|v0⟩.
  • It is easy to show that A[v0] is equal to all of the following:
    • In any basis {vj}:
      • The quantum-average total phase-angle accumulation α of the coefficients cj of v’s components in that basis, weighted by their instantaneous component probabilities (expression (1) below).
      • Exactly twice the net area a swept out in the complex plane (relative to the complex origin) by v’s coefficients cj.
    • The line integral, along v’s trajectory, of the magnitude of the imaginary part of the inner product ⟨v|v+dv⟩ between adjacent states (2).
  • Note that the value of A[v0] therefore depends only on the specific trajectory v(t) that is taken by v0 itself,
    • and not on any other properties of the complete Hamiltonian that was used to implement that trajectory!
      • For example, it doesn’t depend on the energies H[u] of other states u that are orthogonal to v.

(1) α = ∫ Σj |cj|² dφj

(2) A[v0] = ∫ |Im⟨v|v+dv⟩|


Area Swept Out in the Energy Basis

[Figure: a coefficient ci(t) at radius ri in the complex plane, advancing to ci(t+dt) and sweeping out area ½·ri²·dφ.]

  • For a constant Hamiltonian, consider the area swept out by a coefficient ci of an energy basis vector vi.
  • If ri = |ci| = 1, the area swept out is ½ of the accumulated phase angle.
    • For ri < 1, note the area is this times ri².
  • Summing over i gives ½ the avg. phase angle accumulated = ½ the action of the Hamiltonian.


In Other Bases…

[Figure: a coefficient cj(t) moving to cj(t+dt) in the complex plane, changing in both phase (dθj) and magnitude (drj).]

  • Both the phase and magnitude of each coefficient will change, in general…
    • The area swept out is no longer just a corresponding fraction of a circular disc.
    • It’s not immediately obvious that the sum of the areas swept out by all the cj’s will still be the same in the new basis.
      • We’ll show that indeed it is.


Basis-Independence of a

[Figure: adjacent coefficients cj and cj′ at radii rj and rj′, separated by a small angle dθj, sweeping out area daj.]

  • Note that each cj(t) trajectory is just a sum of circular motions…
    • Namely, a linear superposition of the ci(t) motions…
  • Since each circular component motion is continuous and differentiable, so is their sum.
    • The trajectory is everywhere a smooth curve.
      • No sharp, undifferentiable corners.
  • Thus, in the limit of arbitrarily short time intervals, the path can always be treated as linear.
    • The area daj approaches ½ the parallelogram area rj·rj′·sin dθ = cj×cj′.
      • (“Cross product” of complex numbers considered as vectors.)
  • Use a handy complex identity: a*b = a·b + i(a×b).
  • This implies that daj = ½ Im[cj* cj′].
    • So, da = ½ Im[v†v′].
  • So da is basis-independent, since the inner product v†v′ is!


Proving the Identities in the Time-Dependent Case
  • Over infinitesimal intervals dt, we know that dα = ω·dt = 2·da = Im⟨ψ|ψ′⟩ = A′[ψ]·dt = ⟨ψ|H|ψ⟩·dt.
    • Thus, even if H(t) is not constant, all these quantities remain equal when integrated over an entire range of times from 0 to any arbitrary value of t.
  • But, is it still true that A(t)[ψ(0)] = α for large t?
    • Apparently yes, because the trajectory A(τ) from 0 to t can seemingly be continuously deformed into a linearized one A(τ) = Hτ, for which we already know this identity holds,
      • while leaving the overall A(t) the same throughout this process.
    • Throughout this continuous deformation (which leaves the overall U(t) = e^(iA(t)) the same), α mod 2π can’t change for any eigenstate of U(t), and α also can’t change discontinuously by a multiple of 2π, so it can never become at all unequal to its value in the linearized case.


Computational Effort of a Hamiltonian Applied to a System
  • Suppose we’re given a time-dependent Hamiltonian H(t), a specific initial state v, and a time interval (t0=0, t)
    • We can of course compute the operator A(t) from H.
  • We’ll call A[v] the computational effort exerted according to the specific action operator A (or “by” H acting from t0 to t) on the initial state v.
    • Later we will see some reasons why this identification makes sense.
      • For now, take it as a definition of what we mean by “computational effort”
  • If we are given only a set V of possible initial vectors,
    • the (maximum, minimum) work of A (or of H from t0 to t) is (5).
  • If we had a prob. dist. over V (or equivalently, a mixed state ρ),
    • we could instead discuss the expected work (6) of A acting on V:

(5) A+[V] :≡ max v∈V A[v],  A−[V] :≡ min v∈V A[v]

(6) Ā[ρ] :≡ E v A[v] = Tr(ρA)
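Since the work of A on a state v is the expectation value ⟨v|A|v⟩, these quantities are easy to compute when V is the whole unit sphere: the max/min work are then A's extreme eigenvalues, and the expected work under a mixed state ρ is Tr(ρA). A sketch with an illustrative 2×2 action operator:

```python
import numpy as np

# Sketch: work of A on v is <v|A|v>.  Over all unit vectors, its max/min
# are A's extreme eigenvalues; a mixed state rho averages it to Tr(rho A).
A = np.array([[2.0, 0.5], [0.5, 1.0]])      # illustrative action operator
w = np.linalg.eigvalsh(A)
min_work, max_work = w[0], w[-1]

rho = np.diag([0.7, 0.3])                   # illustrative density matrix
expected_work = np.trace(rho @ A).real      # = 0.7*2.0 + 0.3*1.0 = 1.7
print(min_work <= expected_work <= max_work)
```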


Computational Difficulty of Causing a Desired Change
  • If we are interested in taking v0 to v1, and we have a set 𝒜 of available action operators A (implied, perhaps, by a set of available Hamiltonians H(t)),
    • we define the minimum effort or difficulty of getting from v0 to v1, (7).
      • Maximizing over 𝒜 isn’t very meaningful, since it may often yield ∞.
  • And if we have a desired unitary transform U that we wish to perform on any of a set of vectors V, given a set 𝒜 of available action operators,
    • then we can define the minimum (over 𝒜) worst-case (over V) effort to perform U, or “worst-case difficulty of U” (8);
      • similarly, we could discuss the best-case effort to do U;
    • or (if we have vector probabilities) the minimum (over 𝒜) expected (over V) effort to do U, or “expected difficulty of U” (9).

(7) D(v0→v1) :≡ min {A∈𝒜 taking v0 to v1} A[v0]

(8) D(U) :≡ min {A∈𝒜 implementing U} max v∈V A[v]

(9) D̄(U) :≡ min {A∈𝒜 implementing U} E v∈V A[v]


The Justification for All This…
  • Why do we insist on referring to these concepts as “computational effort” or “computational difficulty?”
    • One could imagine other possible terms, such as “amount of change,” “physical effort,” the original “action of the Hamiltonian” etc.
      • What is so gosh-darned “computational” about this concept?
  • Answer: We can use these concepts to quantify the size or difficulty of, say, quantum logic-gate operations.
    • And by extension, classical reversible operations embedded in quantum operations…
      • And by extension, classical irreversible Boolean ops, embedded within classical reversible gates with disposable ancillas…
    • As well as larger computations composed from such primitives.
  • The difficulty of a given computational op (considered as a unitary U) is given by its effort, minimized over the available action set 𝒜…
    • We can meaningfully discuss an operation’s minimum, maximum, or expected effort over a given space of possible input states.


But, you say, Hamiltonian energy is only defined up to an additive constant…
  • Still, the difficulty of a given U can be a well-defined (and non-negative) quantity, IF…
    • We adopt an appropriate and meaningful zero of energy!
  • One useful convention:
    • Define the least eigenvalue (ground state energy) of H to be 0.
      • This ensures that energies are always positive.
  • However, we might want to do something different than this in some cases…
    • E.g., if the ground-state energy varies, and it includes energy that had to be explicitly transferred in from another subsystem…
  • Another possible convention:
    • We could count total gravitating mass-energy…
  • Anyway, let’s agree, at least, to just always make sure that all energies are positive, OK?
    • Then the action is always positive, and we don’t have to worry about trying to make sense of a negative “amount of computational work.”
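The first convention above is a one-line operation: subtract the least eigenvalue of H. A minimal sketch with an illustrative Hamiltonian:

```python
import numpy as np

# Sketch: shift H so its ground-state energy is zero, making all
# energies (and hence all actions) non-negative.
H = np.array([[1.0, 0.4], [0.4, -2.0]])          # illustrative Hamiltonian
H0 = H - np.linalg.eigvalsh(H)[0] * np.eye(2)    # subtract ground energy
print(np.all(np.linalg.eigvalsh(H0) >= -1e-12))  # all energies now >= 0
```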


Energy as Computing
  • Given that “Action is computation,”
    • That is, amount of computation,
      • where the suffix “-ation” denotes a noun,
        • i.e., the act itself,
    • What, now, is energy?
  • Answer: Energy is computing.
    • By which I mean, “rate of computing activity.”
      • The suffix “-ing” denotes a verb,
        • the (temporal) carrying out of an action…
  • This should be clear, since note that H(t)[ψ(t)] = (dA/dt)[ψ(0)] = Im⟨ψ(t)|ψ(t+dt)⟩/dt = dα(t)/dt…
    • Thus, the instantaneous Hamiltonian energy of any given state is exactly the rate at which computational effort is being (or would be) exerted “on” (or “by,” if you prefer) that state.


Applications of the Concept
  • How is all this useful?
    • It lets us calculate time/energy tradeoffs for performing operations of interest.
    • It can help us find (or define) lower bounds on the number of operations of a given type needed to carry out a desired computation.
    • It can tell us that a given implementation of some computation is optimal.


Time/Energy Tradeoffs
  • Suppose you determine that the difficulty of a desired v1→v2 or U(V) (given the available actions 𝒜) is A.
    • For a multi-element state set V, this could be a minimum, maximum, or expected difficulty…
  • And, suppose the energy that is available to invest in the system in question is at most E.
    • This then tells you directly that the minimum/maximum/expected (resp.) time to perform the desired transformation will be t ≥ A/E.
      • To achieve equality might require varying the energy of the state over time, if the optimal available H(t) says to do so…
  • Conversely, suppose we wish to perform a transformation in at most time t.
    • This then immediately sets a scale-factor for the magnitude of the energy E that must be devoted to the system in carrying out the optimal Hamiltonian trajectory H(t); i.e.,E ≥ A/t.
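As a worked example with illustrative numbers: a NOT gate has worst-case difficulty A = h/2 (a figure derived in the single-qubit analysis later in this deck), so with 1 eV of available energy the bound t ≥ A/E comes out to about two femtoseconds.

```python
# Worked example of t >= A/E with illustrative numbers:
# the worst-case difficulty of a NOT gate is A = h/2.
h = 6.62607015e-34      # Planck's constant, J*s (exact SI value)
E = 1.602176634e-19     # available energy: 1 eV, in joules
A = h / 2               # difficulty of a NOT, in J*s
t_min = A / E
print(t_min)            # ~2.07e-15 s, i.e. about two femtoseconds
```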


Single-Qubit Gate Scenario

Introduce basics of quantum computing first?

  • Let’s first look at 2-state (1-qubit) systems.
    • Later we’ll consider larger systems.
  • Let U be any unitary operator in U2.
    • I.e., any arbitrary 1-qubit quantum logic gate.
  • Let the vector set V consist of the “sphere” of all unit vectors in the Hilbert space ℋ2.
    • Given this scenario, the minimum effort to do any U is always 0 (just let v be an eigenvector of U), and is therefore uninteresting.
      • Instead we’ll consider the maximum effort.
  • What about our space 𝒜 of available action operators?
    • Suppose for now, for simplicity, that all time-dependent Hermitian operators on ℋ2 are available as Hamiltonians.
      • Really we only need the time-independent ones, however.
    • Thus, 𝒜 consists of all (constant) Hermitian operators.


The Bloch Sphere

(Image courtesy Wikipedia)

  • Gives a convenient way to visualize the projective (phase-free) 2D Hilbert space as a sphere in ordinary 3D space.
    • An angle of θ in Hilbert space corresponds to an angle of 2θ on the Bloch sphere.
  • Also reveals how real spin orientations of quantum particles in 3D space correspond to superpositions of |↑⟩ (up) and |↓⟩ (down) spins.
    • Relative magnitude of |↑⟩ and |↓⟩ ↔ “latitude” on the sphere
    • Relative phase of |↑⟩ and |↓⟩ ↔ “longitude” on the sphere


Analysis of Maximum Effort
  • The worst-case difficulty to do U (in this scenario) arises from considering a “geodesic” trajectory in U2.
    • All the worst-case state vectors just follow the “most direct” path along the unit sphere in Hilbert space to get to their destinations.
      • Other vectors “go along for the ride” on the necessary rotation.
  • The optimal unitary trajectory U(t0,t) then amounts to a continuous rotation of the Bloch sphere around a certain axis in 3-space…
    • where the poles of the rotation axis are the eigenvectors of U.
    • Also, there’s a simultaneous (commuting) global phase-rotation.
  • If we also adopt the convention that the ground-state energy of H is defined to be 0,
    • Then the global phase-rotation factor goes away,
  • And we are left with a total effort A that turns out to be exactly equal to θ,
    • where θ ∈ [0, π] is simply the (minimum) required angle of Bloch-sphere rotation to implement the given U.


Some Special Cases
  • Pauli operators X,Y,Z (including X=NOT), as well as the Hadamard gate:
    • Bloch sphere rotation angle = π (rads)
    • Worst-case difficulty: h/2
  • Square-root of NOT, also phase gate (square root of Z):
    • Rotation angle π/2, difficulty = h/4.
  • “π/8” gate (square root of phase gate):
    • Rotation angle π/4, difficulty = h/8.


Fidelity and Infidelity
  • The fidelity (“fraction of similarity”) between pure states u, v is defined as F(u,v) :≡ |⟨u|v⟩|.
    • So, F² is the probability of conflating the two.
  • Define the infidelity between u, v as I(u,v) :≡ (1 − F(u,v)²)^(1/2).
  • Thus, I² = 1 − F² is the probability that if state u is measured in a basis that includes v as a basis vector, it will project to a basis state other than v.
    • Infidelity is thus a distance metric between states…

(wv)

Any u lies atsome angle relative to valong a planein Hilbert space

defined by 0, v and somevector w that’sorthogonal to v.

w

u = Fv+Iw

I(u,v) = sin()

0

v

F(u,v) = cos()


Difficulty of Achieving Infidelity
  • Guess what: a Bloch-sphere rotation by an angle of θ gives a maximum (over V) infidelity of I₊(θ) = sin(θ/2).
    • Meanwhile, the minimum fidelity is cos(θ/2)…
      • You’ll notice that F² + I² = 1, as probabilities should.
  • Therefore, achieving an infidelity of I requires performing a U whose worst-case difficulty is at least A = 2ℏ·arcsin(I).
    • However, the specific initial states that actually achieve this infidelity under the optimal rotation are Bloch-sphere “equator” states.
      • These are equal superpositions of high and low energy eigenstates;
      • they perform a quantum-average amount of computational work that is only half of the maximum effort.
  • Thus, the actual difficulty D required for a specific state to achieve an infidelity of I is only half of the worst-case difficulty of U, or D = A/2 = ℏ·arcsin(I).
    • And so, a specific state that exerts an amount of computational effort 𝒻 ≤ πℏ/2 can achieve an infidelity of at most I = sin(𝒻/ℏ), while maintaining a fidelity of at least F = cos(𝒻/ℏ)…
      • a nice simple relation… especially if we use units where ℏ = 1…


Multi-Qubit Gates
  • Some multi-qubit gates are easy to analyze…
    • E.g., “controlled-U” gates that perform a unitary U on one qubit only when all of the other qubits are “1”
  • If the space of Hamiltonians is truly totally unconstrained, then (it seems) the effort of these will match that of the corresponding 1-bit gates.
    • However, in reality we don’t have such fine-tailored Hamiltonians readily available.
  • A more thorough analysis would analyze the effort in terms of a Hamiltonian that’s expressible as a sum of realistically-available, 1- and 2-qubit controllable interaction terms.
    • We haven’t really thoroughly tried to do this yet…


Conclusion
  • We can define a clear and stable measure of the “length” of any continuous state trajectory in Hilbert space. (Call it “computational effort.”)
    • It’s simply given by the action of the Hamiltonian.
      • It has a nice geometric interpretation in the complex plane.
  • From this, we can accordingly define the “size” (or “difficulty”) of any unitary transformation.
    • As the worst-case (or average-case) path length, minimized over the available Hamiltonians.
  • We can begin to quantify the difficulty of various quantum gates of interest…
    • From this, we can compute lower bounds on the time to implement them for states of given energy.


Improvements to the Computational Interpretation of Energy

Operation Angles, RMS Action, Relationships to Energy Uncertainty

Motivation for this Section
  • The previous section taught us that the Hamiltonian action taken by a quantum system can be identified with the amount of computational “effort” exerted by that system.
    • This can be thought of as its “potential” computational work
  • However, as in daily life, our “effort” quantity is something that can easily be wasted…
    • Exerting effort may not accomplish any “actual” computational work.
      • Example: Energy eigenstates merely rotate their phase angles; they don’t achieve any amount of infidelity.
        • They never change to a distinguishable state.
          • They never compute a “result” that’s measurably different from their “input”
    • Thus our “effort” quantity may overestimate the actual “usefulness” of a given physical computation.
      • And, we’ll see shortly that it can sometimes even underestimate it!
  • Can we define effective computational work in some way that avoids including the “useless work” that is associated with such wasted effort?
    • That is the goal of the following slides.


Operation Angle Between Two Similar States


  • Consider any two normalized state vectors v, v′ that are very similar to each other.
    • I.e., they differ by a small amount δv = v′ − v → 0.
      • E.g., they could be “adjacent” points along some continuous state trajectory.
  • Let’s define the operation angle between v and v′, written θ(v,v′), as: θ(v,v′) :≡ arcsin(Inf(v,v′)) = arccos(Fid(v,v′)).
    • As θ → 0, sin θ → θ, so as δv → 0 the operation angle approaches the infidelity, θ → Inf(v,v′).
  • Theorem: As δv → 0, we also have θ → ‖Im⟨v|δv⟩·v + i·δv‖.
    • Thus, we can get away without taking sines or cosines!
  • Goal: Figure out what relationship the operation angle has to the Hamiltonian operator H.
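The theorem's limiting formula can be checked against the arccos definition directly. A numeric sketch (a random 4-d state and a displacement of size ~1e-5; the rng seed is arbitrary):

```python
import numpy as np

# Sketch: compare theta = arccos|<v|v'>| with ||Im<v|dv> v + i dv|| for
# a small displacement dv = v' - v between normalized states.
rng = np.random.default_rng(0)
v = rng.normal(size=4) + 1j * rng.normal(size=4)
v /= np.linalg.norm(v)
d = (rng.normal(size=4) + 1j * rng.normal(size=4)) * 1e-5
vp = (v + d) / np.linalg.norm(v + d)     # a nearby normalized state v'
dv = vp - v

theta = np.arccos(min(1.0, abs(np.vdot(v, vp))))
approx = np.linalg.norm(np.imag(np.vdot(v, dv)) * v + 1j * dv)
print(np.isclose(theta, approx, rtol=1e-3))  # agree in the small-dv limit
```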



Relationship Between the Action and the Operation Angle

Unit sphere in n-d Hilbert space (2n real dimensions)

  • Let the displacement δv be generated by a short unitary U′ = e^(iHδt) → 1 + iHδt.
    • I.e., close to the identity.
  • Consider now the action of the operator δA = Hδt on v.
    • We know that δv = i·δA(v).
  • We can decompose δv into ivδφ + iwδθ, where
    • δφ is the average phase angle accumulation of v’s coefficients in the energy basis,
      • i.e., the increment in effort φ, or Hamiltonian action;
    • δθ is the operation angle, or angle in a direction towards a state w that’s orthogonal to v.
      • Based on this, we find that δθ = ∆H[v]·δt.
  • We can also express δA(v) = H(v)δt as uδξ, where u is the unit vector H(v)/|H(v)| and δξ = |H(v)|δt = (H²[v])^(1/2)·δt.
    • Note that δξ is the increment in RMS Hamiltonian action.
      • Note that this is always at least the increment in average Hamiltonian action δφ = δA[v] = H[v]δt.
  • We also have the nice Pythagorean relation δφ² + δθ² = δξ².
    • This results from the fact that H[v]² + ∆H[v]² = H²[v].
  • Is ξ perhaps a better measure of “computational effort” than the φ that we defined earlier?
    • After all, it upper-bounds both the amount of phase rotation and the infidelity accumulation.
      • Globally, it’s conserved like H, since its eigenstates are the same.

[Diagram: δv decomposed into orthogonal components ivδφ (along v) and iwδθ (along w = ∆Hv/∆H[v]), with iuδξ as the hypotenuse]
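The Pythagorean relation can be checked numerically for any Hermitian H; the sketch below (with an example Hamiltonian and state of my own choosing, not from the slides) computes δφ = H[v]δt, δθ = ∆H[v]δt, and δξ = (⟨H²⟩[v])^{1/2}δt directly:

```python
import math

H = [[1.0, 0.5+0.0j], [0.5+0.0j, 2.0]]   # a Hermitian "Hamiltonian"
v = [1/math.sqrt(2), 1j/math.sqrt(2)]     # a normalized state

Hv = [sum(H[i][j]*v[j] for j in range(2)) for i in range(2)]
H_mean  = sum(a.conjugate()*b for a, b in zip(v, Hv)).real  # H[v] = <v|H|v>
H2_mean = sum(abs(a)**2 for a in Hv)                        # <H^2>[v] = |Hv|^2
H_var   = H2_mean - H_mean**2                               # Delta-H[v]^2

dt = 1e-6
dphi   = H_mean * dt                 # increment in Hamiltonian (phasal) action
dtheta = math.sqrt(H_var) * dt       # increment in operation angle
dxi    = math.sqrt(H2_mean) * dt     # increment in RMS action
assert abs(dphi**2 + dtheta**2 - dxi**2) < 1e-24   # Pythagorean relation
```

The relation holds identically because ⟨H²⟩[v] = H[v]² + ∆H[v]² for any state.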

Mean, Deviation, Variance and Standard Deviation Operators
  • For any vector v, abbreviate its squared norm as ⟨v⟩ :≡ ⟨v|v⟩ = |v|².
  • For any operator A, we can define the mean value operator of A, written ⟨A⟩, as the (nonlinear) operator ⟨A⟩ :≡ λv. A[v]·v = λv. ⟨v|A|v⟩·v.
    • Note: Every vector is an eigenvector of ⟨A⟩, with eigenvalue A[v]! (The average eigenvalue of A in the probability distribution over eigenvectors that is implied by state v.)
      • ⟨A⟩ is just an operator of multiplication by a scalar, but it’s a non-constant scalar that varies (nonlinearly) as a function of the operand v to which ⟨A⟩ is applied.
    • Note: Although ⟨A⟩ does not perform a linear transformation of a vector v that it’s applied to, the operator itself is still a linear function of the underlying operator A that it’s derived from.
      • Thus we can still add together different mean-value operators, multiply them by scalars, etc.
  • We can then define the (nonlinear) deviation operator of A, written ∆A, as the operator ∆A :≡ A − ⟨A⟩.
    • Note: Although ∆A does not represent a linear transformation of the vector space (the expression ∆Av is not linear in v), ∆A itself does still depend linearly on A.
  • Next, we define the variance operator of A, written ∆A², as ∆A² :≡ λv. ⟨∆Av⟩·v = λv. |∆Av|²·v.
    • It’s the squared length of the deviation vector resulting from applying ∆A to v.
  • Finally, we define the standard deviation operator of A, also written ∆A where no confusion with the deviation operator can result, as ∆A :≡ λv. (∆A²[v])^{1/2}·v = λv. |∆Av|·v.
  • With these definitions, ∆A[v] is exactly the standard deviation (Σᵢ pᵢ(aᵢ − a)²)^{1/2} of the eigenvalues of A according to the probability distribution over the eigenstates of A that’s encoded by v, where aᵢ = the ith eigenvalue of A, pᵢ = |⟨i|v⟩|² = the probability of eigenstate i, and a = A[v] = the average eigenvalue of A.

Some Useful Properties of the Mean, Deviation & Variance Operators
  • For any operator A,
    • Let the notation A(v) denote |Av|, the scalar length of Av.
  • Directly from the previous slide’s definitions, we have that

∆A(v) = |∆Av|

      • The standard deviation of A for any vector v is the length of the vector ∆Avthat is generated by applying the deviation operator ∆A to v.

∆A2(v) = |∆Av|2

      • The variance of A for any vector v is the squared length of the vector ∆Av
  • For any Hermitian operator H, we have that

⟨H²⟩[v] = |Hv|²

      • The mean value of the square of H for any vector v is the squared length of the vector Hv that is generated by applying H to v.
        • Proof: (see notes)

⟨H²⟩ = ∆H² + ⟨H⟩²

      • The mean value operator of the square of H is the sum of the variance operator of H and the square of the mean value operator of H.
        • Proof: (see notes)
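Both properties can be spot-checked numerically; in the sketch below (the example matrix and state are my own choices), ⟨H²⟩[v] is computed both as ⟨v|H²|v⟩ and as |Hv|², and the variance decomposition is verified via the deviation vector:

```python
import math

# Hermitian H with complex off-diagonal coupling, and a normalized state
H = [[1.0, 0.5-0.5j], [0.5+0.5j, 2.0]]
v = [1/math.sqrt(2), 1/math.sqrt(2)]

def matvec(M, x): return [sum(M[i][j]*x[j] for j in range(2)) for i in range(2)]
def inner(x, y):  return sum(a.conjugate()*b for a, b in zip(x, y))

Hv = matvec(H, v)
H2_mean_a = inner(v, matvec(H, Hv)).real     # <v|H^2|v>
H2_mean_b = sum(abs(a)**2 for a in Hv)       # |Hv|^2
assert abs(H2_mean_a - H2_mean_b) < 1e-12    # <H^2>[v] = |Hv|^2

H_mean = inner(v, Hv).real                   # <H>[v]
dev    = [a - H_mean*b for a, b in zip(Hv, v)]        # (Delta-H)v
H_var  = sum(abs(a)**2 for a in dev)                  # Delta-H^2 [v]
assert abs(H2_mean_b - (H_var + H_mean**2)) < 1e-12   # <H^2> = Delta-H^2 + <H>^2
```

The first identity relies on H being Hermitian (so ⟨v|H²|v⟩ = ⟨Hv|Hv⟩); the second is the usual mean-of-square decomposition.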

The Operation Angle Can be Greater than the Action!
  • Now, a natural question to ask is,
    • Is the increment in operation angle δθ always upper-bounded by the increment of Hamiltonian action δφ?
      • This would be natural if the Hamiltonian action truly represents the computational effort or maximum computational work.
  • It turns out that this is false!
    • Example: In a two-state system with energy eigenvalues 0 and 1, let the probability p of the high-energy state be ε ≪ 1.
      • Then H[v] = ε, while it turns out ∆H[v] = (ε − ε²)^{1/2} ≈ ε^{1/2} > ε.
        • In this situation, we have δθ/δφ = ∆H[v]/H[v] ≈ ε^{−1/2}.
          • This ratio becomes arbitrarily large as ε becomes smaller!
  • Thus, in general (in the limit of states with small ε) the Hamiltonian action does not bound the instantaneous rate of “infidelitization” (motion towards orthogonal states) at all!
    • However, note that in this example, with a time-independent Hamiltonian, the total infidelity never gets to 100%.
      • The state merely orbits around the low-energy pole of the Hilbert sphere.
  • What if we consider time-dependent Hamiltonians?
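The ε-scaling claimed above is easy to confirm numerically; the sketch below (a demo of mine, not from the slides) tabulates the ratio ∆H[v]/H[v] for the two-state example with eigenvalues 0 and 1:

```python
import math

# State v = sqrt(1-eps)|0> + sqrt(eps)|1>, with energy eigenvalues 0 and 1
ratios = []
for eps in [1e-2, 1e-4, 1e-6]:
    H_mean = eps                        # <H>[v] = eps
    H_dev  = math.sqrt(eps - eps**2)    # Delta-H[v] = sqrt(eps(1-eps)) ~ sqrt(eps)
    ratios.append(H_dev / H_mean)       # dtheta/dphi ~ eps^(-1/2)

assert all(b > 9*a for a, b in zip(ratios, ratios[1:]))  # grows without bound
assert abs(ratios[1] - 100) < 1                          # eps = 1e-4 gives ~100
```

Each hundredfold decrease in ε multiplies the infidelitization-to-phase-rotation ratio by roughly ten, so the Hamiltonian action indeed fails to bound the instantaneous rate of motion towards orthogonal states.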

The Action of a Time-Dependent Hamiltonian Doesn’t Limit the Number of Orthogonal Transitions!


  • We saw that a typical state with ∆H[v] ≫ H[v] makes a small orbit around the ground state.
    • Thus it never achieves orthogonality with its original state.
  • But now, consider “linearizing” these small orbits by rotating for small angles about a series of different poles.
    • Such a trajectory could be implemented by rapidly switching through a sequence of different Hamiltonians.
  • With proper selection of poles, we can see that overall, the trajectory can proceed in almost a straight line, from the initial state v towards any desired state w ⊥ v.
    • Moreover, the rate at which infidelity is accumulated along this path is dθ/dt = ∆H[v], the rate at which Hamiltonian action is accumulated is dφ/dt = H[v], and ∆H[v] > H[v] everywhere along the path!
  • Integrating along the path, we see that the total effort φ does not actually limit the total operation angle θ!
    • Hamiltonian action does not limit the time to reach an orthogonal state for time-dependent Hamiltonians!


[Note: Need to work example through inmore detail to prove itworks more rigorously.]

Virtual Energy, Virtual Effort
  • We can declare the “virtual” total energy of any state v as being its RMS energy, R(v) = |Hv| = (⟨H²⟩[v])^{1/2}.
    • This gives the instantaneous rate at which the RMS action ξ = ∫dξ = ∫R(v)·dt gets accumulated as the state moves along its trajectory v(t).
      • So, define the virtual computational effort exerted to be the RMS action ξ!
        • And the total rate of exertion of virtual computational effort to be R.
  • Now, the virtual energy R is vectored (the vector is Hv) along the direction “towards” the state vector u = Hv/R.
    • Projecting this energy onto v’s own direction gives ⟨v|H|v⟩ = H[v], the Hamiltonian energy or rate of phase rotation of v.
      • Rate of phasal computation; rate of accumulation of phasal action.
    • Projecting onto the orthogonal direction w = ∆Hv/∆H[v] gives ⟨w|Hv⟩ = ⟨v|∆H·H|v⟩/∆H[v] = ∆H[v] (check math), the rate of infidelitization of v.
      • Rate of effective (i.e., infidelitizing) computation; rate of accumulation of effective action.
        • Computation that actually moves probability mass towards neighboring states!
          • The variance of H in state v is the rate at which probability mass is flowing away from v.

Virtual Effort, Phasal Effort, Effective Work
  • Along any continuous state trajectory v(t),
    • The virtual effort is the RMS action ξ = ∫ (⟨H²⟩[v])^{1/2} dt,
    • The phasal effort is the Hamiltonian action φ = ∫ H[v] dt,
    • The effective work is the accumulated operation angle θ = ∫ ∆H[v] dt.
  • They are related by the Pythagorean relation dφ² + dθ² = dξ²: the increments form a right triangle with legs dφ (phasal effort, p.e.) and dθ (effective work, e.w.), and hypotenuse dξ (virtual effort, v.e.).
Next: Dealing with Locality
  • In real physical systems, arbitrary Hamiltonians that would take us directly between any two states are not available!
    • Instead, physical Hamiltonians are constrained to be local.
      • Composed by summing up terms for the interactions between neighboring subsystems.
        • This is due to the fact that field-theory Lagrangians are given by integrating a Lagrangian density function over space
          • Or, integrating total Lagrangian action over spacetime
  • We would like to see how and whether our concepts such as
    • “effective computational work” (accumulated operation angle) θ
    • “amount of phasal computation” (accumulated Hamiltonian action) φ
    • “total computational effort” (accumulated RMS action) ξ

can be applied to these more restricted kinds of situations.

    • Eventually, we’d also like to understand how the computational interpretation of energy in local physics relates to relativistic effects such as time dilation, mass expansion, etc.

Some terminology we’ll need…
  • A transformation is any unitary U which can be applied to the state of a quantum system.
    • A transformation is effective with respect to a given basis B if the basis states aren’t all eigenstates of U.
      • Thus, U rotates at least some of the states in B to other, non-identical states.
  • A local operation is a transformation applied only to a spatially compact, closed subsystem.
    • Keep in mind, any region can be considered closed over sufficiently short time intervals.
      • Thus the overall U of any local theory can be approached arbitrarily closely by compositions of local operations only.
  • A transformation trajectory is any decomposition of a larger transformation into a sequence of local operations.
    • Approximate, but exact in the limit as time-per-op → 0.

Example: Discrete Schrödinger’s Equation


  • We can discretize space to an (undirected) graph of “neighboring” locations….
  • Then, there is a discrete set of distinguishable states or configurations(in the position basis) consisting of a function from these locations to the number of quanta of each distinct type at that location:
    • For Fermions such as electrons:
      • For each distinct polarization state: 0 or 1 at each loc.
    • For Bosons such as photons:
      • For each distinct polarization state: 0 or more at each loc.
  • We can then derive an undirected graph of neighboring configurations.
    • Each link corresponds to the transfer of one quantum between two neighboring locations,
    • Or (optionally) the creation/annihilation of one quantum at a location.
  • The system’s Hamiltonian then takes the following form:
    • For each configuration c, there is a term Hc
      • Gives the “potential” energy (incl. total particle rest masses?) of that configuration.
    • For each link between two neighboring configurations c,d corresponding to the motion of quanta with rest mass m, include a term Hcd:
      • Comes from the kinetic energy term in Schrödinger’s eq.
    • For each link corresponding to particle creation/annihilation, include a term based on fundamental coupling constants
      • Corresponding to strengths of fundamental forces.

[Diagram: a 4-location graph, and the corresponding 1-fermion configuration graph with occupancy states 1000, 0100, 0010, 0001; a link between configurations c and d corresponds to the motion of one quantum between neighboring locations]

Operation Angles of Short Unitaries
  • For any short, local operation U = e^{iHδt} → 1 + iHδt,
    • Where “short” here means close to the identity matrix,
  • Define U’s operation angleθU as the following:θU = maxv arcsin(Inf(v,Uv)) = maxv arccos(Fid(v,Uv))
    • Where v ranges over all normalized (|v|=1) states of the local subsystem in question.
      • Or over a subset V of these that are considered “accessible.”
  • In other words, consider each possible “input” state v.
    • After transformation by U, it rotates to Uv.
    • The inner product with the original v is v†Uv.
    • The magnitude of the inner product (fidelity) is given by the cosine of the angle between the original (v) and final (Uv) vectors.
      • Just as with dot products between real-valued vectors.
    • We focus on the angle required to yield that fidelity.
  • Maximizing over the possible vs gives us a definition of the operation angle of Uthat is independent of the actual state v.
    • The minimum would not be useful because it is always 0.
      • Since the eigenstates of U do not change in magnitude.
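For a small example we can brute-force this definition; the sketch below (sampling scheme and names are mine) computes θU for a diagonal 2×2 unitary U = diag(e^{iα}, e^{iβ}), for which the maximizing v is the equal superposition of the two eigenstates and θU works out to |α−β|/2:

```python
import math, cmath

def op_angle(alpha, beta, samples=1000):
    # theta_U = max over normalized v of arccos|<v|Uv>|, for U = diag(e^{ia}, e^{ib}).
    # For diagonal U, <v|Uv> depends only on p = |v_0|^2, so scan p on a grid.
    best = 0.0
    for k in range(samples + 1):
        p = k / samples
        amp = p*cmath.exp(1j*alpha) + (1-p)*cmath.exp(1j*beta)
        best = max(best, math.acos(min(abs(amp), 1.0)))
    return best

alpha, beta = 0.0, 1.0                 # eigenphase spread of 1 rad
theta = op_angle(alpha, beta)
assert abs(theta - abs(alpha - beta)/2) < 1e-6   # maximized at p = 1/2
assert op_angle(0.0, 0.0) == 0.0                 # eigenphase-equal U does nothing
```

Note that the minimum over v is indeed 0 (attained at either eigenstate), which is why the definition maximizes.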

Motivating this Definition
  • Notice that our definition of the operation angle of a transformation does not depend on the actual state of the system.
    • Only on the maximum angle of rotation over all states of the system.
  • Later, we are going to identify the number of operations or amount of physical computation taking place in a system with our definition of the operation angle.
    • Why is this approach justified?
      • Why doesn’t the actual state of the system matter?
  • Consider an ordinary logic gate, such as AND.
    • If the inputs are 0,1 and the output is currently 1, then the gate changes the output, to 0.
      • But if the inputs then change to 0,0, the output is already 0, and it isn’t changed at all by the gate!
    • Yet we say that the gate has still done some work, performed an op:
      • It has determined that the output should not change!
  • Analogously, even when a given transformation U does not rotate the actual input state, the system still has done the work of determining that the actual input isn’t one of the ones that should have rotated by the maximum amount. (Thanks to S. Lloyd for this thought.)
    • Therefore, it is reasonable to quantify the amount of computational work performed by the maximal amount of rotation, over all possible input states.

Operation Angle of a Trajectory
  • Now, for any transformation trajectory T = (Ui),
    • (A sequence of small, local unitaries Ui),
    • We can define its operation angle θT as simply the sum, over the Ui’s, of their operation angles:

Operation Angle of a Transformation
  • And now, for any transformation U (no matter how large), of any extended quantum system,
    • we can define the operation angle of U as the minimum, over all decompositions T = (Ui) of U, of the operation angle of T:

Primitive Orthogonalizing Operations
  • Define a primitive orthogonalizing operation or pop or (π/2)-op to be any transformation with an operation angle of op = (π/2) rad = 90°.
  • If a transformation U with θU = op is applied to a local system (e.g. qubit),
    • then some initial vector v of that system must get transformed by U to a vector u = Uv that is orthogonal to v.
      • Since only if u ⊥ v (u†v = 0) will cos⁻¹(|u†v|) = 90°.

Pop Example
  • Consider a 2-state quantum system w. energy eigenstates |0⟩ and |E⟩, with energies 0 & E respectively.
  • Consider now an initial state v = |0⟩+|E⟩.
  • Since the phase of |E⟩ rotates at frequency f = E/h,
    • it will make a complete cycle in the complex plane in h/E time.
      • Thus it will make a half-cycle rotation to −|E⟩ in time t = h/2E.
  • Meanwhile, state |0⟩ does not phase-rotate at all.
    • So, the new state u at time t is u = |0⟩−|E⟩.
    • Note this is orthogonal to the old state v.
  • So the time evolution U = e^{iHt/ℏ} has an operation angle θU of ≥1 pop.
    • It turns out that, in fact, it is exactly 1 pop.
  • Note both these states have an average energy of E/2.
  • So we have a system with average energy E/2 performing pops at the rate 1/t = 2E/h.
    • Can we do better than this?

[Diagram: phase rotation in the complex plane taking v = |0⟩+|E⟩ to the orthogonal state u = |0⟩−|E⟩]
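This evolution is simple enough to simulate directly; in the sketch below (working in units where h = 1, with names of my own choosing), the |E⟩ component’s phase is advanced for time t = h/2E and the overlap with the initial state is checked:

```python
import math, cmath

h = 1.0                       # work in units where h = 1
E = 1.0
t = h / (2*E)                 # claimed orthogonalization time

def evolve(state, t):
    # v = (|0> + |E>)/sqrt(2); |E> gains phase e^{2*pi*i*E*t/h}, |0> is static
    return [state[0], state[1] * cmath.exp(2j*math.pi*E*t/h)]

v = [1/math.sqrt(2), 1/math.sqrt(2)]
u = evolve(v, t)
overlap = abs(v[0].conjugate()*u[0] + v[1].conjugate()*u[1])

assert overlap < 1e-12                 # u is orthogonal to v: one pop completed
assert abs(1/t - 4*(E/2)/h) < 1e-12    # the rate 2E/h equals 4*(avg energy)/h
```

The final assertion previews the next slide: this system, with average energy E/2, pops at exactly the Margolus-Levitin rate 4Ē/h.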

Margolus-Levitin Theorem
  • Theorem: No quantum system with average energy E (relative to its lowest-energy eigenstate |0) can transition between orthogonal states at a rate faster than f = 4E/h.
    • See the original paper (in the Limits reading list) for the proof.
  • Now, notice that whenever a system has an available energy eigenstate at (or very close to) energy 2E,
    • (and this will almost always be the case in complex systems,
      • which have near-continuous bands of closely-spaced energy levels)
    • The state |0⟩+|2E⟩ is a possible initial state of the system,
      • And this state would, in fact, make orthogonal transitions at the rate f.
  • Thus, by our definition of pops, virtually any physical system with expected energy E really is, in fact, performing pops at the rate 4E/h.
    • Thus, energy Eis simply a rate of pops of 4E/h.

Cycle-length N orthogonalizing operations
  • One flaw with thinking of a pop such as |0⟩+|E⟩ → |0⟩−|E⟩ as a meaningful computational operation is that it immediately undoes itself!
    • The state transitions back to |0⟩+|E⟩, also in time t.
      • It is therefore not a computationally useful operation.
  • A useful computation should be able to transition through a very long sequence (vi) of distinct states before repeating.
    • Margolus and Levitin also show that for a cycle of N states, the frequency of orthogonal transitions is a factor of 2(N−1)/N slower than for a cycle with only N = 2 states.
    • Thus, for indefinitely long computations (N→∞), the rate of transitions between orthogonal states in such long chains approaches 2E/h for systems of average energy E.
  • We define a cycle-length-N orthogonalizing operation or cycle-N-op or oN to be an operation angle of 2(N−1)·op/N.
    • A chained orthogonalizing operation (chop) or normal op (nop, π-op, op) o :≡ o∞ is just a cycle-∞-op, o∞ = 2·op.
      • It represents a useful transition in an indefinitely-long computation.

Cycle-N-op Example
  • Consider n independent, noninteracting 2-state systems S0…Sn−1.
  • Let system i have energy eigenstates |0⟩ and |Ei⟩, where Ei = 2^i·E0.
  • Consider the initial state ⊗i (|0⟩+|Ei⟩)…
    • This is a tensor product state over all the subsystems.
    • Note that subsystem i will have (average) energy ½·2^i·E0 = 2^(i−1)·E0.
  • The whole system will therefore have a total energy of E = Σi 2^(i−1)·E0 = (2^n − 1)·E0/2.
  • Subsystem Sn−1, whose |En−1⟩ component has energy 2^(n−1)·E0, will make orthogonal transitions at the rate 2^n·E0/h.
    • Note that each of these is also an orthogonal transition of the whole system.
  • The system as a whole implements a binary counter with n bits.
    • It thus makes N = 2^n transitions before repeating.
  • Rate R = N·E0/h and energy E = (N−1)·E0/2 give us:
    • E0 = 2E/(N−1), and R = 2NE/(N−1)h, rather than the 4E/h rate of pops.
    • Thus, it’s slower by a factor of (4E/h)/[2NE/(N−1)h] = 2(N−1)/N.
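The energy and rate bookkeeping above can be verified numerically; the sketch below (n = 10 is an arbitrary choice of mine) checks the total-energy sum and the 2(N−1)/N slowdown factor, in units where h = E0 = 1:

```python
n  = 10
N  = 2**n
E0 = 1.0
h  = 1.0

# Subsystem i has eigenvalues 0 and 2^i * E0, hence average energy 2^(i-1) * E0
E_total = sum(2**(i-1) * E0 for i in range(n))
assert abs(E_total - (N - 1)*E0/2) < 1e-9      # total energy = (N-1)*E0/2

# The fastest subsystem, S_{n-1}, orthogonalizes at 2 * 2^(n-1) * E0 / h
R = 2 * 2**(n-1) * E0 / h
assert abs(R - N*E0/h) < 1e-9                  # i.e., R = N*E0/h

slowdown = (4*E_total/h) / R
assert abs(slowdown - 2*(N - 1)/N) < 1e-12     # slower than 4E/h by 2(N-1)/N
```

As N grows, the slowdown factor 2(N−1)/N approaches 2, recovering the 2E/h rate for indefinitely long computations.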

What about smaller angles?
  • We have seen that the pop (op = 90°) quantifies the minimum operation angle for orthogonalizing transformations,
    • While the chop o = 2·op is the minimum operation angle for an orthogonalizing operation in an unboundedly-long sequence of such.
  • Is there any sense in which operations with much smaller operation angles θ ≪ op could still represent “useful” computational ops?
    • No, not considered individually…
  • Consider, for any U with θU ≪ 90° and any normalized input vector v, the magnitude m = |a| of the amplitude a = v†Uv of the output vector u = Uv along the input v.
    • It must be that m = |v†Uv| ≈ 1, since if m were appreciably less than 1, then cos⁻¹m, and hence θU, would be a significant fraction of 90°.
  • Furthermore, for any vectors u, v, any subsequent unitary transforms will preserve the angle θ between u and v.
    • Since a unitary transform is just a remapping to a different orthonormal basis (like a change of coordinate system), it preserves the geometric relations between vectors.
  • Therefore, no subsequent operations can arrange for the states u, v to be significantly distinguishable (this is just Heisenberg again).
    • Thus, the original operation U makes no significant change to the state: if U were omitted, any subsequent transformations would yield a state not significantly distinguishable from the case where U was included.

Why Energy means Rate of Computation
  • For any unitary transform U (including the time-evolution of any quantum system), we will say that the number of (potentially useful physical computational) operations oU making up U is simply the numeric value of the operation angle θU, when expressed in units of normal operations o = 180°. That is, oU :≡ θU/o = θU/180°.
  • Similarly, we will say the potential amount of physical computation (or computational work) CU performed by the transform U is the dimensional quantity given by θU itself, with its units of operation angle (such as o) taken as also being units of computational work. Thus, CU = oU·o.
    • So for example, CU = 10·o is the amount of computational work performed by a transform U that is made up of 10 operations (operation angle of 1800°).
  • If we further declare the op o to be equal to h/2 (that is, we identify 180° of operation angle, i.e. 1 useful op, with 180° of phase rotation),
    • then the results of the preceding slides tell us that the rate of operations Ro :≡ oU/t = 2E/h = E/o.
    • Thus, the rate of physical computation Rc :≡ CU/t = oU·o/t = E!
  • Thus, the energy E is the exact same physical quantity as the rate of physical computation Rc!!

A Physical Computing Example
  • Consider a subsystem consisting of 1 electron, with a maximum energy level located 1 V above the minimum level.
    • Maximum energy E = 1 eV = 1.602×10−19 J.
  • This subsystem can thus perform “useful” (non-oscillatory) physical ops at a maximum rate of Ro = E/o = 2E/h = 483.6 THz.
    • It’s “computing” at a rate of 483.6 “terachops” or 483.6 trillion chained orthogonalizing operations per second
  • This seems high, but it’s “only” ~150,000 times the frequencies of today’s fastest (3.2 GHz) Pentium 4’s…
    • Only ~26 years away if Moore’s Law continues!
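The numbers quoted can be reproduced directly from the constants (approximate values for h and the electron-volt; variable names are mine):

```python
h  = 6.626e-34           # Planck's constant, J*s
eV = 1.602e-19           # 1 electron-volt in joules

Ro = 2 * (1 * eV) / h    # chained-op rate for E = 1 eV, ops/sec
assert abs(Ro - 483.6e12) / 483.6e12 < 1e-3   # ~483.6 THz, as stated

ratio = Ro / 3.2e9       # versus a 3.2 GHz clock
assert 1.4e5 < ratio < 1.6e5                  # ~150,000x, as stated
```

With a doubling time of about 1.5 years, log2(150,000) ≈ 17 doublings gives the quoted ~26 years.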

Physical vs. Logical Computation
  • A chop or nop o also corresponds to a normal (and potentially-useful) bit-operation (nobop?).
    • Note a bit is the minimum-sized subsystem we can pick.
  • However, in a real computer, we are generally using a larger number of physical bits (and ops) to emulate a smaller number of logical bits/ops.
    • E.g., a minimum-sized circuit node today has ~10⁵ electron states in the conduction band between the low and high voltage levels.
      • When we charge the node, we are changing the Fermionic occupancy numbers (0 vs. 1) of all of these electron states.
    • Each such change requires at least 1 nop, since a change in any occupancy number is distinguishable.
      • Thus, today we are using at least around 10⁵ physical nops to emulate 1 logical bop.
  • One of our goals in developing new nanocomputing technologies is to make the encoding more efficient…
    • And carry out useful logical operations using fewer physical chops.

Some Units of Physical Computational Effort/Work
  • Based on prior discussions…

Types of Energy/Action and Types of Computation
  • Different types of energy (and their associated actions) give the rates & amounts of different types of computing:
    • RMS Energy: Virtual Computational Effort
    • Hamiltonian Energy: Phasal Computational Effort
    • Ham. Deviation: Effective Computational Work
    • Negative potential energy:

Kinetic vs. Potential Energy
  • Kinetic energy rotates the phase angles of momentum eigenstates.
    • Momentum eigenstates are also eigenstates of the kinetic-energy component of the Hamiltonian.
    • Kinetic energy is effective with respect to the position basis:
      • Carries out changes of position,
      • Transfers probability mass (and energy) between neighboring position states.
    • And ineffective with respect to momentum:
      • Does not transfer probability mass (and energy) between momentum states.
    • Kinetic energy is the rate of computation “in” the position “subsystem” (basis):
      • Rate of state changes involving the spatial translation of particles (quanta).
  • Potential energy rotates the phase angles of position (configuration) eigenstates.
    • Position eigenstates are also eigenstates of the potential-energy component of the Hamiltonian.
    • Potential energy is effective with respect to the momentum basis:
      • Carries out changes of momentum (virtual particle creation/annihilation),
      • Transfers probability mass (and energy) between neighboring momentum states.
    • And ineffective with respect to position:
      • Does not transfer probability mass (and energy) between position states.
    • Potential energy is the rate of computation “in” the momentum “subsystem” (basis):
      • Rate of state changes involving creation/annihilation of particles (increase/decrease in number of quanta).
  • Position and momentum bases are Fourier transforms (a unitary transform) of each other.
    • However, the laws of physics are not symmetric with respect to exchanges of position and momentum!
      • Momentum is related to the time-derivative of position (times m).
        • Derivatives and integrals are complementary to, but not symmetrical with, each other.

Temperature is “Clock Frequency”
  • Now that we know that energy is just the rate of computing (rate of ops performed)…
    • Recall that (generalized) temperatureT = E/I is just energy per amount of total information content…
  • Therefore, temperature is nothing but the rate of computing per unit of information.
    • Measured, for example, in o/b·s (ops/bit/sec).
  • In other words, it is the physical equivalent of a computer’s clock frequency, or rate of complete parallel update steps.
  • For example, consider the meaning of “room temperature”:

300 K = 0.0259 eV/kB = 12.5 To/n·s = 8.67 To/b·s

    • That is, room temperature is nothing other than a rate of parallel physical computing of about 9 trillion ops per bit per second (9 THz clock freq.).
      • Note this is only ~3,000 times faster than today (17 years away).
    • The digital subsystem of any hypothetical CPU that is manipulating digital bits at this average frequency must, by definition, be at this temperature!
      • Superconducting electronics at cryogenic temperatures is doomed to be slower.
  • Note that today’s 3.2 GHz clock frequencies require a temperature of only ≥0.1 K in the processor’s digital subsystem.
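These figures follow directly from kB and h (approximate constant values; names are mine):

```python
import math

h  = 6.626e-34       # Planck's constant, J*s
kB = 1.381e-23       # Boltzmann's constant, J/K
T  = 300.0           # room temperature, K

ops_per_nat_s = 2 * kB * T / h                 # 2E/h with E = kB*T per nat
ops_per_bit_s = ops_per_nat_s * math.log(2)    # a bit is ln(2) nats
assert abs(ops_per_nat_s - 12.5e12) / 12.5e12 < 0.01   # ~12.5 To/n*s
assert abs(ops_per_bit_s - 8.67e12) / 8.67e12 < 0.01   # ~8.67 To/b*s

# Temperature whose per-bit step rate equals a 3.2 GHz clock:
T_clock = 3.2e9 * h / (2 * kB * math.log(2))
assert abs(T_clock - 0.111) < 0.01             # ~0.1 K, as stated
```

So the "clock frequency" reading of temperature reproduces both the ~9 trillion ops/bit/sec figure for 300 K and the ~0.1 K figure for a 3.2 GHz update rate.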

Internal vs. Interaction Temperatures
  • Two subsystems that are well-isolated from interactions with each other may have different internal temperatures.
    • Internal temperature is the step rate associated with the accessible internal energy.
      • The updating of the internal state of each subsystem.
  • But also, between any two subsystems, we may also define an interaction temperature.
    • This is the step rate associated with the energy of interaction (particle exchange) between the subsystems, per bit of information in the interaction subsystem.
      • The states of the interaction subsystem represent different distributions of quanta between the original two subsystems.
        • The interaction energy updates this part of the overall system state.

Interaction Subsystem Example
  • Consider a simple system with two spatially-distinct subsystems, “left” and “right”, each with 2 locations.
  • Suppose the whole system contains 2 Bosonic quanta.

[Diagram: Graph of Locations — left and right subsystems, 2 locations each — and the Graph of Configurations for 2 Bosonic quanta, with occupancy states 2000, 1100, 0200 (both quanta on the left); 1001, 1010, 0101, 0110 (one on each side); and 0011, 0020, 0002 (both on the right). The interaction subsystem distinguishes these three groups, i.e., the distribution of quanta between left and right.]
Information in a Subsystem
  • Given any subsystem defined solely by a partition of the configuration space,
    • We can precisely define the total information content and reduced entropy content of that subsystem.
  • Let there be N total states in the configuration space, n subsets in the partition, and let Ni be the number of states in subset i.
    • The total information content I of the subsystem is given by the usual expression I = log n.
    • The expected entropy content Sex of the subsystem is the Shannon entropy Sex = −Σi (Ni/N)·log(Ni/N):
      • Note this is the entropy content of the subsystem when the whole system is at its maximum entropy.
        • It will be less than I if the Ni’s are not all equal.
  • For any given mixed state of the system, the reduced entropy content Sred of any subsystem is given by the von Neumann entropy of the reduced density matrix for that subsystem.
    • Recall that reduced entropy is subadditive under subsystem composition, for mutually entangled subsystems.

Energy in a Subsystem
  • We can define the total kinetic energy in any subsystem (partition of the configuration space) as the sum of the kinetic energies on all links that cross the partition boundaries.
    • Consider a given pure state, with its implied wavefunction Ψs over configuration states s.
    • For any link ℓ = {a,b} between two neighboring configurations a, b, the kinetic energy associated with that link is given by:
  • With this definition, note that the kinetic energies of independent subsystems are additive.
    • We can easily extend the definition to arbitrary mixed states by taking the expectation value of the link’s energy.

Temperature of a Subsystem
  • We can now precisely define the instantaneous generalized and thermodynamic temperatures of any subsystem, given any mixed state ρ.
    • Note that these definitions do not require the system or subsystem to be in an equilibrium state!
  • Let the average energy in the subsystem be E, let its total information capacity be I, its expected entropy Sex [define this], and its reduced entropy Sred.
    • Then its generalized temperature is Tgen = E/I,
    • Its heat content is H = E(Sred/I),
    • its expected average thermodynamic temperature is Teq = ESex/I2,
    • and its average thermodynamic temperature is T = H/I = ESred/I2
  • The latter definition is what is traditionally thought of as “temperature” for subsystems considered in isolation.

[Note: Not sure if these definitions are quite right; still working on them.]

CORP Device Model

(CORP = Computing with Optimal, Realistic Physics)

[Diagram: the device state space is divided into a coding subsystem, comprising a logical subsystem and a redundancy subsystem, and a non-coding subsystem, comprising a structural subsystem and a thermal subsystem]

  • The physical degrees of freedom (sub-state-spaces) of a device are broken down into coding and non-coding parts.
    • These are then further subdivided as shown in the diagram.
  • Devices are characterized by geometry, delay, and operating & interaction temperatures within & between devices and their subsystems and subcomponents.


Relativistic Spacetime, Mass, and Momentum

A Computational Interpretation

Time as an Amount of Computation
  • We saw earlier that (average) energy is the amount of computation CU performed by a given unitary transform U, divided by the time t taken to perform the transformation: E = CU/t.
    • We can turn this around, and note that the time that passes is the amount of computation CU divided by the energy E: in symbols, t = CU/E.
  • So, if we had a natural unit of energy E, we could say that the elapsed time for a given transformation U is just another name for the amount CU(E) of computation that U would do on any system defined to have energy E, which would be our “clock.”
    • One natural choice is to take E :≡ EP = (ℏc⁵/G)^(1/2), the Planck energy.
      • This or a small multiple may be the maximum energy of a single particle, in theories of quantum gravity.
      • E ≈ 2 GJ ≈ explosive output of ½ ton of TNT. Call E 1 “blast.”
    • Then, 1 natural op o gives us a natural “computational” time unit of to = o/E ≈ 1.7×10⁻⁴³ s, which we will call the tick.
      • Since o = h/2 = πℏ, to = πℏ/E = πtP, where tP = (ℏG/c⁵)^(1/2) is the Planck time.
  • Hypothesis: The tick is the absolute minimum possible time to perform a natural op on any physically possible bit-system.
    • Follows from the assumption that no single particle can have energy greater than E (in any reference frame). (See, e.g., gr-qc/0207085.)
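These magnitudes are easy to check numerically. A minimal sketch using standard CODATA constants (nothing assumed from the slides beyond the definition o = h/2 = πℏ):

```python
import math

# Standard physical constants (SI, CODATA values)
hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 2.99792458e8         # speed of light, m/s
G = 6.67430e-11          # Newton's constant, m^3/(kg*s^2)

E_P = math.sqrt(hbar * c**5 / G)   # Planck energy, J
t_P = math.sqrt(hbar * G / c**5)   # Planck time, s
op = math.pi * hbar                # one natural op, o = h/2 = pi*hbar (J*s)
tick = op / E_P                    # time per natural op at Planck energy

print(f"E_P  = {E_P:.3e} J")    # ~1.956e9 J, i.e. ~2 GJ
print(f"tick = {tick:.3e} s")   # ~1.69e-43 s = pi * t_P
```

This reproduces both figures quoted above: the Planck energy is about 2 GJ, and the tick is about 1.7×10⁻⁴³ s.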

Distance as Number of Motional Ops
  • A system of definite energy (e.g., E) performs computation at a definite rate R = E = C/t per unit time.
    • If it also has a definite velocity (e.g., c), then its position x = ct, and so it also performs a definite amount of computation per unit distance traversed: C/x = C/ct = E/c = pP = 6.525 kg·m/s ≈ 2×10³⁴ o/m.
      • For a general system, some of its computational ops may update its internal state (“functional” ops),
        • while others update its position (“motional” ops).
      • We will see that, for a system with zero rest mass (e.g., a photon), all of its ops are motional, i.e., concerned with updating its position.
  • Thus, we can also identify the distance x between two points in space with the amount of computation C that would be performed by a photon of Planck energy E passing between those points.
    • This is valid since x = C(c/E), a constant times C.
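The Planck-momentum figure and the ops-per-meter conversion can be checked the same way (a sketch with standard constants; o = h/2 = πℏ as defined earlier):

```python
import math

hbar = 1.054571817e-34   # reduced Planck constant, J*s
c = 2.99792458e8         # speed of light, m/s
G = 6.67430e-11          # Newton's constant, m^3/(kg*s^2)

E_P = math.sqrt(hbar * c**5 / G)  # Planck energy, J
p_P = E_P / c                     # Planck momentum, kg*m/s
op = math.pi * hbar               # one natural op, o = h/2

ops_per_meter = p_P / op          # motional ops per meter at Planck energy

print(f"p_P   = {p_P:.4f} kg*m/s")        # ~6.52 kg*m/s
print(f"ops/m = {ops_per_meter:.2e}")     # ~2e34 o/m
```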

Functional vs. Motional Ops
  • Consider a system undergoing a certain amount Cfunc of “functional” (internal) evolution in its rest frame, without moving.
    • And also consider a system undergoing an amount Cmot of “motional” transformation that just shifts it in space, without evolving it internally.
  • Arguably, these transformations rotate the state vector in mutually orthogonal directions in Hilbert space.
    • Thus, concurrent small operation-angle increments θfunc and θmot should make a combined operation angle θtot à la Pythagoras: θtot ≈ (θfunc² + θmot²)^(1/2).
      • E.g., 1° N + 1° E → ~1.414° NE at the equator.
  • Thus, we expect that summing such increments along a “straight-line” trajectory gives Ctot = (Cfunc² + Cmot²)^(1/2).
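The Pythagorean combination rule is just orthogonal vector addition; a quick check of the 1° N + 1° E example (plain arithmetic, no slide-specific assumptions):

```python
import math

theta_func = 1.0  # degrees of "internal" rotation
theta_mot = 1.0   # degrees of "motional" rotation

# Orthogonal increments combine as (f^2 + m^2)^(1/2):
theta_tot = math.hypot(theta_func, theta_mot)
print(theta_tot)  # ~1.414 degrees, matching the slide's example
```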

[Diagram: starting from the initial state, an angle θfunc rotates toward a distinct internal state, an angle θmot rotates toward a distinct positional state, and θtot is the combined rotation.]

Time Dilation from EP Invariance?
  • Interestingly, if we just assume that a particle of our reference energy E = EP has this same energy in all frames,
    • as in, e.g., Magueijo & Smolin ’02, gr-qc/0207085,
  • then, for this particle, since we have rest energy E0 = Cfunc/t′ = E = Ctot/t, we must conclude that t′ = t(Cfunc/Ctot); but since Cfunc = (Ctot² − Cmot²)^(1/2), we have that t′ = t(1 − (Cmot/Ctot)²)^(1/2).
  • But, since we defined time t as Ctot for all energy-E particles, if we analogously define x as Cmot for all energy-E particles (regardless of velocity), we have: t′ = (t² − x²)^(1/2).

Note: this is the correct formula for relativistic time dilation!

What is momentum?
  • To try to complete our picture, let us first guess that momentum p for any object in a given direction is the rate of motional ops, p = Rmot = Cmot/t, in that direction… and see what relation results.
  • Oops! The correct relativistic relation should be E² = p² + m0² (in units where c = 1), and that is not what we get!
    • What went wrong?
    • How do we fix it?
      • Note we can’t fix it by saying that m0 = Cint/t, because that wouldn’t be velocity-invariant.

Wrong!

Computation Components
  • Look, it can’t really be right to say that Ctot = (Cmot² + Cfunc²)^(1/2), because intuitively, amounts of computation (like energy) ought to combine additively…
    • And recall, the Cmot & Cfunc amounts that we started with were the amounts of motional and internal computation if the system only did one or the other, not if doing both at once.
  • So, instead let us just say that Ctot = Cmot + Cfunc, while keeping the definitions Ctot = Et and Cfunc = E0t′.
    • In other words, we’ll say the true amount of motional computation is what is left over when you subtract the functional from the total: Cmot = Ctot − Cfunc.

Functional vs. Motional Computation
  • The amount of functional computation Cfunc in a system during a transformation is the system’s total amount of computation, as measured in that system’s rest frame.
    • Differs from the total computation Ctot for moving systems (at least given particle energies ≪ E), since some of the computation carries out translation of the system rather than updating its state.
  • In time t, a moving system’s internal (proper) time is t′ = γt.
    • In its own frame, the moving system thinks it has energy E0 = m0c², where m0 is its rest mass. Recall that m0 = γm.
  • Thus, the amount of functional computation Cfunc is Cfunc = E0t′ = (γmc²)(γt) = γ²(mc²·t) = γ²Et = γ²Ctot.
    • So, γ² is the fraction of the computation that is functional.
  • Let the amount of motional computation Cmot be the rest: Cmot = Ctot − Cfunc = Ctot(1 − γ²) = β²Ctot.
    • Thus, β² is the fraction of the computation that is motional.
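A sanity check of this bookkeeping. Note that throughout this deck γ := (1 − β²)^(1/2), the reciprocal of the conventional Lorentz factor; the β value below is an arbitrary illustration:

```python
import math

beta = 0.6                       # illustrative velocity, as a fraction of c
gamma = math.sqrt(1 - beta**2)   # this deck's gamma = sqrt(1 - beta^2) = 0.8

C_tot = 1.0                      # total computation, in arbitrary units
C_func = gamma**2 * C_tot        # functional fraction: gamma^2 = 0.64
C_mot = beta**2 * C_tot          # motional fraction: beta^2 = 0.36

# The two fractions are complementary, so the amounts add to the total:
print(math.isclose(C_func + C_mot, C_tot))  # True
```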

Relativistic Momentum
  • Now, we discover that we can define momentum as p = Cmot/x.
    • I.e., the motional ops per unit distance traversed
      • in a given direction; more generally pi = (Cmot)i/xi, where the motional computation is also a vector, given by (Cmot)i = βi²Ctot.
  • When we plug this version into the relativistic energy-momentum relation, everything checks out!
    •  Our computational interpretation of energy & momentum is fully consistent with special relativity!
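The claim checks out numerically in units where c = 1 (an illustrative sketch; the E and β values are arbitrary, and γ is this deck's γ = sqrt(1 − β²)):

```python
import math

# Natural units: c = 1. Deck convention: gamma = sqrt(1 - beta^2).
E = 1.0                         # total energy
beta = 0.6
gamma = math.sqrt(1 - beta**2)  # 0.8
t = 1.0                         # elapsed time
x = beta * t                    # distance traversed

C_tot = E * t                   # total computation
C_mot = beta**2 * C_tot         # motional computation (additive definition)
p = C_mot / x                   # momentum as motional ops per unit distance
E0 = gamma * E                  # rest energy (deck convention E = R/gamma)

# Check the relativistic energy-momentum relation E^2 = p^2 + m0^2 (c = 1):
print(math.isclose(E**2, p**2 + E0**2))  # True
```

Note that p = β²Et/(βt) = βE, the standard relativistic momentum, which is why the relation now holds.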

Kinetic Energy vs. Motional Energy
  • Kinetic energy is standardly defined as the difference between an object’s total energy when moving and its rest mass-energy: Ekin = Etot − E0.
    • We’ll stick with this definition.
  • We can also define motional and functional energies based on the corresponding types of computation:
    • Motional energy Emot :≡ Cmot/t.
    • Functional energy Efunc :≡ Cfunc/t.
    • Thus, Emot = Etot − Efunc.
  • Now, are motional and kinetic energies the same?
    • And likewise, are internal and rest-mass energies the same?
  • No, because E0 = Cfunc/t′, not Cfunc/t.
    • So, Efunc = γE0 (this decrease accounts for time dilation),
    • And Emot = Ekin + (1−γ)E0.
      • Motional energy is kinetic energy, plus the missing part of rest energy!

Motional Energy and Momentum
  • The previous slide implies that imparting kinetic energy to an object not only adds to its motional energy, but also converts some of its existing internal energy to a motional form. How much?
    • Exercise: [1 point] Show that Emot/Ekin = 1 + γ.
  • Thus at non-relativistic speeds (that is, where β→0 and thus γ→1), Emot ≈ 2 Ekin.
    • Slow objects make a ~100% “matching contribution” for investments of kinetic energy you put into them!
  • Thus, Emot ≈ 2(½mv²) = mv² = pv.
    • Motional energy is just momentum times velocity!
    • In fact, this relation is exact even relativistically, since Emot = Cmot/t = Cmot/(x/v) = (Cmot/x)v = pv.
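Both the exercise's identity and the exact Emot = pv relation can be verified numerically (illustrative values; c = 1, and γ is this deck's γ = sqrt(1 − β²)):

```python
import math

beta = 0.6
gamma = math.sqrt(1 - beta**2)  # deck convention: gamma = sqrt(1 - beta^2) = 0.8

E = 1.0                 # total energy (c = 1)
R = gamma * E           # rest energy, since E = R/gamma
E_func = gamma * R      # functional energy F = gamma*R = gamma^2 * E
E_mot = E - E_func      # motional energy M = E - F
E_kin = E - R           # kinetic energy K = E - R

p = beta * E            # relativistic momentum (c = 1)
v = beta

print(math.isclose(E_mot / E_kin, 1 + gamma))  # True: the exercise's identity
print(math.isclose(E_mot, p * v))              # True: E_mot = pv holds exactly
```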

Connection with Hamilton’s Principle
  • Recall that the Lagrangian L = pv − H.
  • Recall also the Hamiltonian is total energy Etot.
    • We are free to include rest-mass energy in it if we want.
  • Thus, the Lagrangian is nothing other than the negative of the functional (internal) energy! L = Emot − Etot = −Efunc!
  • And so, the action of the Lagrangian is just the negative of the amount of functional computation! AL = ∫L dt = −∫Efunc dt = −Cfunc.
  • Therefore, to extremize the (L-) action is to extremize the amount of functional computation, i.e., the proper time t′.
    • Extreme L-action ⇒ extreme amt. of functional computation ⇒ extreme proper time ⇒ extreme operation angle of functional computation ⇒ extreme phase angle ⇒ phase angle stationary under small path variations ⇒ nearby paths accumulate similar amplitudes ⇒ amplitudes add constructively ⇒ total amplitude has large magnitude ⇒ path is (locally) the most probable one!

Relations Between Some Important Types of Energy

  • Motional energy: M = E − F = pv = (1/γ − γ)R = β²E (≥0, >K)
  • Kinetic energy: K = E − R = (1/γ − 1)R (≥0, ≤M)
  • Hamiltonian: H = E + P = E − N (≥0, constant)
  • Total “real” object energy: E = R/γ = mc² (≥0, ≥R)
  • Lagrangian: L = M − H = N − F (extremized, usually minimized)
  • Rest energy: R = m0c² (≥0, constant)
  • Functional energy: F = γR (≥0, ≤R)
  • Potential energy: P (often <0); its negative N = −P (often >0)
  • (γ = ½ in this example; energies are measured from the zero-energy vacuum reference level.)
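The γ = ½ example can be reproduced numerically, cross-checking the equivalent formulas for each energy type (R = 1 for illustration; γ is this deck's γ = sqrt(1 − β²)):

```python
import math

gamma = 0.5              # deck's gamma = sqrt(1 - beta^2); the slide's example
beta2 = 1 - gamma**2     # beta^2 = 0.75
R = 1.0                  # rest energy, in arbitrary units

E = R / gamma            # total "real" object energy = 2.0
F = gamma * R            # functional energy = 0.5
M = E - F                # motional energy = 1.5
K = E - R                # kinetic energy = 1.0

# Cross-check the equivalent expressions listed for M and K:
print(math.isclose(M, (1/gamma - gamma) * R))  # True
print(math.isclose(M, beta2 * E))              # True
print(math.isclose(K, (1/gamma - 1) * R))      # True
```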

Geometric Relationships Between Various Energies

IS THIS HELPFUL?

  • “Boost” a rest mass R to velocity β by rotating its energy through an angle of θ = arcsin(β).
    • Functional energy F is then the projection of the rest mass-energy onto the original axis.
    • Total object energy E is the energy that, when projected onto the boosted axis (moving frame), gives the rest energy.
    • Kinetic energy is the non-rest energy, K = E−R.
    • Motional energy is the non-functional energy, M = E−F.

[Diagram: the rest mass-energy R is rotated through boost angle θ, with velocity β = sin(θ) and gamma γ = cos(θ). Functional energy F = γR is R’s projection onto the original axis; total object energy E is the length that projects onto the boosted axis to give R; kinetic energy K = (1/γ − 1)R; motional energy M = (1/γ − γ)R.]

Discussion of Energy Geometry
  • General principle: The energy of a state has not just magnitude but also direction,
    • in the sense that the energy is associated with the tangent vector to the state’s trajectory in Hilbert space.
  • A relativistic boost to a different inertial reference frame corresponds to a unitary rotation U of the Hilbert space.
    • For a wavefunction over spacetime, the rotated wavefunction ψ′ = Uψ is defined by ψ′(x,t) = ψ((x+βt)/γ, (t+βx)/γ).
      • For infinitesimal boosts δβ, this approaches ψ′(x,t) = ψ(x+δβ·t, t+δβ·x).
        • The isophases of the eigenstates of δβ are hyperbolic curves that cross the x and t axes and become asymptotically parallel to the light cone.
    • This rotation is characterized by an angle θ in the range [−π/2, +π/2].
      • Is this related to the operation angle of a U that actually accelerates a stationary particle to velocity β? Unclear!
  • Since the boost eigenstates aren’t energy eigenstates, a boost doesn’t conserve energy,
    • which is why relativistic mass increase occurs.
  • A boost can inflate or deflate the energy; the latter if some of the energy is motional energy directed opposite to the boost.

Important Points to Remember
  • Energy is simply the rate of physical computing.
    • Heat = Rate of computing in bits that are entropy.
    • Temperature = Rate of computing per bit of info.
      • It is physical “clock speed”; room temp. ≈ 9 THz.
    • Rest mass-energy = Computing rate in rest frame.
    • Momentum = Motional computation per unit distance.
    • Motional energy = Motional computing per unit time.
    • Kinetic energy = Total energy minus rest mass-energy.
    • Internal energy = Internal (functional) ops per unit time.
      • Time dilation factor times rest mass-energy.
  • In the next module, we will see how these & other facts impact the fundamental limits of computing…
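The "room temp. ≈ 9 THz" figure can be reproduced under one plausible reading of the deck's definitions: at temperature T each bit (ln 2 nats) carries energy kT ln 2, and one natural op costs πℏ of action. This is my reconstruction of the arithmetic, not an authoritative derivation:

```python
import math

k_B = 1.380649e-23      # Boltzmann constant, J/K
hbar = 1.054571817e-34  # reduced Planck constant, J*s
T = 300.0               # room temperature, K

op = math.pi * hbar                  # one natural op, o = h/2 = pi*hbar
rate = k_B * T * math.log(2) / op    # ops per second per bit

print(f"{rate:.2e} ops/s per bit")   # ~8.7e12, i.e. roughly 9 THz
```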


General Relativity: Gravity from Spacetime Geometry

And Its Role in the Physics of Computation

GR Primer
  • In General Relativity (GR), Einstein’s theory of gravity, gravitational fields are described as a curvature that is present in the underlying spacetime geometry.
    • “Matter tells spacetime how to curve; spacetime tells matter how to move.”
  • Einstein Field Equations: Rμν − ½R·gμν = (8πG/c⁴)·Tμν

Where:
  • Rμν = Ricci tensor
  • R = Ricci scalar
  • gμν = spacetime metric
  • G = Newton’s constant
  • c = speed of light
  • Tμν = stress-energy tensor

(Left-hand side: local spacetime geometry. Right-hand side: local energy, momentum, pressure & stress.)

Black Holes
  • Black holes are objects with such strong gravity that not even light can escape.
    • Defined by “event horizon” surface
      • Point of no return
  • Their “radius” is proportional to their mass: rs = 2GM/c².
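The proportionality is the Schwarzschild radius, rs = 2GM/c². A quick sketch, using the Sun's mass as an illustrative input:

```python
G = 6.67430e-11    # Newton's constant, m^3/(kg*s^2)
c = 2.99792458e8   # speed of light, m/s

def schwarzschild_radius(mass_kg: float) -> float:
    """Event-horizon radius r_s = 2GM/c^2 of a non-rotating black hole."""
    return 2 * G * mass_kg / c**2

M_sun = 1.989e30   # solar mass, kg
r = schwarzschild_radius(M_sun)
print(f"{r:.0f} m")  # ~2954 m: the Sun, if collapsed, would be ~3 km in radius
```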

Relations Between GR and the Physics of Computation
  • Black holes have entropy:
    • Work in the ’70s by Bekenstein and Hawking.
  • The “holographic principle” (’t Hooft, Susskind) postulates:
    • The maximum entropy within any region of space is bounded by its surface area, at 1 nat per (2LP)², where LP is the Planck length.
      • Conjectural corollary: for any region of spacetime, its spacetime volume is proportional to its total information content I times the number of operations O performed within it.
        • Derivation sketch: V·t ~ A·R·t ~ S·E·t ~ I·O
          • V = volume, t = time, A = area, R = radius, S = max. entropy, E = energy.
          • The constant of proportionality in this relation is I·O = (1/2)(c⁷k/ℏ²G²)V·t.
  • The precision of spacetime measurements is limited; this suggests local limits to qubit sizes
    • See recent papers by Lloyd and Ng
  • Spacetime geometry could reflect the causal structure and energy distribution of an underlying quantum computation
    • Seth Lloyd, “A Theory of Quantum Gravity Based on Quantum Computation,” 2005.
