Restoring vision to the blind, Part II: What will the patients see?

Gislin Dagnelie, Ph.D.

Lions Vision Research & Rehabilitation Center

Wilmer Eye Institute

Johns Hopkins University School of Medicine

Department of Veterans Affairs Rehabilitation Center

Augusta, GA

April 15, 2005


Lines of attack

  • Systems engineering (“brute force” or maybe just pragmatic)

  • Electrode/tissue engineering (“remodeling the interface”)

  • Likely limitations (space and time)

  • (Low) vision science/rehab


Spatial limits: retinal rewiring (Robert Marc)

  • Ultrastructural evidence from donor RP/AMD retinas:

    • Extensive rewiring of inner retinal cells

    • Neurite processes spread over long distances (~300 μm)

    • Glial cells migrate into choroid

  • Injected electrical current may spread through neurite tangle

Marc RE et al., Prog Retin Eye Res 22:607-655 (2003)


Spatial limits: implications of retinal rewiring

  • Stimulating degenerated retina may be like writing on tissue paper with a fountain pen:

    • Charge diffusion over distances up to 1°

    • Phosphenes likely to be blurry (Gaussian blobs), not sharp

    • Minor effect if electrodes are widely spaced (≥2°)

    • Phosphenes from closely spaced electrodes may overlap/fuse

      Retinal prosthetic vision may be pretty blurry…
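
To make that picture concrete, the short Python sketch below renders a phosphene map as a sum of Gaussian blobs, one per electrode; the field size, electrode pitch, and blur width are illustrative assumptions, not parameters of any actual implant.

```python
import numpy as np

# A minimal sketch, not a device model: each electrode's phosphene is rendered as a
# Gaussian blob whose width stands in for charge diffusion of roughly a degree.
# Field size, electrode pitch, and blur width are illustrative assumptions.
DEG = 10            # simulated field size, degrees
RES = 10            # samples per degree
PITCH_DEG = 2.0     # electrode spacing; >= 2 deg keeps neighboring blobs separable
SIGMA_DEG = 0.5     # Gaussian width standing in for charge spread

y, x = np.mgrid[0:DEG * RES, 0:DEG * RES] / RES   # visual-field coordinates in degrees

def phosphene_map(amplitudes):
    """Sum one Gaussian blob per electrode, weighted by its stimulation amplitude."""
    field = np.zeros_like(x)
    n = amplitudes.shape[0]
    for i in range(n):
        for j in range(n):
            cx, cy = (i + 0.5) * PITCH_DEG, (j + 0.5) * PITCH_DEG
            field += amplitudes[i, j] * np.exp(
                -((x - cx) ** 2 + (y - cy) ** 2) / (2 * SIGMA_DEG ** 2))
    return field

# Example: activate a diagonal of a 5x5 electrode array
percept = phosphene_map(np.eye(5))
print(percept.shape, round(float(percept.max()), 2))
```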


Temporal limits: persistence (Humayun et al.)

  • Single electrode, acute testing:

    • Flicker fusion occurs at 25-40 Hz

  • Multi-electrode implant testing:

    • Rapid changes are hard to detect

    • Flicker fusion at lower frequency?

      Maybe prosthetic vision will be not just blurry, but also streaky…
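
A leaky temporal integrator is one simple way to picture such persistence; the sketch below blends each new frame into a decaying trace of the previous ones, with an assumed frame rate and time constant rather than measured values.

```python
import numpy as np

# A minimal sketch of percept persistence: each new frame is blended into a decaying
# trace of earlier frames (a first-order leaky integrator). The frame rate and time
# constant are illustrative assumptions, not measured values.
FRAME_RATE = 30.0    # Hz, assumed update rate of the camera/stimulator
TAU = 0.2            # s, assumed persistence time constant

def simulate_persistence(frames, tau=TAU, frame_rate=FRAME_RATE):
    """Low-pass filter a sequence of dot-raster frames, frame by frame."""
    alpha = 1.0 - np.exp(-1.0 / (tau * frame_rate))   # per-frame update weight
    percept = np.zeros_like(frames[0], dtype=float)
    blended = []
    for f in frames:
        percept = percept + alpha * (f - percept)
        blended.append(percept.copy())
    return blended

# Example: a single bright dot stepping across a 6x10 raster leaves a fading trail
frames = [np.zeros((6, 10)) for _ in range(10)]
for t, f in enumerate(frames):
    f[3, t] = 1.0
trail = simulate_persistence(frames)
print(np.round(trail[-1][3], 2))   # residue from earlier dot positions is still visible
```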


And then there is background noise: Many blind RP patients see “flashes” like this…




…or even this!


Caution

It is naïve to expect that we will implant a retinal prosthesis, turn on the camera, and just send the patient home to practice.


Lines of attack

  • Systems engineering (“brute force” or maybe just pragmatic)

  • Electrode/tissue engineering (“remodeling the interface”)

  • Likely limitations (space and time)

  • (Low) vision science/rehab


Daily activities: How many dots do they take?


Developing an implantable prosthesis

  • How does it work?

  • Why should it work?

  • What did blind patients see in the OR?

  • What do the first implant recipients tell us?

  • What could the future look like?

  • What’s up next?


Simulation techniques

  • “Pixelized” images shown to normally-sighted and low vision observers wearing video headset

  • Images are gray-scale only, no color

  • Layout of dots in crude raster, similar to (current and anticipated) retinal implants

  • Subject scans the raster across the underlying image via:

    • Mouse/cursor movement, or

    • Head movement (camera or head tracker)
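
The heart of such a simulation can be sketched in a few lines; the Python below is an illustrative stand-in (not the study's stimulus code) that averages a gray-scale image over a dot raster, quantizes it to a few gray levels, and drops out a random subset of dots.

```python
import numpy as np

# A minimal, illustrative version of the pixelized rendering (not the study's
# stimulus code): average a gray-scale image (values in [0, 1]) over an N x N dot
# raster, quantize to a small number of gray levels, and drop out a random subset
# of dots. The defaults below are assumptions chosen for the example.
def pixelize(image, grid=16, gray_levels=8, dropout=0.0, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    h, w = image.shape
    dots = np.zeros((grid, grid))
    for i in range(grid):
        for j in range(grid):
            patch = image[i * h // grid:(i + 1) * h // grid,
                          j * w // grid:(j + 1) * w // grid]
            dots[i, j] = patch.mean()          # local average -> one dot
    dots = np.round(dots * (gray_levels - 1)) / (gray_levels - 1)   # gray-level quantization
    dots[rng.random((grid, grid)) < dropout] = 0.0                  # inactive "electrodes"
    return dots

# Example: 16x16 raster, 4 gray levels, 30% dropout, on a synthetic gradient image
img = np.linspace(0.0, 1.0, 128 * 128).reshape(128, 128)
print(pixelize(img, grid=16, gray_levels=4, dropout=0.3).shape)
```

In this form, grid size, gray levels, and dropout map directly onto function arguments; dot and gap size would enter at the rendering stage, when the dots are actually drawn in the video headset.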


Performance under “idealized” conditions

Subjects performed the following tasks:

  • Use live video images to perform “daily activities”

  • Walk around an office floor

  • Discriminate a face in a 4-alternative forced-choice test

  • Read meaningful text


Live test: candy pour, 16x16


Live test: Mobility


Live test: Mobility, 6x10


Live test: spoon in 4x4 view



Face identification: Methods

  • 4 groups (M/F, B/W) of 15 models (Y/M/O, 5 each)

  • Face width 12°

  • Parameters (varied one by one from standard):

    • Dot size: 23-78 arcmin

    • Gap size: 5-41 arcmin

    • Grid size: 10X10, 16X16, 25X25, 32X32

    • Random dropout: 10%, 30%, 50%, 70%

    • Gray levels: 2, 4, 6, 8

  • Tests performed at 98% and 13% contrast

  • Each parameter combination presented 6 times

  • Data from 4 normally-sighted subjects
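
For concreteness, the sketch below shows the kind of bookkeeping a 4AFC design like this implies; where the slide gives only a range, just the endpoints are listed, and the scoring helper is an illustration rather than the study's analysis code.

```python
# A minimal sketch of the 4AFC bookkeeping, not the study's code. One parameter is
# varied at a time, each condition is shown 6 times, and percent correct is compared
# with the 25% chance level of a 4-alternative task. Where the slide gives only a
# range (dot 23-78 arcmin, gap 5-41 arcmin), just the endpoints are listed here.
CHANCE = 0.25
REPEATS = 6
SWEEPS = {
    "dot_arcmin": [23, 78],
    "gap_arcmin": [5, 41],
    "grid":       [10, 16, 25, 32],
    "dropout":    [0.10, 0.30, 0.50, 0.70],
    "gray":       [2, 4, 6, 8],
}

def percent_correct(trials):
    """trials: iterable of ((parameter, value), correct_bool) -> fraction correct per condition."""
    tally = {}
    for cond, correct in trials:
        c, n = tally.get(cond, (0, 0))
        tally[cond] = (c + int(correct), n + 1)
    return {cond: c / n for cond, (c, n) in tally.items()}

# Example with placeholder responses: 5 of 6 correct at grid = 16
log = [(("grid", 16), True)] * 5 + [(("grid", 16), False)]
print(percent_correct(log), "chance level:", CHANCE)
```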


Face identification: Dot size


Face identification: Dot spacing


Face identification: Grid size


Face identification: Dropout percentage


Face identification: Gray levels


Face identification: Summary

  • Performance well above chance, except for:

    • large dots and/or gaps (i.e., <6 cycles per face width)

    • small grid or small dots (<0.5 face width)

    • >50% drop-out

    • <4 gray levels

  • Low contrast does not seriously reduce performance

  • Significant between-subject variability (unfamiliar task?)


Reading test: Procedure


Reading test: Sample clips


Reading test: Methods

  • Novel, meaningful text; grade 6 level

  • Scored for reading rate and accuracy

  • Font size 31, 40, 50, 62 points (2-4° characters)

  • Parameters (varied separately from standard):

    • Dot size: 23-78 arcmin

    • Gap size: 5-41 arcmin

    • Grid size: 10X10, 16X16, 25X25, 32X32

    • Random dropout: 10%, 30%, 50%, 70%

    • Gray levels: 2, 4, 6, 8

  • Tests performed at 98% and 13% contrast
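
As a small worked example of “scored for reading rate and accuracy”, the helper below converts a trial's word counts and duration into words per minute and proportion correct; the function and argument names are illustrative, not the study's scoring script.

```python
# A small worked example of scoring for rate and accuracy; the function and argument
# names are illustrative, not the study's scoring script.
def score_reading_trial(words_presented, words_correct, seconds):
    rate_wpm = words_correct / (seconds / 60.0)     # correctly read words per minute
    accuracy = words_correct / words_presented      # proportion of words read correctly
    return rate_wpm, accuracy

# Example: 60-word passage, 54 words correct, read in 90 s -> 36 wpm at 90% accuracy
print(score_reading_trial(60, 54, 90))
```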


Reading speed: Font size


Reading speed: Dot size


Reading speed: Dot spacing


Reading speed: Grid size



Reading speed: Gray levels


Reading test: Summary

  • Reading adequate, but drops off for:

    • Small fonts (<6 dots/char)

    • Small grid (plateau beyond 25X25 dots)

    • >30% drop-out (esp. low contrast)

  • Note: even 2 gray levels are adequate

  • Low contrast reduces performance, but reading still adequate

  • Much less intersubject variability than for face identification (familiar task?)


Introducing Virtual Reality

  • Flexible tasks:

    • Object and maze properties can be varied “endlessly”

    • Difficulty level can be adjusted (even automatically)

  • Precise response measures:

    • Subjects’ actions can be logged automatically

    • Constant response criteria can be built in

  • It’s safe!


Virtual mobility task

  • Ten different “floor plans” in a virtual building

  • Pixelized and stabilized view, 6x10 dots

  • Drop-out percentage and dynamic noise varied

  • Use cursor keys to maneuver through 10 rooms
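
A toy version of such a trial makes the automatic logging concrete; the floor plan, key names, and log format in the sketch below are assumptions for illustration, not the actual virtual environment.

```python
import numpy as np

# A toy version of a virtual-mobility trial, to make the automatic logging concrete.
# The floor plan, key names, and log format are illustrative assumptions, not the
# actual virtual environment.
WALL, FLOOR = 1, 0
floor_plan = np.array([
    [1, 1, 1, 1, 1, 1],
    [1, 0, 0, 0, 0, 1],
    [1, 0, 1, 1, 0, 1],
    [1, 0, 0, 1, 0, 1],
    [1, 1, 1, 1, 1, 1],
])
MOVES = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

def run_trial(start, keys):
    """Apply a sequence of cursor keys; return the final position and an event log."""
    pos, log = list(start), []
    for key in keys:
        dr, dc = MOVES[key]
        r, c = pos[0] + dr, pos[1] + dc
        if floor_plan[r, c] == WALL:
            log.append((key, "bump"))     # collision: position unchanged
        else:
            pos = [r, c]
            log.append((key, "step"))
    return tuple(pos), log

print(run_trial((1, 1), ["right", "down", "down", "right"]))
```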




Prosthetic vision simulations: Visual inspection/coordination

Playing checkers: a challenge for visually guided performance


Introducing Eye Movements

  • Until now, free viewing conditions:

    • Subject can scan eye across dot raster

    • Mouse or camera movement used to scan raster across scene

  • Electrodes will be stabilized on the retina:

    • When the eyes move, dots move along

    • Mouse or camera used to move scene “behind” dots

  • Tough task!
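
The geometric difference can be stated in a couple of lines; the sketch below (illustrative coordinates, not the experiment's software) captures only where the dot raster lands on the retina under each viewing mode.

```python
import numpy as np

# A two-branch sketch of the geometric difference (illustrative coordinates, not the
# experiment's software): with a screen-fixed raster, a gaze shift moves the dots
# across the retina; with a gaze-locked raster, the dots stay on the same retinal
# spot no matter where the eye points, so only the camera/mouse can change the view.
def raster_on_retina(mode, gaze_deg):
    """Position of the dot raster relative to the fovea, in degrees."""
    if mode == "free":
        return -np.asarray(gaze_deg, dtype=float)   # shifts opposite to the gaze change
    if mode == "gaze_locked":
        return np.zeros(2)                          # pinned to the same retinal location
    raise ValueError(mode)

for gaze in [(0.0, 0.0), (3.0, -1.0), (-5.0, 2.0)]:
    print(gaze, raster_on_retina("free", gaze), raster_on_retina("gaze_locked", gaze))
```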


Video pair: Face identification task, free-viewing vs. gaze-locked


Face identification, free-viewing vs. gaze-locked: Learning

FV = free viewing, FX = fixation controlled


Video pair: Reading task, free-viewing vs. gaze-locked


Prosthetic vision simulations: Low Vision Science

  • Reading with pixelized vision, stabilized vs. free-viewing:

    • Accuracy falls off a little sooner, and reading rate is 5x lower, BUT

    • Spatial processing properties (dots per character width and characters per window drop-off) do not change

    • At low contrast, window restriction more severe (not shown)


Prosthetic vision simulations: Rehabilitation

  • Learning makes all the difference:

    • Accuracy increases over time, both for high and for low contrast

    • Reading speed increases over time, for high and low contrast

    • Stabilized reading takes longer to learn, but improves relative to free viewing, both in accuracy and speed


So what’s the use of simulations?

Simulating prosthetic vision can help in:

  • Determining requirements for vision tasks

  • Exploring and understanding wearers’ reports

  • Helping to find solutions for wearers’ problems

  • Conveying the “prosthetic experience” to clinicians and public

    AND:

  • Designing rehabilitation programs to help future prosthesis recipients


Functional prosthetic vision: How far off?

  • Our subjects perform quite well with 16X16 (or more) electrodes

  • They can learn to perform most tasks with 6X10

  • They can learn to avoid obstacles with 4X4

  • Typical daily living activities will require larger numbers of electrodes (at least 10X10), and intensive rehabilitation


Conclusion

Prosthetic vision is not just a technological challenge:

It promises to open up new areas of vision research and rehabilitation.

http://lions.med.jhu.edu/lvrc/gd.htm



Simulations supported by:

National Eye Institute and Foundation Fighting Blindness

Special thanks to:

  • Anna Cronin-Scott

  • Paul Dagnelie

  • Chris De Marco, Ph.D.

  • Jasmine Hayes

  • Pearse Keane

  • Wentai Liu, Ph.D.

  • Laura Martin

  • Kathy Turano, Ph.D.

  • Matthias Walter

  • Vivian Yin

  • Second Sight, LLC

Towards artificial sight: A long, exciting road ahead!

