Patient information extraction in digitized X-ray imagery

Hsien-Huang P. Wu

Department of Electrical Engineering, National Yunlin University of Science and Technology,

123 University Road, Section 3, Yunlin, Touliu, Taiwan, ROC

Received 26 October 2001; received in revised form 21 August 2003; accepted 15 September 2003

Abstract
  • This paper presents a new method to extract the patient information number (PIN) field automatically from the film-scanned image using image-analysis techniques.
  • The extracted PIN is linked with the Radiology Information System or Hospital Information System, so the image scanned from the film can be filed into the database automatically.
  • We believe the success of this technique will benefit the development of the Picture Archiving and Communication System and teleradiology.
Introduction
  • Two disadvantages:
    • First, before the film is printed and scanned, the patients’ information (name, ID, etc.) has already been recorded in the Hospital Information System or Radiology Information System database when the radiograph was taken.
    • Second, the user needs to enter the patient’s information right before or after each scan in order to file the digitized image; batch scanning of the films is therefore impossible.
Patient information block search and extraction
  • The patient information label is almost always attached to the corner of the film.
  • The reason this algorithm works is that the human body is full of complicated texture, while only the label area contains many straight field lines.

Label extraction
  • Our next step is to remove all these extra details and leave only the area of the label.
  • The main technique used here is horizontal and vertical projection.
  • We accumulate (project) all the white pixels on each row of the image to form a vector PH, and all the white pixels on each column to form a vector PV.
  • Given an M × N (M pixels by N lines) binary image with f(x, y) = 1 for white pixels and f(x, y) = 0 for black pixels: PH(y) = Σx f(x, y) and PV(x) = Σy f(x, y).
Select two thresholds TV and TH
  • Any position x for which PV(x) < TV is set to 0.
  • Any position y for which PH(y) < TH is set to 0.
  • After this thresholding, all the significant line segments are stored as nonzero elements of the PV and PH vectors. These nonzero elements are used to identify the field-separating lines.
  • The left/right boundary corresponds to the first/last nonzero element of PV, and the top/bottom boundary corresponds to the first/last nonzero element of PH.
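The projection-and-thresholding procedure above can be sketched as follows (a minimal sketch in Python/NumPy; the function name and the toy thresholds are illustrative, not from the paper):

```python
import numpy as np

def find_label_box(binary, tv, th):
    """Locate the label's bounding box from projection profiles.

    binary -- 2-D array of 0/1 pixels (1 = white)
    tv, th -- thresholds applied to the PV / PH projection vectors
    """
    pv = binary.sum(axis=0)            # white pixels per column -> PV
    ph = binary.sum(axis=1)            # white pixels per row    -> PH
    pv = np.where(pv < tv, 0, pv)      # suppress weak columns
    ph = np.where(ph < th, 0, ph)      # suppress weak rows
    cols = np.nonzero(pv)[0]           # significant columns
    rows = np.nonzero(ph)[0]           # significant rows
    # left/right = first/last nonzero of PV; top/bottom = first/last of PH
    return cols[0], cols[-1], rows[0], rows[-1]
```

Isolated noise pixels contribute projection values below the thresholds, so they do not shift the recovered boundaries.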
In the extracted region obtained above, there is an extra white zone surrounding the label.
    • (1) Threshold the image extracted above to obtain a binary image.
    • (2) From the center of the binary image, search to the left, right, top, and bottom for continuous white pixels.
    • (3) If there are more than three consecutive white pixels, mark the one nearest to the center as the new position of the label boundary.
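Steps (2)-(3), the outward scan for a run of more than three white pixels, might look like this along one scan line (a sketch; `scan_from_centre` is a name I made up):

```python
import numpy as np

def scan_from_centre(line, start, step):
    """Walk from `start` towards one edge of a 0/1 scan line and return
    the index, nearest the centre, of the first run of more than three
    consecutive white pixels; return None if no such run exists."""
    run, i = 0, start
    while 0 <= i < len(line):
        run = run + 1 if line[i] == 1 else 0
        if run > 3:                       # found 4 consecutive white pixels
            return i - (run - 1) * step   # first (centre-nearest) pixel of the run
        i += step
    return None
```

Applying it to the centre row with step ±1 gives the left/right boundaries, and to the centre column for top/bottom.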
Label orientation correction and PIN field extraction
  • Because the correct orientation is necessary to identify the PIN field, further processing is needed.
  • A blank label of the commonly used format is digitized, converted into a binary image, and then saved as a template.
  • One way to recognize the orientation is to correlate the scanned image with the template image in eight different known orientations.
  • Among these eight matches, the one with the highest matching score indicates the real orientation of the input image.
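One way to implement the eight-orientation check (four rotations, each with an optional mirror); the function name and the use of a plain correlation coefficient are my assumptions, and the sketch assumes a square label already scaled to the template's size:

```python
import numpy as np

def correct_orientation(label, template):
    """Generate the eight orientations of `label` (4 rotations x optional
    left-right flip) and return the one that correlates best with the
    binary template."""
    candidates = []
    for flip in (False, True):
        img = np.fliplr(label) if flip else label
        for k in range(4):                       # 0, 90, 180, 270 degrees
            candidates.append(np.rot90(img, k))
    scores = [np.corrcoef(c.ravel(), template.ravel())[0, 1]
              for c in candidates]
    return candidates[int(np.argmax(scores))]
```

For a correctly scanned label the untouched candidate scores highest; any rotation or mirror shifts the maximum to the matching corrected candidate.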
To make the input image and the template image have the same resolution, the shorter side of the extracted label is normalized to the same length as that of the template image.
  • The size of the label template is Xt × Yt.
  • The label extracted from the input image is Xin × Yin.
  • Y represents the line number in the image, and X is the pixel number per line.
  • => scale factor s = min(Xt, Yt) / min(Xin, Yin)
  • => normalized label size: (s · Xin) × (s · Yin)
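The transcript elides the exact formulas; under the stated goal (shorter side of the label mapped onto the shorter side of the template), the normalization could be sketched as:

```python
def normalized_size(xin, yin, xt, yt):
    """Scale so the shorter side of the extracted label (Xin x Yin)
    matches the shorter side of the template (Xt x Yt); this pairing of
    sides is an assumption, not taken from the paper."""
    s = min(xt, yt) / min(xin, yin)      # uniform scale factor
    return round(xin * s), round(yin * s)
```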
Rotation
  • The 90°-rotated condition is checked first.
  • To avoid interference from the written content, only the first field of the label is used for matching.
  • The vertical-projection vectors are used to check whether the scanned image needs a left/right flip correction.
  • The horizontal-projection vectors are used to examine whether the scanned label has gone through a 180° rotation.
The correlation coefficients for the vertical and horizontal directions are in the range from -1 to 1.
  • In the extracted label image, if a 180° rotation does exist, then the first field will become the last field.
  • The first-field and last-field subimages are projected horizontally to obtain two vectors.
  • Correlation coefficients are computed between the template’s first-field projection vector and each of these two vectors.
  • If the last-field subimage gives the higher correlation, the label image was scanned with a 180° rotation and correction is applied; otherwise, the image is left unchanged.
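A sketch of the 180° test, assuming the bands holding the first and last fields have a known height and comparing horizontal projections with a correlation coefficient (function and parameter names are mine):

```python
import numpy as np

def needs_180_correction(label, template_first_field, field_height):
    """Compare the template's first-field horizontal projection with the
    projections of the label's top band and (row-reversed) bottom band;
    a better match at the bottom suggests an upside-down scan."""
    t = template_first_field.sum(axis=1)              # reference projection
    top = label[:field_height].sum(axis=1)            # top band
    bottom = label[-field_height:].sum(axis=1)[::-1]  # bottom band, reversed
    r_top = np.corrcoef(t, top)[0, 1]
    r_bottom = np.corrcoef(t, bottom)[0, 1]
    return r_bottom > r_top
```

Note that a 180° rotation only reverses the row order of the horizontal projection, so comparing the reversed bottom band against the template suffices; the column flip does not change per-row sums.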
Left-right mirrored
  • The vertical projection of the input image is reversed to create a new vector.
  • Correlation coefficients are computed between the template’s projection vector and the reversed vector.
  • The correlation with the unreversed vector is also computed for comparison.
  • If the reversed vector gives the higher correlation, the label image was scanned in a left-right-mirrored condition and correction by a left-right flip is needed; otherwise, the image is left unchanged.
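Similarly, the mirror test can be sketched by reversing the vertical (per-column) projection (a hypothetical helper, not the paper's exact formulation):

```python
import numpy as np

def is_left_right_mirrored(label_field, template_field):
    """Correlate the template's vertical (per-column) projection with the
    label's projection as-is and reversed; a better reversed match
    suggests a left-right mirrored scan."""
    t = template_field.sum(axis=0)        # column sums of the template
    v = label_field.sum(axis=0)           # column sums of the input
    r = np.corrcoef(t, v)[0, 1]
    r_rev = np.corrcoef(t, v[::-1])[0, 1]
    return r_rev > r
```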
Unconstrained handwritten numerals
  • A multilayer cluster neural network (MCNN) is used.
  • The MCNN structure achieves a 97.1% recognition rate.
  • The MCNN involves two phases: the training phase and the recognition phase.
Preprocessing
  • (1) Conversion of the input numeral to a binary image.
  • (2) Removal of spurious features by morphological filtering.
  • (3) Vertical and horizontal spatial histograms are used to close in on the region of the numeral; the region is cut out and resized to 16-by-16.
Feature extractor
  • The Kirsch edge detector is used in the feature extractor to detect directional line segments and generate feature maps.
  • Four directional feature maps, for horizontal (H), vertical (V), right-diagonal (R), and left-diagonal (L) lines, are created using the eight Kirsch masks.
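The eight Kirsch masks are rotations of one compass kernel; here is a sketch of generating them and combining opposite directions into four maps (the H/R/V/L pairing shown is one plausible choice, not necessarily the paper's):

```python
import numpy as np

# north-facing Kirsch kernel; the other seven are ring rotations of it
K0 = np.array([[5, 5, 5], [-3, 0, -3], [-3, -3, -3]])

def kirsch_masks():
    """Return the eight Kirsch compass masks by rotating the outer ring
    of K0 one position (45 degrees) at a time."""
    ring = [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2), (2, 1), (2, 0), (1, 0)]
    m, masks = K0.copy(), []
    for _ in range(8):
        masks.append(m.copy())
        vals = [m[p] for p in ring]
        vals = vals[-1:] + vals[:-1]          # rotate the ring by one step
        for p, v in zip(ring, vals):
            m[p] = v
    return masks

def directional_maps(img):
    """Correlate the image with all eight masks and take the max of each
    opposite pair to form the H, R, V, L feature maps."""
    masks = kirsch_masks()
    p = np.pad(img, 1)
    h, w = img.shape
    resp = [sum(m[dy, dx] * p[dy:dy + h, dx:dx + w]
                for dy in range(3) for dx in range(3)) for m in masks]
    return {d: np.maximum(resp[i], resp[i + 4])
            for i, d in enumerate("HRVL")}
```

Opposite compass directions respond to the same line orientation, which is why eight masks collapse into four directional maps.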
Classifier
  • The classifier contains a three-layer cluster neural network with five independent sub-networks.
Results
  • To evaluate the PIN-field extraction algorithm, two formats of label acquired from two different hospitals were tested.
  • Extraction can fail under three conditions: (1) image blurring, (2) too much tilting, and (3) labels not positioned at the corner.