Beltwide Cotton Conference, January 11-12, 2007, New Orleans, Louisiana. BIMODALITY OF COMPACT YARN HAIRINESS. Jiří Militký, Sayed Ibrahim and Dana Křemenáková, Technical University of Liberec, 46117 Liberec, Czech Republic. Introduction.
Hairiness is considered as the sum of the fibre ends and loops standing out from the main compact yarn body.
The most popular instrument is the Uster hairiness system, which characterizes hairiness by the H value, defined as the total length of all hairs within one centimeter of yarn.
The system introduced by Zweigle counts the number of hairs of defined lengths; the S3 value gives the number of hairs 3 mm and longer.
The information obtained from both systems is limited: the available methods either compress the data into a single value (H or S3) or convert the entire data set into a spectrogram, discarding the important spatial information.
Other, less known instruments such as the Shirley hairiness meter or the F-Hairmeter give very poor information about the distribution characteristics of yarn hairiness.
Some laboratory systems based on image processing decompose the hairiness into two exponential functions (Neckář's model); this method is time consuming and deals only with very short lengths.
[Figure: schematic of compact spinning. 1a) condensing element; 1b) perforated apron; VZ) condensing zone; 2) yarn balloon with new structure; 5) spindle; 6) ring carriage; 7) cop; 8) balloon limiter; 9) yarn guide; 10) roving; E) spinning triangle of compact spinning.]
The number and width of the histogram bars affect the shape of the estimated probability distribution.
The question is how to choose the bar width for a better evaluation.
[Figures: histogram with 83 columns and normal distribution fit; Gaussian curve fit with 20 columns; smooth (kernel) curve fit with h = 0.133.]
The kernel-type nonparametric estimate of the sample probability density function uses a bi-quadratic kernel function, which is symmetric around zero and has the properties of a PDF.
The optimal bandwidth h can be chosen:
1. based on the assumption of near normality,
2. by adaptive smoothing,
3. from the exploratory requirement (local hj) of equal probability in all classes.
The resulting optimal bandwidth was h = 0.1278 (computed in MATLAB 7.1 Release 14).
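As a rough illustration of this kind of estimate, the Python sketch below implements a fixed-bandwidth density estimate with the bi-quadratic (biweight) kernel K(u) = (15/16)(1 − u²)² for |u| ≤ 1. The sample data, the grid, and the bandwidth value are invented for the example; they are not the measured hairiness data, and the original work used MATLAB rather than Python.

```python
import numpy as np

def biweight_kde(x, data, h):
    """Fixed-bandwidth KDE with the bi-quadratic (biweight) kernel:
    K(u) = 15/16 * (1 - u^2)^2 for |u| <= 1, zero otherwise."""
    u = (x[:, None] - data[None, :]) / h
    k = np.where(np.abs(u) <= 1.0, 15.0 / 16.0 * (1.0 - u**2) ** 2, 0.0)
    return k.sum(axis=1) / (len(data) * h)

rng = np.random.default_rng(0)
# synthetic "hair length" sample: a mixture of short and long hairs
data = np.concatenate([rng.normal(1.0, 0.3, 700), rng.normal(2.5, 0.5, 300)])
grid = np.linspace(0.0, 4.0, 200)
f = biweight_kde(grid, data, h=0.13)   # bandwidth near the slide's optimum
print(np.trapz(f, grid))               # should integrate to about 1
```

The estimate is nonnegative everywhere and integrates to approximately one over the grid, which is the "properties of a PDF" requirement mentioned above.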
The bimodal distribution can be approximated by a mixture of two Gaussian distributions:

f(x) = w1 · N(μ1, σ1²) + w2 · N(μ2, σ2²),  with w1 + w2 = 1,

where w1 and w2 are the proportions of the shorter- and longer-hair distributions respectively, μ1 and μ2 are the means, and σ1 and σ2 are the standard deviations.
The H-yarn program, written in MATLAB, estimates these parameters by the least-squares method.
The frequency distribution and fitted bimodal distribution curve
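A least-squares fit of this two-Gaussian mixture to a binned frequency distribution can be sketched as follows. This is a minimal stand-in for the H-yarn program, not the original code: the data are synthetic, and SciPy's curve_fit replaces the MATLAB least-squares routine.

```python
import numpy as np
from scipy.optimize import curve_fit

def bimodal_pdf(x, w, m1, s1, m2, s2):
    """Mixture of two normals; the second component has weight 1 - w."""
    g1 = np.exp(-0.5 * ((x - m1) / s1) ** 2) / (s1 * np.sqrt(2 * np.pi))
    g2 = np.exp(-0.5 * ((x - m2) / s2) ** 2) / (s2 * np.sqrt(2 * np.pi))
    return w * g1 + (1 - w) * g2

rng = np.random.default_rng(1)
# synthetic sample: 60 % "short hair" component, 40 % "long hair" component
sample = np.concatenate([rng.normal(1.0, 0.3, 6000), rng.normal(2.5, 0.5, 4000)])
freq, edges = np.histogram(sample, bins=60, density=True)
centers = 0.5 * (edges[:-1] + edges[1:])

p0 = [0.5, 0.8, 0.2, 2.0, 0.4]            # rough starting guess
popt, _ = curve_fit(bimodal_pdf, centers, freq, p0=p0)
w, m1, s1, m2, s2 = popt
print(round(w, 2), round(m1, 2), round(m2, 2))
```

With a reasonable starting guess the fit recovers the proportions, means, and standard deviations of both sub-populations from the histogram alone.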
Bimodality can be assessed in several ways: by parameter estimation and likelihood comparison, by the separation between modes, and by comparing the unimodal Gaussian smoother closest to the data x with the closest bimodal Gaussian smoother.
In general, the Dip test is a test for bimodality; note, however, that a mixture of two distributions does not necessarily produce a bimodal distribution.
Probability density function (PDF) f(x),
cumulative distribution function (CDF) F(x),
and empirical CDF (ECDF) Fn(x).
Unimodal CDF: convex in (−∞, m), concave in [m, ∞)
Bimodal CDF: one bump
Let G∗ = arg min supx |Fn(x) − G(x)|, where
G(x) is a unimodal CDF.
Dip statistic: d = supx |Fn(x) − G∗(x)|
Dip statistic (for n = 18500): 0.0102
Critical value (n = 1000): 0.017
Critical value (n = 2000): 0.0112
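The flavor of the statistic can be illustrated with a simplified stand-in. Hartigan's dip searches over all unimodal CDFs for G∗; the sketch below instead measures the largest vertical gap between the ECDF and a single moment-fitted normal CDF, which is much cruder but shows why the gap grows for bimodal data. All data here are synthetic.

```python
import numpy as np
from scipy.stats import norm

def sup_distance_to_normal(sample):
    """Largest vertical gap between the ECDF and a normal CDF fitted by
    moments -- a simplified stand-in for the unimodal G* of the dip test."""
    x = np.sort(sample)
    n = len(x)
    ecdf_hi = np.arange(1, n + 1) / n     # ECDF value just after each point
    ecdf_lo = np.arange(0, n) / n         # ECDF value just before each point
    g = norm.cdf(x, loc=x.mean(), scale=x.std(ddof=1))
    return max(np.max(np.abs(ecdf_hi - g)), np.max(np.abs(ecdf_lo - g)))

rng = np.random.default_rng(2)
uni = rng.normal(0.0, 1.0, 5000)                                   # unimodal
bim = np.concatenate([rng.normal(-2, 0.5, 2500), rng.normal(2, 0.5, 2500)])
print(sup_distance_to_normal(uni) < sup_distance_to_normal(bim))   # → True
```

For the unimodal sample the gap is tiny, while the well-separated bimodal sample produces a large gap, mirroring how the dip statistic exceeds its critical value for bimodal data.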
Dip test statistic:
It is the largest vertical difference between the empirical cumulative distribution FE and the uniform distribution FU.
Points A and B are modes, shaded areas C and D are bumps, and areas E and F form a shoulder point.
This test essentially identifies a mixture of normal distributions; it can only reject unimodality.
For the single normal distribution model (μ, σ), the likelihood function is

Lu = ∏_{i=1..n} (1/(σ√(2π))) exp(−(xi − μ)² / (2σ²)),

where the data set contains n observations.
The mixture of two normal distributions assumes that each data point belongs to one of two sub-populations. The likelihood of this model is

Lb = ∏_{i=1..n} [ w · N(xi; μ1, σ1²) + (1 − w) · N(xi; μ2, σ2²) ].

The likelihood ratio can then be calculated from Lu and Lb as LR = −2 ln(Lu / Lb).
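A sketch of this comparison in Python: the log-likelihood of a single fitted normal versus that of a two-component mixture. The mixture is fitted here with a basic EM loop, which is an assumption of this example (the slides do not specify the fitting method for the likelihood test); the data are synthetic.

```python
import numpy as np
from scipy.stats import norm

def loglik_single(x):
    """Log-likelihood of a single normal fitted by maximum likelihood."""
    return norm.logpdf(x, x.mean(), x.std()).sum()

def loglik_mixture(x, n_iter=200):
    """Log-likelihood of a two-component normal mixture fitted by EM."""
    w = 0.5
    m1, m2 = np.percentile(x, 25), np.percentile(x, 75)
    s1 = s2 = x.std()
    for _ in range(n_iter):
        p1 = w * norm.pdf(x, m1, s1)
        p2 = (1 - w) * norm.pdf(x, m2, s2)
        r = p1 / (p1 + p2)                                  # E-step
        w = r.mean()                                        # M-step
        m1 = (r * x).sum() / r.sum()
        m2 = ((1 - r) * x).sum() / (1 - r).sum()
        s1 = np.sqrt((r * (x - m1) ** 2).sum() / r.sum())
        s2 = np.sqrt(((1 - r) * (x - m2) ** 2).sum() / (1 - r).sum())
    return np.log(w * norm.pdf(x, m1, s1) + (1 - w) * norm.pdf(x, m2, s2)).sum()

rng = np.random.default_rng(3)
x = np.concatenate([rng.normal(1.0, 0.3, 2000), rng.normal(2.5, 0.5, 1000)])
Lu, Lb = loglik_single(x), loglik_mixture(x)
lr = -2.0 * (Lu - Lb)      # likelihood-ratio statistic
print(lr > 0)
```

Since the single normal is nested in the mixture, Lb ≥ Lu, and a large LR value indicates that the two-component description fits substantially better.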
Analysis of Results V
Analysis of Results VI
Kernel density estimator: an adaptive kernel density estimator for univariate data. The choice of bandwidth h determines the amount of smoothing. For a long-tailed distribution, a fixed bandwidth suffers from being constant across the entire sample: a width suited to the main body of the data under-smooths the sparse tails, while a width suited to the tails over-smooths the body. Adaptive estimation varies h locally to avoid both problems.
The MATLAB routine AKDEST 1D evaluates the univariate adaptive kernel density estimate with a chosen kernel.
To test the independence hypothesis against a periodicity alternative, the cumulative periodogram C(fi) can be constructed.
For a white-noise series (i.i.d. normally distributed data), the plot of C(fi) against fi is scattered about the straight line joining the points (0, 0) and (0.5, 1). Periodicities tend to produce runs of neighboring values of I(fi) that are large, which appear as bumps departing from this line. The limit lines for the 95 % confidence interval of C(fi) are drawn at fixed distances on either side of this line.
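The construction can be sketched as follows: compute the periodogram ordinates I(fi) from the FFT, then accumulate and normalise them. For white noise the resulting curve stays close to the line from (0, 0) to (0.5, 1), i.e. C(f) ≈ 2f. The series here is synthetic white noise.

```python
import numpy as np

def cumulative_periodogram(x):
    """Normalised cumulative periodogram C(f_i); for white noise it
    should hug the straight line from (0, 0) to (0.5, 1)."""
    x = np.asarray(x, float)
    x = x - x.mean()
    n = len(x)
    I = np.abs(np.fft.rfft(x)) ** 2      # periodogram ordinates
    I = I[1:]                            # drop the zero frequency
    freqs = np.fft.rfftfreq(n)[1:]       # frequencies up to 0.5 (Nyquist)
    C = np.cumsum(I) / I.sum()
    return freqs, C

rng = np.random.default_rng(4)
freqs, C = cumulative_periodogram(rng.normal(size=4096))
dev = np.max(np.abs(C - 2 * freqs))      # deviation from the ideal line
print(dev)
```

A strongly periodic series would instead show a sharp jump in C at the corresponding frequency, pushing the curve outside the confidence band.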
Simply put, the autocorrelation function compares a signal with itself as a function of time shift.
The first-order autocorrelation coefficient R(1) can be evaluated from the series.
For a sufficiently long series L, the first-order autocorrelation of an independent series is close to zero.
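A minimal sketch of the sample autocorrelation coefficient, on synthetic data: for white noise R(1) is near zero, while a cumulated (random-walk) series is strongly autocorrelated.

```python
import numpy as np

def autocorr(x, lag=1):
    """Sample autocorrelation coefficient of the series at the given lag."""
    x = np.asarray(x, float)
    x = x - x.mean()
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

rng = np.random.default_rng(5)
white = rng.normal(size=10000)        # independent series
walk = np.cumsum(white)               # strongly correlated random walk
print(autocorr(white, 1), autocorr(walk, 1))
```

For an independent series of length n, R(1) fluctuates on the order of 1/√n, which is the basis for the usual significance bounds on autocorrelation plots.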
The fast Fourier transform (FFT) is used to pass from the time domain to the frequency domain and back, based on the Fourier transform and its inverse. Many types of spectrum analysis are included in the H-yarn program: the power spectral density (PSD), the amplitude spectrum, the autoregressive frequency spectrum, the moving-average frequency spectrum, the ARMA frequency spectrum, and others.
The cumulative sum of white, identically distributed noise is known as Brownian motion or a random walk. The Hurst exponent is a good estimator for measuring the fractal dimension. The Hurst relation is given as R/S ∝ N^H, where the parameter H is the Hurst exponent.
The fractal dimension can be measured as 2 − H; for the cumulative sum of white noise (H = 0.5) it is therefore 1.5. It is often more useful to express the fractal dimension as 1/H, using probability space rather than geometrical space.
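The Hurst exponent can be estimated by rescaled-range (R/S) analysis: for each window size n, compute the range R of the mean-adjusted cumulative sums divided by the standard deviation S, average over windows, and take the slope of log(R/S) against log(n). The sketch below is a basic version of this procedure on synthetic white noise; window sizes are arbitrary choices for the example, and small-sample bias can push the estimate slightly above 0.5.

```python
import numpy as np

def hurst_rs(x, window_sizes):
    """Hurst exponent from rescaled-range analysis: E[R/S] ~ c * n^H,
    so H is the slope of log(R/S) versus log(n)."""
    x = np.asarray(x, float)
    rs = []
    for n in window_sizes:
        vals = []
        for start in range(0, len(x) - n + 1, n):
            w = x[start:start + n]
            z = np.cumsum(w - w.mean())   # mean-adjusted cumulative sums
            r = z.max() - z.min()         # range of the cumulated series
            s = w.std()                   # standard deviation of the window
            if s > 0:
                vals.append(r / s)
        rs.append(np.mean(vals))
    slope, _ = np.polyfit(np.log(window_sizes), np.log(rs), 1)
    return slope

rng = np.random.default_rng(6)
H = hurst_rs(rng.normal(size=20000), [16, 32, 64, 128, 256, 512])
print(round(H, 2))    # close to 0.5 for uncorrelated noise
```

An H near 0.5 indicates an uncorrelated series (fractal dimension 2 − H = 1.5), H > 0.5 indicates persistence, and H < 0.5 anti-persistence.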