
Journals > Laser & Optoelectronics Progress
- Publication Date: Jan. 16, 2019
- Vol. 56, Issue 2, 020501 (2019)
- Publication Date: Jan. 16, 2019
- Vol. 56, Issue 2, 020601 (2019)
- Publication Date: Jan. 16, 2019
- Vol. 56, Issue 2, 020602 (2019)
- Publication Date: Jan. 16, 2019
- Vol. 56, Issue 2, 021001 (2019)
- Publication Date: Jan. 16, 2019
- Vol. 56, Issue 2, 021003 (2019)
Aiming at the known deficiencies of deep neural networks, namely complex training, reliance on parameter-tuning skill and experience, and difficult theoretical analysis, an improved image classification algorithm with high training efficiency, strong interpretability, and simple theoretical analysis is proposed, in which the principal component analysis network (PCANet) is used for feature extraction and a flat neural network (FNN) is used for classification. The model parameters are obtained by direct calculation, and the flat neural network adaptively determines its number of nodes according to the training dataset. When nodes are added, the model does not need to be retrained; only a local parameter adjustment is required to update it. The experimental results show that the proposed model trains rapidly and is competitive in recognition accuracy with other unsupervised classification algorithms and traditional deep neural networks.
- Publication Date: Jan. 16, 2019
- Vol. 56, Issue 2, 021004 (2019)
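The closed-form training idea in the abstract above can be sketched in a few lines: convolution filters are computed directly from image patches by PCA, and the classifier weights follow from ridge regression solved in one step, so no iterative training is needed. This is an illustrative toy, not the authors' PCANet/FNN implementation; the shapes, parameter names (`patch`, `lam`), and the toy dataset are all assumptions.

```python
import numpy as np

def pca_filters(images, k=4, patch=5):
    """Learn k convolution filters by PCA over all patch x patch patches."""
    patches = []
    for img in images:
        for i in range(img.shape[0] - patch + 1):
            for j in range(img.shape[1] - patch + 1):
                patches.append(img[i:i+patch, j:j+patch].ravel())
    X = np.array(patches)
    X -= X.mean(axis=0)                       # remove the patch mean
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    return vt[:k].reshape(k, patch, patch)    # top-k principal directions

def features(img, filters):
    """Convolve (valid mode) with each PCA filter and flatten the responses."""
    p = filters.shape[1]
    h, w = img.shape[0] - p + 1, img.shape[1] - p + 1
    out = np.empty((len(filters), h, w))
    for k, f in enumerate(filters):
        for i in range(h):
            for j in range(w):
                out[k, i, j] = np.sum(img[i:i+p, j:j+p] * f)
    return out.ravel()

def train_readout(F, Y, lam=1e-2):
    """Closed-form ridge-regression output weights: direct calculation."""
    return np.linalg.solve(F.T @ F + lam * np.eye(F.shape[1]), F.T @ Y)

# Toy data: two trivially separable groups of random 8x8 "images".
rng = np.random.default_rng(0)
imgs = [rng.random((8, 8)) + (0.0 if i < 10 else 2.0) for i in range(20)]
labels = np.array([0] * 10 + [1] * 10)
filt = pca_filters(imgs)
F = np.array([features(im, filt) for im in imgs])
Y = np.eye(2)[labels]                         # one-hot targets
W = train_readout(F, Y)
pred = (F @ W).argmax(axis=1)
```

Because the readout is a linear solve, adding nodes would only require updating this system locally rather than retraining from scratch, which mirrors the incremental-update property claimed in the abstract.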
- Publication Date: Jan. 16, 2019
- Vol. 56, Issue 2, 021005 (2019)
- Publication Date: Jan. 16, 2019
- Vol. 56, Issue 2, 021101 (2019)
- Publication Date: Jan. 16, 2019
- Vol. 56, Issue 2, 021201 (2019)
- Publication Date: Jan. 16, 2019
- Vol. 56, Issue 2, 021202 (2019)
- Publication Date: Jan. 16, 2019
- Vol. 56, Issue 2, 021203 (2019)
- Publication Date: Jan. 16, 2019
- Vol. 56, Issue 2, 021204 (2019)
- Publication Date: Jan. 16, 2019
- Vol. 56, Issue 2, 021205 (2019)
- Publication Date: Jan. 16, 2019
- Vol. 56, Issue 2, 021401 (2019)
- Publication Date: Jan. 16, 2019
- Vol. 56, Issue 2, 021501 (2019)
- Publication Date: Jan. 16, 2019
- Vol. 56, Issue 2, 021502 (2019)
Aiming at the problem that existing person re-identification algorithms adapt poorly to variations in illumination, pose, and occlusion, a novel person re-identification algorithm based on feature fusion and subspace learning is proposed, in which the histogram of oriented gradients (HOG) feature and the hue-saturation-value (HSV) histogram are first extracted from the entire pedestrian image as global features, and then the color naming (CN) feature and the two-scale scale-invariant local ternary pattern (SILTP) feature are extracted within a sliding window. In addition, to give the algorithm better scale invariance, the original images are first down-sampled twice and the above features are also extracted from the sampled images. After feature extraction, a kernel function transforms the original feature space into a nonlinear space, in which a subspace and, simultaneously, a similarity function are learned. Experiments on three public datasets show that the proposed algorithm improves the re-identification rate considerably.
- Publication Date: Jan. 16, 2019
- Vol. 56, Issue 2, 021503 (2019)
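The feature-fusion step can be illustrated with two of the descriptors named above: a global HSV histogram and a HOG-like gradient-orientation histogram, fused by concatenation and compared through an RBF kernel that stands in for the learned kernel subspace and similarity function. The bin counts and `gamma` are assumed values, not taken from the paper, and the CN and SILTP descriptors are omitted for brevity.

```python
import numpy as np

def hsv_hist(img_hsv, bins=8):
    """Global HSV histogram over the whole pedestrian image (per channel)."""
    return np.concatenate([
        np.histogram(img_hsv[..., c], bins=bins, range=(0.0, 1.0),
                     density=True)[0]
        for c in range(3)])

def grad_orient_hist(gray, bins=9):
    """HOG-like histogram of unsigned gradient orientations, magnitude-weighted."""
    gy, gx = np.gradient(gray)
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)   # fold into [0, pi)
    hist, _ = np.histogram(ang, bins=bins, range=(0.0, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-9)

def fused_feature(img_hsv, gray):
    """Fuse the color and gradient descriptors into one vector."""
    return np.concatenate([hsv_hist(img_hsv), grad_orient_hist(gray)])

def rbf_similarity(f1, f2, gamma=1.0):
    """Kernel-space similarity standing in for the learned metric."""
    return np.exp(-gamma * np.sum((f1 - f2) ** 2))

# Two random stand-ins for pedestrian crops already converted to HSV.
rng = np.random.default_rng(0)
a = rng.random((48, 24, 3))
b = rng.random((48, 24, 3))
fa = fused_feature(a, a.mean(axis=2))
fb = fused_feature(b, b.mean(axis=2))
```

The same extraction would be repeated on the twice down-sampled image to obtain the multi-scale version described in the abstract.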
- Publication Date: Jan. 16, 2019
- Vol. 56, Issue 2, 021601 (2019)
- Publication Date: Jan. 16, 2019
- Vol. 56, Issue 2, 021602 (2019)
- Publication Date: Jan. 16, 2019
- Vol. 56, Issue 2, 021603 (2019)
- Publication Date: Jan. 16, 2019
- Vol. 56, Issue 2, 021701 (2019)
Aiming at the problems of difficult feature extraction, poor classification accuracy, and few object classes in conventional multi-class remote sensing image classification, the feasibility of convolutional neural network (CNN) models and the recognition performance of different CNN models are studied for the multi-class recognition of hyperspectral remote sensing objects. The datasets are collected from the Vaihingen benchmark provided by the International Society for Photogrammetry and Remote Sensing (ISPRS) and from Google Earth. After dataset-I containing six categories of ground objects is built, dataset-II and dataset-III are built by expanding to ten and fourteen categories, respectively. Through image pre-processing, network structure design, model parameter tuning, and model comparison, classification accuracies above 95% are achieved on all three datasets. By analyzing the influence of different CNN models on the multi-class recognition of hyperspectral remote sensing objects, the feasibility and high recognition ability of CNN models are confirmed. The experimental results provide a reference for the application of CNN models to the multi-class recognition of hyperspectral remote sensing objects.
- Publication Date: Jan. 16, 2019
- Vol. 56, Issue 2, 021702 (2019)
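A minimal forward pass shows the kind of CNN pipeline such a study compares: convolution, ReLU, max pooling, and a softmax readout over six ground-object classes as in dataset-I. The random weights and the random 16x16 single-band patch are stand-ins; the actual network structures and parameters of the paper are not reproduced here.

```python
import numpy as np

def conv2d(x, w):
    """Valid 2-D convolution of a single-channel image with out_ch filters."""
    oc, kh, kw = w.shape
    h, wd = x.shape[0] - kh + 1, x.shape[1] - kw + 1
    out = np.empty((oc, h, wd))
    for c in range(oc):
        for i in range(h):
            for j in range(wd):
                out[c, i, j] = np.sum(x[i:i+kh, j:j+kw] * w[c])
    return out

def max_pool(x, s=2):
    """Non-overlapping s x s max pooling on a (channels, H, W) tensor."""
    c, h, w = x.shape
    return x[:, :h//s*s, :w//s*s].reshape(c, h//s, s, w//s, s).max(axis=(2, 4))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def forward(img, w_conv, w_fc):
    """conv -> ReLU -> max-pool -> flatten -> fully connected -> softmax."""
    a = np.maximum(conv2d(img, w_conv), 0.0)
    a = max_pool(a).ravel()
    return softmax(a @ w_fc)

rng = np.random.default_rng(0)
img = rng.random((16, 16))                          # one spectral-band patch
w_conv = rng.standard_normal((4, 3, 3)) * 0.1       # 4 filters of 3x3
w_fc = rng.standard_normal((4 * 7 * 7, 6)) * 0.1    # six ground-object classes
probs = forward(img, w_conv, w_fc)                  # class probabilities
```

Training (back-propagation, data augmentation, model comparison) is left out; the sketch only traces the shapes through one candidate architecture.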
- Publication Date: Jan. 16, 2019
- Vol. 56, Issue 2, 022201 (2019)
- Publication Date: Jan. 16, 2019
- Vol. 56, Issue 2, 022202 (2019)
- Publication Date: Jan. 16, 2019
- Vol. 56, Issue 2, 022401 (2019)
- Publication Date: Jan. 16, 2019
- Vol. 56, Issue 2, 020001 (2019)
- Publication Date: Jan. 16, 2019
- Vol. 56, Issue 2, 023001 (2019)
- Publication Date: Jan. 16, 2019
- Vol. 56, Issue 2, 023002 (2019)
- Publication Date: Jan. 16, 2019
- Vol. 56, Issue 2, 023003 (2019)
Aiming at the immaturity of eye-gaze tracking as a human-machine interaction technique, a tabletop two-eye gaze tracking method based on the pupil shape in stereo-vision space is proposed. The pupil center is located preliminarily from the low grey-value distribution. A polar diagram of the radial derivative in the pupil area is used to extract the pupil edge point coordinates, and random sample consensus (RANSAC) is used to fit the pupil edge with a suitable ellipse. The edge points of the two pupils are matched with the oriented FAST and rotated BRIEF (ORB) algorithm, and their spatial coordinates are obtained from the two-eye stereo vision model. The least-squares method is finally adopted to calculate the pupil shape in space, from which the gaze direction is derived. The experimental results show that the pupil center is located at 300 frame/s, the two-eye gaze is tracked at 15 frame/s, and the maximum gaze tracking error is 2.6°. The proposed method thus has good accuracy, robustness, and real-time performance, and can be used in the field of human-machine interaction.
- Publication Date: Jan. 16, 2019
- Vol. 56, Issue 2, 023301 (2019)
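The RANSAC ellipse-fitting step described above can be sketched as a least-squares conic fit applied to random five-point samples, keeping the model with the most inliers and refitting on them; the pupil center then follows from a 2x2 linear system. The threshold, iteration count, and synthetic edge data are assumptions, and the discriminant check that the fitted conic is actually an ellipse is omitted for brevity.

```python
import numpy as np

def fit_conic(pts):
    """Least-squares conic  a x^2 + b xy + c y^2 + d x + e y + f = 0  via SVD."""
    x, y = pts[:, 0], pts[:, 1]
    A = np.column_stack([x*x, x*y, y*y, x, y, np.ones_like(x)])
    _, _, vt = np.linalg.svd(A)
    return vt[-1]                             # null-space direction of A

def ransac_ellipse(pts, iters=200, tol=1e-2, rng=None):
    """RANSAC: fit conics to random 5-point samples, keep the most inliers."""
    if rng is None:
        rng = np.random.default_rng(0)
    best, best_inl = None, -1
    for _ in range(iters):
        sample = pts[rng.choice(len(pts), 5, replace=False)]
        p = fit_conic(sample)
        x, y = pts[:, 0], pts[:, 1]
        r = np.abs(p[0]*x*x + p[1]*x*y + p[2]*y*y + p[3]*x + p[4]*y + p[5])
        inl = np.count_nonzero(r < tol)       # algebraic-distance inliers
        if inl > best_inl:
            best, best_inl = pts[r < tol], inl
    return fit_conic(best)                    # refit on the best inlier set

def ellipse_center(p):
    """Center of the conic: zero of its gradient, a 2x2 linear system."""
    a, b, c, d, e, _ = p
    return np.linalg.solve([[2*a, b], [b, 2*c]], [-d, -e])

# Synthetic pupil edge: an ellipse centered at (3, -1) plus gross outliers.
rng = np.random.default_rng(1)
t = np.linspace(0, 2 * np.pi, 80)
edge = np.column_stack([3 + 2.0 * np.cos(t), -1 + 1.2 * np.sin(t)])
outliers = rng.uniform(-6, 8, size=(20, 2))
conic = ransac_ellipse(np.vstack([edge, outliers]), rng=rng)
cx, cy = ellipse_center(conic)
```

In the method above, this fit would run independently on each eye before the ORB matching and stereo triangulation stages.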