Furthermore, initial application tests were conducted on our emotional social robot system, in which the robot identified the emotions of eight volunteers by analyzing their facial expressions and body movements.
Deep matrix factorization handles high-dimensional, noisy data effectively and shows great potential for dimensionality reduction. This article presents a novel, robust, and effective deep matrix factorization framework that addresses high-dimensional tumor classification by constructing a double-angle feature from single-modal gene data, improving both effectiveness and robustness. The proposed framework comprises three key components: deep matrix factorization, double-angle decomposition, and feature purification. First, a robust deep matrix factorization (RDMF) model is proposed within the feature-learning framework to attain more stable classification and extract better features from noisy data. Second, a double-angle feature (RDMF-DA) is constructed by combining RDMF features with sparse features, providing a more complete description of the gene data. Third, a gene selection method based on sparse representation (SR) and gene coexpression is introduced to purify the RDMF-DA representation and reduce the influence of redundant genes. Finally, the proposed algorithm is applied to gene expression profiling datasets, and its performance is thoroughly validated.
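The abstract does not give implementation details, so the following is a minimal sketch of the deep-factorization idea only: a two-layer factorization X ≈ W1 W2 H fitted by plain gradient descent on the squared Frobenius loss. The function name, layer sizes, and learning rate are hypothetical, and the paper's robust loss (RDMF) and sparse branch (RDMF-DA) are not reproduced.

```python
import numpy as np

def deep_mf(X, dims=(64, 16), iters=1000, lr=5e-4, seed=0):
    """Two-layer deep matrix factorization X ~ W1 @ W2 @ H via gradient
    descent on 0.5*||X - W1 W2 H||_F^2. Hypothetical sketch: the paper's
    RDMF adds a robust loss, and RDMF-DA concatenates a sparse feature
    branch, neither of which is shown here."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    d1, d2 = dims
    W1 = 0.1 * rng.standard_normal((m, d1))
    W2 = 0.1 * rng.standard_normal((d1, d2))
    H = 0.1 * rng.standard_normal((d2, n))
    for _ in range(iters):
        R = W1 @ W2 @ H - X                  # reconstruction residual
        g1 = R @ (W2 @ H).T                  # gradients of the loss
        g2 = W1.T @ R @ H.T
        gH = (W1 @ W2).T @ R
        W1, W2, H = W1 - lr * g1, W2 - lr * g2, H - lr * gH
    return W1, W2, H                         # H: deep sample features

X = np.abs(np.random.default_rng(1).standard_normal((500, 40)))  # toy "genes"
_, _, H = deep_mf(X)
print(H.shape)  # (16, 40): one 16-D feature vector per sample
```

In this reading, the columns of H are the low-dimensional sample features passed to the downstream classifier after purification.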
Neuropsychological studies indicate that high-level cognitive functions arise from cooperative activity among multiple brain functional areas. To capture the brain's complex activity patterns within and between functional areas, we propose LGGNet, a novel neurologically inspired graph neural network that learns local-global-graph (LGG) representations of EEG for brain-computer interfaces (BCI). The input layer of LGGNet consists of temporal convolutions with multiscale 1-D convolutional kernels and a kernel-level attentive fusion, which capture the temporal dynamics of EEG and serve as input to the proposed local- and global-graph filtering layers. Using a neurophysiologically meaningful set of local and global graphs, LGGNet models the complex relationships within and among the brain's distinct functional areas. The proposed method is evaluated under a rigorous nested cross-validation setting on three publicly available datasets covering four types of cognitive classification tasks: attention, fatigue, emotion recognition, and preference classification. LGGNet is compared against state-of-the-art methods, namely DeepConvNet, EEGNet, R2G-STNN, TSception, RGNN, AMCNN-DGCN, HRNN, and GraphNet. The results show that LGGNet outperforms these methods, with statistically significant improvements in most cases, indicating that integrating prior neuroscience knowledge into neural network design demonstrably enhances classification performance. The source code is available at https://github.com/yi-ding-cs/LGG.
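To make the local/global-graph idea concrete, here is a minimal NumPy sketch, not LGGNet itself: multiscale temporal filtering stands in for the learned multiscale convolutions, and one symmetric-normalized propagation step stands in for each graph filtering layer. The channel grouping, kernel scales, and adjacency choices are all assumptions for illustration.

```python
import numpy as np

def temporal_multiscale(x, scales=(0.5, 0.25, 0.125), fs=128):
    """Multiscale 1-D temporal filtering of EEG via moving averages.
    x: (channels, time). A fixed-kernel stand-in for LGGNet's learned
    multiscale temporal convolutions with attentive fusion."""
    feats = []
    for s in scales:
        k = max(1, int(s * fs))
        kernel = np.ones(k) / k
        feats.append(np.apply_along_axis(
            lambda c: np.convolve(c, kernel, mode="same"), 1, x))
    return np.concatenate(feats, axis=1)   # (channels, time * n_scales)

def graph_filter(h, adj):
    """One graph-filtering step: add self-loops, symmetrically normalize
    the adjacency, propagate node features, apply a nonlinearity."""
    a = adj + np.eye(adj.shape[0])
    d = np.diag(1.0 / np.sqrt(a.sum(1)))
    return np.tanh(d @ a @ d @ h)

# Toy example: 8-channel EEG, two local "functional areas" of 4 channels.
rng = np.random.default_rng(0)
eeg = rng.standard_normal((8, 256))
local_adj = np.kron(np.eye(2), np.ones((4, 4)))   # within-area edges
global_adj = np.ones((8, 8))                      # between-area edges
h = temporal_multiscale(eeg)
h = graph_filter(h, local_adj)    # local-graph filtering
h = graph_filter(h, global_adj)   # global-graph filtering
print(h.shape)
```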
Tensor completion (TC) recovers the missing entries of a tensor by exploiting its low-rank structure. Existing algorithms perform well in the presence of either Gaussian or impulsive noise, but rarely both. Generally, Frobenius-norm-based methods achieve excellent performance under additive Gaussian noise, but their recovery degrades severely under impulsive noise. Conversely, lp-norm-based algorithms (and their variants) attain high accuracy in the presence of gross errors, yet underperform Frobenius-norm-based methods under Gaussian noise. A technique that is robust to both Gaussian and impulsive noise is therefore desirable. In this work, we adopt a capped Frobenius norm to restrain outliers, which resembles the truncated least-squares loss function. The upper bound of the capped Frobenius norm is updated automatically at each iteration using the normalized median absolute deviation. Consequently, the method achieves better performance than the lp-norm on outlier-contaminated data and attains accuracy comparable to the Frobenius norm under Gaussian noise, without parameter tuning. We then apply the half-quadratic theory to convert the nonconvex problem into a tractable multivariable problem that is convex with respect to each individual variable. The resulting task is solved by the proximal block coordinate descent (PBCD) method, and the convergence of the proposed algorithm is proved: the objective function value is guaranteed to converge, and a subsequence of the variable sequence converges to a critical point. Experiments on real-world images and videos demonstrate the superior recovery performance of our method compared with state-of-the-art algorithms. The MATLAB code for robust tensor completion is available at https://github.com/Li-X-P/Code-of-Robust-Tensor-Completion.
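The following is a matrix (rather than tensor) sketch of the key mechanism, assuming a half-quadratic reformulation in which the capped loss reduces to binary weights: entries whose residual exceeds a MAD-derived cap get zero weight, and each factor update is a convex weighted ridge problem, in the spirit of PBCD. The multiplier `c`, regularizer `reg`, and all names are assumed, not the paper's.

```python
import numpy as np

def robust_tc_step(X, mask, L, R, c=2.5, reg=1e-3):
    """One PBCD-style sweep for robust completion under a capped
    Frobenius loss. The cap adapts via the normalized median absolute
    deviation (MAD) of current residuals; entries beyond it get zero
    weight (half-quadratic form of the truncated LS loss)."""
    res = X - L @ R
    r = res[mask.astype(bool)]
    sigma = 1.4826 * np.median(np.abs(r - np.median(r)))  # normalized MAD
    W = mask * (np.abs(res) <= c * sigma)                 # capped weights
    for i in range(L.shape[0]):          # convex row subproblems for L
        d = W[i]
        A = (R * d) @ R.T + reg * np.eye(R.shape[0])
        L[i] = np.linalg.solve(A, (R * d) @ X[i])
    for j in range(R.shape[1]):          # convex column subproblems for R
        d = W[:, j]
        A = (L.T * d) @ L + reg * np.eye(L.shape[1])
        R[:, j] = np.linalg.solve(A, (L.T * d) @ X[:, j])
    return L, R

# Toy run: rank-5 data, 30% missing entries, 5% gross outliers.
rng = np.random.default_rng(0)
X0 = rng.standard_normal((60, 5)) @ rng.standard_normal((5, 40)) / 5**0.5
mask = (rng.random(X0.shape) < 0.7).astype(float)
X = X0.copy()
X[rng.random(X0.shape) < 0.05] += 20.0          # impulsive noise
L = 0.1 * rng.standard_normal((60, 5))
R = 0.1 * rng.standard_normal((5, 40))
for _ in range(30):
    L, R = robust_tc_step(X * mask, mask, L, R)
print(np.linalg.norm(mask * (X0 - L @ R)) / np.linalg.norm(mask * X0))
```

Because the weights are recomputed from the residual MAD at every sweep, the cap tightens as the fit improves, which is what lets the method track Gaussian-noise accuracy while still rejecting outliers.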
Hyperspectral anomaly detection, which distinguishes anomalous pixels from their surroundings by their unique spatial and spectral characteristics, has attracted considerable interest owing to its wide range of applications. This article introduces a novel hyperspectral anomaly detection algorithm based on an adaptive low-rank transform, in which the input hyperspectral image (HSI) is divided into background, anomaly, and noise tensors. To fully exploit the spatial-spectral information, the background tensor is represented as the product of a transformed tensor and a low-rank matrix. A low-rank constraint is imposed on the frontal slices of the transformed tensor to characterize the spatial-spectral correlation of the HSI background. In addition, we initialize a matrix of predefined size and minimize its l2,1-norm to obtain an adaptive low-rank matrix. The group sparsity of anomalous pixels in the anomaly tensor is modeled with an l2,1,1-norm constraint. We integrate all regularization terms and a fidelity term into a nonconvex problem and develop a proximal alternating minimization (PAM) algorithm to solve it. The sequence generated by the PAM algorithm is proved to converge to a critical point. Experimental results on four widely used datasets demonstrate the superiority of the proposed anomaly detector over several state-of-the-art methods.
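One concrete ingredient of such PAM schemes is the proximal operator of the group-sparsity penalty. The sketch below shows the matrix l2,1 case applied column-wise (one column per pixel spectrum), a simplification of the paper's tensor l2,1,1 constraint; the threshold and all variable names are illustrative assumptions.

```python
import numpy as np

def prox_l21(V, lam):
    """Proximal operator of lam * ||.||_{2,1}, applied column-wise:
    each column (e.g., one pixel's spectrum) is shrunk as a group,
    so non-anomalous pixels are zeroed out entirely."""
    norms = np.linalg.norm(V, axis=0, keepdims=True)
    scale = np.maximum(1.0 - lam / np.maximum(norms, 1e-12), 0.0)
    return V * scale

# Toy use inside one PAM-style update of the anomaly term S:
# S <- prox of the group-sparsity penalty applied to the residual.
rng = np.random.default_rng(0)
hsi = rng.standard_normal((100, 400))               # (bands, pixels)
background = 0.1 * rng.standard_normal((100, 400))  # current estimate
S = prox_l21(hsi - background, lam=11.0)            # lam tuned to toy scale
print((np.linalg.norm(S, axis=0) > 0).sum(), "pixels flagged as anomalous")
```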
This article investigates the recursive filtering problem for networked time-varying systems with randomly occurring measurement outliers (ROMOs), where the ROMOs appear as large-amplitude disturbances on the acquired measurements. A new model employing a set of independent and identically distributed stochastic scalars is presented to describe the dynamical behavior of the ROMOs. A probabilistic encoding-decoding scheme is used to convert the measurement signal into digital form. To avoid the performance degradation that outlier measurements cause during filtering, a novel recursive filtering algorithm is developed that uses an active detection approach to exclude outlier-contaminated measurements from the filtering process. The time-varying filter parameters are derived by a recursive computation that minimizes an upper bound on the filtering error covariance, and the uniform boundedness of this bound is analyzed via the stochastic analysis method. Finally, two numerical examples are presented to confirm the effectiveness and correctness of the proposed filter design method.
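As a rough illustration of active outlier detection inside a recursion, here is a Kalman-style filter that gates on the normalized innovation and skips the measurement update when the gate is exceeded. This is a generic sketch under assumed linear-Gaussian dynamics; the paper's filter additionally models the encoding-decoding channel and the ROMO statistics, and the gate value is an assumption.

```python
import numpy as np

def robust_recursive_filter(A, C, Q, Rv, x0, P0, ys, gate=3.0):
    """Kalman-style recursion with active outlier detection: a
    measurement is discarded (prediction only) when its normalized
    innovation exceeds `gate`."""
    x, P = x0, P0
    out = []
    for y in ys:
        x, P = A @ x, A @ P @ A.T + Q          # time update
        e = y - C @ x                          # innovation
        S = C @ P @ C.T + Rv                   # innovation covariance
        if float(e @ np.linalg.solve(S, e)) <= gate**2:
            K = P @ C.T @ np.linalg.inv(S)     # measurement update
            x = x + K @ e
            P = (np.eye(len(x)) - K @ C) @ P
        out.append(x.copy())                   # else: outlier, skip update
    return np.array(out)

# Toy run: constant-velocity model with occasional large outliers.
rng = np.random.default_rng(0)
A = np.array([[1.0, 1.0], [0.0, 1.0]]); C = np.array([[1.0, 0.0]])
Q = 0.01 * np.eye(2); Rv = 0.1 * np.eye(1)
x_true, ys = np.zeros(2), []
for t in range(50):
    x_true = A @ x_true + rng.normal(0, 0.1, 2)
    y = C @ x_true + rng.normal(0, 0.3, 1)
    if t % 10 == 5:
        y += 50.0                              # randomly occurring outlier
    ys.append(y)
xs = robust_recursive_filter(A, C, Q, Rv, np.zeros(2), np.eye(2), ys)
print(xs[-1])
```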
Multiparty learning improves learning performance by aggregating data from multiple parties. However, directly merging multiparty data cannot satisfy privacy requirements, which has made privacy-preserving machine learning (PPML) a vital research topic in multiparty learning. Nevertheless, existing PPML methods typically cannot meet several requirements at once, such as security, accuracy, efficiency, and breadth of applicability. To address these problems, this article introduces a new PPML method built upon secure multiparty interactive protocols, the multiparty secure broad learning system (MSBLS), together with a security analysis. Specifically, the proposed method uses an interactive protocol and random mapping to generate the mapped dataset features, and then trains a neural network classifier via efficient broad learning. To the best of our knowledge, this is the first privacy-computing method that jointly employs secure multiparty computation and neural networks. Theoretically, the method preserves model accuracy under encryption, and its computation is very fast. Three classical datasets are used to verify our conclusions.
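For orientation, the sketch below shows only the plaintext core of a broad learning system: random mapped-feature groups, random enhancement nodes, and a closed-form ridge solution for the output weights. In MSBLS the mapped features would be computed jointly by the parties via the secure interactive protocol, which is omitted here; all sizes and names are assumed values.

```python
import numpy as np

def broad_learning_fit(X, Y, n_maps=10, map_dim=20, enh_dim=100,
                       reg=1e-3, seed=0):
    """Plaintext broad learning: random feature maps + enhancement
    nodes, then ridge regression for the output weights. The secure
    multiparty feature computation of MSBLS is not reproduced."""
    rng = np.random.default_rng(seed)
    Wz = [rng.standard_normal((X.shape[1], map_dim)) for _ in range(n_maps)]
    We = rng.standard_normal((n_maps * map_dim, enh_dim))

    def features(Xin):
        Z = np.hstack([np.tanh(Xin @ w) for w in Wz])  # mapped features
        return np.hstack([Z, np.tanh(Z @ We)])         # + enhancement nodes

    A = features(X)
    W = np.linalg.solve(A.T @ A + reg * np.eye(A.shape[1]), A.T @ Y)
    return lambda Xnew: features(Xnew) @ W             # classifier scores

# Toy usage on a linearly separable task with one-hot labels.
rng = np.random.default_rng(1)
X = rng.standard_normal((200, 8))
Y = np.eye(2)[(X[:, 0] + X[:, 1] > 0).astype(int)]
predict = broad_learning_fit(X, Y)
print("train accuracy:", (predict(X).argmax(1) == Y.argmax(1)).mean())
```

The closed-form output-weight solve is what makes broad learning fast relative to backpropagation, which is also why it pairs well with costly secure computation.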
Recommendation techniques based on heterogeneous information network (HIN) embeddings face notable challenges. In particular, unstructured attributes such as text-based summaries and descriptions of users and items introduce data heterogeneity that is difficult to handle in HINs. In this article, we propose SemHE4Rec, a novel recommendation method based on semantic-aware HIN embeddings, to address these challenges. Our SemHE4Rec model defines two embedding techniques for effectively learning representations of users and items, taking into account their relations within the HIN. These structurally rich user and item representations are then used to facilitate the matrix factorization process. The first embedding technique employs a traditional co-occurrence representation learning (CoRL) model to learn the co-occurrence patterns of the structural features of users and items.
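To illustrate the co-occurrence idea, here is a minimal stand-in, not the paper's CoRL model: user-item co-occurrence counts are reweighted with positive PMI and factorized by a truncated SVD to yield structural embeddings that a matrix-factorization scorer can consume. Every function, dimension, and dataset here is a hypothetical toy.

```python
import numpy as np

def cooccurrence_embeddings(interactions, n_users, n_items, dim=16):
    """SVD-based stand-in for co-occurrence representation learning:
    count co-occurrences, apply positive PMI, factorize."""
    C = np.zeros((n_users, n_items))
    for u, i in interactions:
        C[u, i] += 1.0
    total = C.sum()
    expected = np.maximum(np.outer(C.sum(1), C.sum(0)), 1e-12)
    pmi = np.log(np.maximum(C * total, 1e-12) / expected)
    ppmi = np.maximum(pmi, 0.0)                    # positive PMI
    U, s, Vt = np.linalg.svd(ppmi, full_matrices=False)
    k = min(dim, len(s))
    user_emb = U[:, :k] * np.sqrt(s[:k])
    item_emb = Vt[:k].T * np.sqrt(s[:k])
    return user_emb, item_emb

# Toy usage: embeddings feed a matrix-factorization-style scorer.
inter = [(0, 1), (0, 2), (1, 2), (2, 0), (2, 3), (3, 3)]
ue, ie = cooccurrence_embeddings(inter, n_users=4, n_items=4, dim=2)
scores = ue @ ie.T            # predicted user-item affinities
print(scores.round(2))
```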