Showing Off on Linux with cmatrix


  1. Install the ncurses dependencies
yum install ncurses*
  1. Download the screensaver source tarball
wget https://jaist.dl.sourceforge.net/project/cmatrix/cmatrix/1.2a/cmatrix-1.2a.tar.gz
  1. Extract the source tarball
tar -zxvf cmatrix-1.2a.tar.gz
  1. Install gcc and make
yum -y install gcc
yum -y install gcc-c++
yum install make
  1. Enter the source directory
cd cmatrix-1.2a/
  1. Generate the build files
./configure --prefix=/opt/cmatrix/
  1. Compile and install (the binary lands in /opt/cmatrix/bin/)
make && make install
  1. Run the command
cmatrix [-abBflhnosVx] [-u delay] [-C color]

-a		asynchronous scrolling (default)
-b		random bold characters
-B		all characters bold
-f		force the Linux $TERM type to be on
-l		Linux mode (sets the "matrix.fnt" font in the console)
-o		old-style scrolling
-n		no bold characters
-s		"screensaver" mode: exits on the first keystroke
-x		X window mode
-u		update delay (0-9), i.e. the scroll speed
-C		text color: green (default), red, blue, white, yellow, cyan
-h		print help
-V		print version information

While it is running, press q or Ctrl+C to quit.

One example of the effect:


Introduction
============

This is a class for symmetric-matrix computations. It can be used for symmetric matrix diagonalization and inversion. Given the covariance matrix, users can apply the class to principal component analysis (PCA) and Fisher discriminant analysis (FDA). It can also be used for some elementary matrix and vector computations.

Usage
=====

It is a C++ class for symmetric matrix diagonalization, inversion, and principal component analysis (PCA). To use it, define an instance of the CMatrix class, initialize the matrix, call the public functions, and finally free the matrix. For example, for PCA:

    CMatrix theMat;            // define CMatrix instance
    float** C;                 // define n*n matrix
    C = theMat.allocMat( n );
    // fill C (e.g., the covariance matrix computed from the data)
    float *phi, *lambda;       // eigenvectors and eigenvalues
    int vecNum;                // number of eigenvectors (<= n)
    phi = new float [n * vecNum];
    lambda = new float [vecNum];
    theMat.PCA( C, n, phi, lambda, vecNum );
    delete [] phi;
    delete [] lambda;
    theMat.freeMat( C, n );

The matrix diagonalization function can also be applied to singular value decomposition (SVD), Fisher linear discriminant analysis (FLDA), and kernel PCA (KPCA) if the symmetric matrix is formed appropriately. For data of very high dimensionality n, computing an n x n matrix is very expensive on a personal computer; but if the number m of samples (vectors) is smaller than the dimensionality, the problem can be converted to computing an m x m matrix. Readers are referred to the KPCA paper for how to form the m x m matrix:

B. Schölkopf, A. Smola, K.-R. Müller. Nonlinear component analysis as a kernel eigenvalue problem. Neural Computation, 10(5): 1299-1319, 1998.

Example
=======

Refer to the `example' directory for a simple demonstration.