What I have done

1. Update Firefox add-ons
  Tools -> Add-ons -> Find Updates; search the Mozilla site for suitable add-ons to download.
2. Add a bookmark to the panel's "Places" menu
  Use the file browser's Bookmarks -> Add Bookmark.
3. Create an alias for ls
  Open the ~/.bashrc file, find the relevant lines, and add the following line:
  $ alias ll='ls -al'
4. Add "Open Terminal" to the right-click menu
  $ sudo apt-get install nautilus-open-terminal
5. Disable the terminal bell and the system beep
  In a terminal: Edit -> Current Profile -> General, disable the terminal bell.
  System -> Preferences -> Sound: turn off the system beep.
6. Remove the document viewer that ships with Ubuntu 8.04, replacing evince with Foxit Reader
  $ sudo apt-get remove evince
7. Install gcc/g++/make, gdb, etc.
  $ sudo apt-get install build-essential
8. Install rar and unrar
  $ sudo apt-get install rar unrar
9. bash's configuration file is $HOME/.bashrc (or .bash_profile / .bash_login);
  vim's configuration file is $HOME/.vimrc
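As an illustration of what goes into these files, a minimal ~/.bashrc fragment might look like the following (the alias and the chosen editor are examples, not part of the stock Ubuntu file):

```shell
# ~/.bashrc: read by interactive non-login bash shells
export EDITOR=vim          # default editor for programs that ask for one
alias ll='ls -al'          # long listing, including hidden files
alias rm='rm -i'           # prompt before every removal
```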
10. Check swap partition usage
  $ free -m
11. Configure vim
  $ sudo apt-get install vim
  //Downloaded a fairly complete configuration package, vimrc_easwy.zip, which contains a .vimrc file and a .vim folder; .vimrc is vim's configuration file and .vim is where vim plugins are installed.
  $ cp Resources/ubuntu/vim/vimrc_easwy.zip .    //Copy the downloaded vimrc_easwy.zip to the home folder ("." here means the home folder).
  $ unzip vimrc_easwy.zip                //Unzip vimrc_easwy.zip; this puts the .vimrc file and the .vim folder into the home folder.
  //The .vimrc above is only a general vim configuration without better support for C; for that, install the c.vim plugin (downloaded as cvim.zip).
  $ cp Resources/ubuntu/cvim.zip .vim        //Copy the downloaded cvim.zip into the ~/.vim folder to unpack it there.
  $ cd .vim
  $ unzip cvim.zip                        //After unzipping, the files in cvim.zip land in the appropriate folders under .vim.
  //vim is now set up reasonably well; to turn it into an even better IDE, keep refining the configuration.
  //Download the vim Chinese manual and install the documentation following the INSTALL instructions inside it. Learn to use vim's help.
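Independent of ready-made packages like vimrc_easwy, a hand-written ~/.vimrc covering the basics for C editing might look like this (a minimal sketch; the option choices are illustrative, not taken from the easwy package):

```vim
" ~/.vimrc: minimal settings for C editing
syntax on               " enable syntax highlighting
set number              " show line numbers
set tabstop=4           " a tab displays as 4 columns
set shiftwidth=4        " indent/outdent by 4 columns
set expandtab           " insert spaces instead of tab characters
set cindent             " C-style automatic indentation
set hlsearch incsearch  " highlight matches; search as you type
```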
12. Install man pages for the C standard library
  $ sudo apt-get install manpages
  $ sudo apt-get install manpages-dev
13. Update Firefox's Flash Player plugin
  //Download the latest install_flash_player_10_linux.tar.gz; unpacking it produces libflashplayer.so.
  //Copy libflashplayer.so into the /usr/lib/adobe-flashplugin directory:
  $ sudo cp libflashplayer.so /usr/lib/adobe-flashplugin/
  //Go to the /etc/alternatives directory and create a symbolic link to /usr/lib/adobe-flashplugin/libflashplayer.so:
  $ cd /etc/alternatives
  $ sudo ln -s /usr/lib/adobe-flashplugin/libflashplayer.so firefox-flashplugin
  //Go to the /usr/lib/firefox/plugins directory and create a symbolic link to /etc/alternatives/firefox-flashplugin; if a flashplugin-alternative.so link already exists, redefine it:
  $ cd /usr/lib/firefox/plugins
  $ sudo ln -s /etc/alternatives/firefox-flashplugin flashplugin-alternative.so
14. Install yacc and flex
  $ sudo apt-get install flex bison
15. Use the Bitstream Charter font

16. Install a CHM reader
  $ sudo apt-get install xchm

During training, Dropout layers randomly drop some of the neurons in the network, which helps prevent overfitting and improves generalization. At prediction time, Keras normally deactivates Dropout so that the model produces a deterministic output. Monte Carlo Dropout turns this around: Dropout is kept active during prediction, and the model is run many times so that each run uses a different random Dropout mask. The resulting distribution of predictions can then be used to estimate the uncertainty of the model's output. In TensorFlow, one way to achieve this is to define a custom Dropout layer whose `call()` method forces training mode, so that the layer applies Dropout even at inference, and to build a second model that is identical to the original except that its Dropout layers are replaced by this custom layer.

Here is an example of how to implement Monte Carlo Dropout in TensorFlow:

```python
import tensorflow as tf

# Custom Dropout layer that stays active at prediction time:
# forcing training=True means a fresh random mask is drawn on every call.
class MonteCarloDropout(tf.keras.layers.Dropout):
    def call(self, inputs, training=None):
        return super().call(inputs, training=True)

# Original model with standard Dropout layers
model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D((2, 2)),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(10)
])

# Identical architecture, with the Dropout layers replaced
mc_model = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),
    tf.keras.layers.MaxPooling2D((2, 2)),
    MonteCarloDropout(0.2),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation='relu'),
    MonteCarloDropout(0.5),
    tf.keras.layers.Dense(10)
])

# Train the original model
model.compile(optimizer='adam',
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
model.fit(train_images, train_labels, epochs=10)

# Copy the trained weights into the Monte Carlo model
mc_model.set_weights(model.get_weights())

# Run the Monte Carlo model many times; each pass draws a different Dropout mask
predictions = tf.stack([mc_model(test_images) for _ in range(100)])
mean_prediction = tf.math.reduce_mean(predictions, axis=0)
var_prediction = tf.math.reduce_variance(predictions, axis=0)
```

In this example, the custom layer `MonteCarloDropout` overrides `call()` so that Dropout remains active even when the model is used for prediction. The modified model `mc_model` has the same architecture as the original, but with the Dropout layers replaced by `MonteCarloDropout` layers; after training the original model with `fit()`, its weights are copied into `mc_model` with `set_weights()`.
To make predictions with Monte Carlo Dropout, we run `mc_model` multiple times, each run applying a different random Dropout mask. We then stack the predictions into a tensor and compute the mean and variance across the runs. The mean prediction represents the estimated class scores, while the variance represents the uncertainty of the predictions.